How to reduce memory usage for very large systems?
Date: 2022/01/24 15:06
Name: Chong Wang   <ch-wang@outlook.com>

Hi OpenMX developers and users,

I am trying to calculate the band structure of a very large system using conventional methods [not O(N)]. The system contains ~4000 atoms (~100,000 orbitals including spin). Currently, my calculation always fails due to insufficient memory. My cluster contains several nodes; each node has 128 physical cores and 256 GB of memory. Right now I am running test jobs with the Gamma point only on one node.

As far as I know, OpenMP parallelization reduces memory usage. However, even with a single MPI process and 128 threads (mpirun -np 1 $HOME/apps/openmx-3.9.9/openmx in.dat -nt 128), the job still fails due to insufficient memory. Any suggestions?

Thanks,
Chong Wang

Re: How to reduce memory usage for very large systems? ( No.1 )
Date: 2022/01/24 18:15
Name: Naoya Yamaguchi

Hi,

To decrease the RAM usage, you can use a smaller basis set by reducing the number of PAO orbitals per atom.
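For example, the basis is set per species in the input file. The species lines below are illustrative (the cutoff radii and pseudopotential names are assumptions, not taken from the original post); shrinking C from s2p2d1 to s2p1 cuts the orbital count per carbon atom from 13 to 5:

```
# Illustrative only; adapt element names, cutoffs, and VPS files
# to your own system.
<Definition.of.Atomic.Species
  C   C6.0-s2p2d1   C_PBE19    # 13 orbitals per atom
  H   H6.0-s2p1     H_PBE19    #  5 orbitals per atom
Definition.of.Atomic.Species>
```

A smaller basis reduces both the matrix dimension and the accuracy, so convergence of the target properties should be checked on a smaller system first.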

Also, you can roughly estimate the usage in advance from a calculation on a smaller system.
For example, if a 10,000-orbital calculation needs 1 GB, a 100,000-orbital calculation may need (100000/10000)^3 = 1000 GB.
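The estimate above can be written as a one-line scaling rule. A minimal sketch, assuming the cubic exponent suggested in the post (the true exponent depends on the solver and on which arrays dominate):

```python
def estimate_memory_gb(mem_small_gb: float, n_small: int, n_large: int,
                       exponent: float = 3.0) -> float:
    """Scale a measured memory footprint to a larger orbital count,
    assuming memory grows as (N_large / N_small)**exponent."""
    return mem_small_gb * (n_large / n_small) ** exponent

# 1 GB at 10,000 orbitals -> estimate for 100,000 orbitals
print(estimate_memory_gb(1.0, 10_000, 100_000))  # -> 1000.0
```

Measuring two small systems of different sizes would also let one fit the exponent instead of assuming it.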

Regards,
Naoya Yamaguchi
Re: How to reduce memory usage for very large systems? ( No.2 )
Date: 2022/01/25 05:04
Name: Chong Wang  <ch-wang@outlook.com>

Hi Naoya,

Thanks for the tips.

If a job needs 1000 GB of memory and I use two nodes for the calculation, does each node then need only 500 GB? I suspect not, since each node keeps its own copy of many data structures, but do you have a rough idea of how the memory usage scales with the number of compute nodes?

Best,
Chong
Re: How to reduce memory usage for very large systems? ( No.3 )
Date: 2022/01/25 12:27
Name: Naoya Yamaguchi

Dear Chong,

As far as I know, the current version of OpenMX uses ScaLAPACK, which distributes the matrices over several MPI processes and operates on them, so the upper limit on the problem size should increase as the number of nodes increases.
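A rough sketch of why distribution helps, assuming the dominant cost is a few dense N x N complex matrices (e.g. Hamiltonian and overlap) stored block-cyclically by ScaLAPACK (an assumption; OpenMX allocates additional work arrays on top of this):

```python
def per_process_gb(n_orbitals: int, n_procs: int,
                   n_matrices: int = 2, bytes_per_elem: int = 16) -> float:
    """Memory per MPI process for n_matrices dense N x N
    complex-double matrices split evenly across n_procs."""
    total_bytes = n_matrices * n_orbitals**2 * bytes_per_elem
    return total_bytes / n_procs / 1024**3

# ~100,000 orbitals, two dense matrices, 128 MPI processes:
print(round(per_process_gb(100_000, 128), 1))  # -> 2.3 (GB per process)
```

Doubling the number of MPI processes halves this per-process share, which is why more nodes (and more MPI ranks, rather than one rank with many OpenMP threads) raises the feasible problem size.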

Regards,
Naoya Yamaguchi
Re: How to reduce memory usage for very large systems? ( No.4 )
Date: 2022/01/25 12:29
Name: Naoya Yamaguchi

Dear Chong,

If you want to calculate the band structure, you can use the combination of the O(N) and conventional schemes:
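A hedged sketch of what such an input might look like; the keywords are standard OpenMX ones, but the exact two-step recipe should be taken from the linked manual page:

```
# Step 1 (sketch): converge the charge density with an O(N) solver.
scf.EigenvalueSolver   DC
scf.maxIter            100

# Step 2 (sketch): restart from the converged density and use the
# conventional solver to obtain the band dispersion.
scf.restart            on
scf.EigenvalueSolver   Band
Band.dispersion        on
Band.Nkpath            1
<Band.kpath
  15  0.0 0.0 0.0   0.5 0.0 0.0   G X
Band.kpath>
```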
http://www.openmx-square.org/openmx_man3.9/node88.html

Regards,
Naoya Yamaguchi