Memory & new version
Date: 2011/04/27 21:41
Name: Jan Rusz

Dear Prof. Ozaki and OpenMX team,

thank you very much for releasing such an excellent code to the public. My student and I are applying it in our current research, and we both appreciate it very much.

I would like to ask two questions:

1) We have noticed that OpenMX can be rather demanding on memory (compared to SIESTA, which I believe uses similar methods, at least for conventional diagonalization). We are working with systems of ~3,000-12,000 atoms (Si, C, H) using both conventional diagonalization and the O(N) Krylov subspace method. Which parameters influence the memory demands most significantly? I would suspect the basis size and the energy cutoff. Is there anything else that could substantially influence the memory demands? Is the Krylov method perhaps more memory-demanding than the others?

A side remark: during geometry optimization, the allocated memory seems to increase over the first few MD steps. So sometimes a calculation runs well for a few MD steps and then crashes due to lack of memory. Is that a memory leak, or just the gradual filling of some data structure holding the "history" of the geometry optimization? (The built-in memory leak tester did not discover any suspicious behavior.)

Would it not be useful to many users to implement optional writing of (some of) the large arrays to files, re-reading them when needed?

2) At a recent conference in Berlin I saw your excellent talk on low-order-scaling exact DFT solvers. Do you intend to release the new version to the public? If so, please excuse my impatience, but could you estimate when that will happen? :)

Thank you in advance for your kind response.

Best regards

Jan Rusz

Re: Memory & new version ( No.1 )
Date: 2011/05/04 21:10
Name: T. Ozaki

Hi,

> Which parameters influence the memory demands most significantly? I would suspect
> the basis size and the energy cutoff. Is there anything else that could substantially
> influence the memory demands? Is the Krylov method perhaps more memory-demanding than the others?

As you guessed, the memory consumption depends mostly on the number of basis functions, Nb,
and the cutoff energy, Ecut, scaling as O(Nb) and O(Ecut^(3/2)), respectively. The cutoff radius
of the basis functions, rcut, also strongly affects the memory consumption, as O(rcut^3).
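The combined effect of these scalings can be sketched as follows (a minimal illustration, not part of OpenMX; the reference values `nb0`, `ecut0`, and `rcut0` are arbitrary placeholders):

```python
# Sketch of the scalings quoted above: memory grows as O(Nb) in the
# number of basis functions, O(Ecut^(3/2)) in the cutoff energy, and
# O(rcut^3) in the basis cutoff radius.  Reference values are illustrative.

def memory_factors(nb, ecut, rcut, nb0=1000, ecut0=150.0, rcut0=6.0):
    """Multiplicative memory factors relative to a reference run,
    one factor per parameter, following the quoted scalings."""
    return (nb / nb0,                 # O(Nb)
            (ecut / ecut0) ** 1.5,    # O(Ecut^(3/2))
            (rcut / rcut0) ** 3)      # O(rcut^3)

# Example: doubling rcut alone multiplies the affected arrays by 2^3 = 8.
print(memory_factors(1000, 150.0, 12.0))  # -> (1.0, 1.0, 8.0)
```

The cubic dependence on rcut is often the easiest lever: a modest reduction of the basis cutoff radius can cut memory substantially.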

Though the Krylov subspace method tends to require a large amount of memory, the memory
required on each node can be reduced by MPI parallelization.

In addition, OpenMP/MPI hybrid parallelization is very effective at reducing the memory
consumption, as explained in the manual.
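As a concrete illustration, a hybrid run might be launched as follows (command-line form per the OpenMX manual's hybrid-parallelization section; the process and thread counts are placeholders to be tuned to your machine):

```shell
# Hypothetical hybrid OpenMP/MPI launch: 4 MPI processes, 8 threads each.
# Fewer MPI processes per node means fewer copies of replicated arrays,
# which is what lowers the per-node memory footprint.
mpirun -np 4 openmx input.dat -nt 8
```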

> A side remark: during geometry optimization, the allocated memory seems to increase
> over the first few MD steps. So sometimes a calculation runs well for a few MD steps
> and then crashes due to lack of memory. Is that a memory leak, or just the gradual
> filling of some data structure holding the "history" of the geometry optimization?
> (The built-in memory leak tester did not discover any suspicious behavior.)

In the case of the BFGS, RF, and EF optimizers, the memory requirement increases once
the optimization reaches the step given by MD.Opt.StartDIIS, since these optimizers
require the allocation of several relatively large arrays.
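For reference, the relevant keywords might appear in an input file as below (an illustrative fragment; keyword names follow the OpenMX manual, and the values are placeholders):

```
# Illustrative fragment of an OpenMX input file (values are placeholders).
MD.Type              RF      # RF/EF/BFGS optimizers keep a DIIS-style history
MD.Opt.StartDIIS     5       # the history arrays are allocated from this step
MD.Opt.DIIS.History  3       # number of previous steps retained
```

Raising MD.Opt.StartDIIS postpones that allocation, which shifts the associated memory jump to a later optimization step.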

> At a recent conference in Berlin I saw your excellent talk on low-order-scaling
> exact DFT solvers. Do you intend to release the new version to the public?
> If so, please excuse my impatience, but could you estimate when that will happen? :)

Thank you for your interest in the low-order scaling method. We are planning to
release a new version, which will include this functionality, at the end of the summer.

Regards,

TO