large systems
Date: 2006/11/09 08:13
Name: JessK


Hello,

I tried to perform a relaxation of a large bulk system: the cell is 8x8x32 Angstrom, 160 atoms, with a 250 Ry cutoff. I tried different O(N) methods, but all of them crashed before the first MD step with an uninformative error such as 'segmentation fault'. I tried different machines (Intel EM64T, Cray XT3), but no luck. I think I have enough memory for this job. Any ideas?
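
For reference, the relevant keywords in my input look roughly like this
(only a sketch: the species, coordinate, and k-point blocks are omitted,
and the solver line is what I varied between the O(N) options):

  Atoms.Number            160
  Atoms.UnitVectors.Unit  Ang
  <Atoms.UnitVectors
    8.0   0.0   0.0
    0.0   8.0   0.0
    0.0   0.0  32.0
  Atoms.UnitVectors>
  scf.energycutoff        250     # Ry
  scf.EigenvalueSolver    DC      # also tried the other O(N) solvers
  MD.Type                 Opt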

thanks,
JK

Re: large systems ( No.1 )
Date: 2006/11/09 11:04
Name: T.Ozaki

Hi,

Yours is probably a memory problem.
For a system of 160 atoms, the conventional scheme is still faster
than the O(N) methods.
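
In the input file this corresponds to choosing the standard diagonalization
via the scf.EigenvalueSolver keyword, for example (a minimal sketch):

  scf.EigenvalueSolver   Band     # or Cluster for an isolated system

rather than DC or another O(N) option.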

Regards,

TO
Re: large systems ( No.2 )
Date: 2006/11/10 07:20
Name: JessK


Hi,

Indeed, I tried the 'band' scheme first, and the code used all of my memory. So I thought O(N) could help.

thanks,
JK
Re: large systems ( No.3 )
Date: 2007/01/13 01:27
Name: jessK


Dr. Ozaki,

Yes, I realized that it is a memory issue. On a cluster with 8 GB per node I was able to run it (slow, but it works). But on a typical PC cluster with a moderate amount of memory (~4 GB per node) it is a big challenge, because the memory is not distributed over the nodes (so increasing the number of CPUs doesn't help). In fact, calculations of the same system with a standard plane-wave code require much less memory (~2 GB per node). This situation is blocking my attempts to apply OpenMX to my research. Is it possible to trace why the code needs such a huge amount of memory?

thanks,
JK
