Too many cube files output by MPI
Date: 2023/04/13 17:12
Name: Lingzhi Zhang   <zy38431037@gmail.com>

Dear OpenMX developer and users,

I'm using openmx to generate STM images.

I expected it to give a single cube file named *.pden.cube.
But it actually gives many cube files named *.pden.cube_a, where a is a number.
I find that the number of files matches the number of MPI processes I used:
if I run with 100 MPI processes, it gives 100 cube files.
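For reference, a quick way to count the fragments (rucl3-GR-ang_xy is the file prefix used in my script below):
ls rucl3-GR-ang_xy.pden.cube_* | wc -l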

Strangely, it does not always produce so many cube files.
In some cases it gives only one cube file, although the only difference between the cases is the energy window I set.

I wonder how I should solve this. I have tried add_gcube, but it failed with the following errors:
[k028230@ohtaka1 -0.5]$ add_gcube ang_xy.pden.cube_000 xy.pden.cube_001 ang_xy.pden.cube
Found a difference in the number of grid on a-axis
Found a difference in the number of grid on b-axis
Found a difference in the number of grid on c-axis
Found a difference in the number of atoms
Found a difference in x-coordinate of the origin
Found a difference in y-coordinate of the origin
Found a difference in z-coordinate of the origin
Found a difference in the vector of a-axis
Found a difference in the vector of b-axis
Found a difference in the vector of c-axis
Segmentation fault (core dumped)
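For reference, the headers of the fragments (the atom count, grid origin, grid counts and axis vectors are all in the first six lines of a cube file) can be compared with, e.g.,
head -n 6 rucl3-GR-ang_xy.pden.cube_000 rucl3-GR-ang_xy.pden.cube_001
where rucl3-GR-ang_xy is the prefix used in my script below.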

My calculation is conducted on ISSP ohtaka.

My dat file (stm.dat) is as below:



<Hubbard.U.values  # eV
  Ru 1s 0.0 2s 0.0 3s 0.0 1p 0.0 2p 0.0 1d 1.5 2d 0.0 1f 0.0
  Cl 1s 0.0 2s 0.0 1p 0.0 2p 0.0 1d 0.0 1f 0.0
  C  1s 0.0 2s 0.0 1p 0.0 2p 0.0 1d 0.0
Hubbard.U.values>

#
# SCF or Electronic System
#
scf.XcType                  GGA-PBE    # LDA|LSDA-CA|LSDA-PW|GGA-PBE
scf.SpinPolarization        NC          # On|Off|NC
scf.SpinOrbit.Coupling      On          # On|Off, default=off     
scf.ElectronicTemperature  0          # default=300 (K)
scf.energycutoff            1200        # default=150 (Ry)       
scf.maxIter                2400        # default=40
scf.EigenvalueSolver        Band        # DC|GDC|Cluster|Band #Check
scf.Kgrid                  6 6 1      # means n1 x n2 x n3
scf.Generation.Kpoint      regular    # regular|MP
scf.Mixing.Type            rmm-diish  # Simple|Rmm-Diis|Gr-Pulay|Kerker|Rmm-Diisk
scf.Init.Mixing.Weight      0.01        # default=0.30
scf.Min.Mixing.Weight      0.0000000001# default=0.001
scf.Max.Mixing.Weight      0.400      # default=0.40
scf.Mixing.History          50          # default=5
scf.Mixing.StartPulay      50          # default=6
scf.Mixing.EveryPulay      1          # default=6
scf.criterion              1.0e-8      # default=1.0e-6 (Hartree)
scf.lapack.dste            dstevx      # dstevx|dstedc|dstegr,default=dstevx



#
# Band dispersion
#
Band.dispersion            on          # on|off, default=off
Band.Nkpath                3
<Band.kpath
40  0.0000000000  0.0000000000  0.0000000000    0.5000000000  0.0000000000  0.0000000000  G  M
40  0.5000000000  0.0000000000  0.0000000000    0.3333333333  0.3333333333  0.0000000000  M  K
40  0.3333333333  0.3333333333  0.0000000000    0.0000000000  0.0000000000  0.0000000000  K  G
Band.kpath>


#
# DOS and PDOS
#
Dos.fileout                on          # on|off, default=off
Dos.Erange                -10.0 10.0    # default = -20 20
Dos.Kgrid                  20 20 1      # default = Kgrid1 Kgrid2 Kgrid3
FermiSurfer.fileout        on          # default = off, on/off

#
# STM
#
partial.charge                on      # on|off, default=off
partial.charge.energy.window  2.0      # in eV


#
# DFT+U
#
scf.Hubbard.U                on  # on|off, default=off
scf.DFTU.Type                1  # 1:Simplified(Dudarev)|2:General, default=1
scf.Hubbard.Occupation        dual

#
# vdW
#
scf.dftD                    on
version.dftD                  2
DFTD3.damp                  zero
DFTD.IntDirection          1 1 1

#
# Constraint DFT for NC Spin
#
scf.Constraint.NC.Spin      on      # on|on2|off, default=off
scf.Constraint.NC.Spin.v    4.0    # default=0.0(eV)

#
# SCF restart
#
scf.restart    on


My job script is as below:
#!/bin/sh
#SBATCH -p F4cpu
#SBATCH -N 4
#SBATCH -n 128
#SBATCH -c 4
#SBATCH -J STM-zz-mono

#SBATCH --mail-type=BEGIN
#SBATCH --mail-type=END
#SBATCH --mail-user=lingzhi@g.ecc.u-tokyo.ac.jp

set -e
source /home/issp/materiapps/intel/openmx/openmxvars-3.9.9-1.sh

energy_list=( -1.5 -1.0 -0.9 -0.5 0.5 1.0 1.5 2.0 2.5 2.8 3.0)

for energy in ${energy_list[@]}
do
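# replace the 2nd field of line 175 of stm.dat (presumably the partial.charge.energy.window value) with the current energy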
awk -v awk_energy=$energy 'FNR==175  {$2=awk_energy} {print $0}' stm.dat > stm.$energy.dat
srun  openmx  stm.$energy.dat -nt $OMP_NUM_THREADS > stm.$energy.std
echo $energy

prefix=rucl3-GR-ang_xy
mkdir pcharge/$energy
mv $prefix.pden.cube* pcharge/$energy
done
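(By the way, I assume OMP_NUM_THREADS is set by the sourced openmxvars script; if it is not, something like
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-4}
before the srun line would pin it to the 4 cores per task requested with -c 4.)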


With best regards
Lingzhi Zhang

Re: Too many cube files output by MPI ( No.1 )
Date: 2023/04/14 18:50
Name: Naoya Yamaguchi

Dear Lingzhi Zhang,

>If I run with 100 MPI processes, it gives 100 cube files.
>My calculation is conducted on ISSP ohtaka.

This issue may come from the recent update of the ISSP System B; it seems that rebuilding of the preinstalled software, including `openmx`, with the new modules has not yet been completed.
cf. https://mdcl.issp.u-tokyo.ac.jp/scc/news/5242 (in Japanese).

You may avoid it by refraining from using oneAPI for a while; a temporary workaround I found is as follows.

1. Download `aocl-linux-gcc-4.0.tar.gz` from https://www.amd.com/en/developer/aocl.html

2. Upload `aocl-linux-gcc-4.0.tar.gz` to System B and put it in your home directory

3. Extract `aocl-linux-gcc-4.0.tar.gz`
```
tar xzvf aocl-linux-gcc-4.0.tar.gz
```

4. Enter `~/aocl-linux-gcc-4.0`
```
cd ~/aocl-linux-gcc-4.0
```

5. Run `install.sh`
```
./install.sh
```

6. Press the enter key to set `LP64` as the default when the following message appears
```
-----------------------------------------------------------------------------------------------------------------------------
Do you want to set LP64 or ILP64 libraries as default libraries? (Enter 1 for LP64 / 2 for ILP64 / Default option: 1)
-----------------------------------------------------------------------------------------------------------------------------
```

7. Run the following command once
```
echo 'export AOCL_ROOT=${HOME}/amd/aocl/4.0; module purge; module load gcc/10.1.0 openmpi/4.0.4-gcc-10.1.0; export LD_LIBRARY_PATH="${AOCL_ROOT}/lib:$LD_LIBRARY_PATH"' >> ~/.bashrc
```

8. Then run
```
source ~/.bashrc
```

9. Enter `~/openmx3.9/source`
```
cd ~/openmx3.9/source
```

10. Download the latest patch
```
wget https://www.openmx-square.org/bugfixed/21Oct17/patch3.9.9.tar.gz
```

11. Extract the patch
```
tar xzvf patch3.9.9.tar.gz
```

12. Modify the `makefile` with the following commands
```
sed -i 's/^MKLROOT/#MKLROOT/' makefile
sed -i 's/ -lm / ${LIBM} /g' makefile
```

13. Set the following in the `makefile`
```
CC  = mpicc -Ofast -ffast-math -march=znver2 -mfma -fomit-frame-pointer -fopenmp -fcommon -I${AOCL_ROOT}/include
FC  = mpif90 -Ofast -ffast-math -march=znver2 -mfma -fomit-frame-pointer -fopenmp -fallow-argument-mismatch
LIB = -L${AOCL_ROOT}/lib -lscalapack -lflame -lblis-mt -lfftw3_mpi -lfftw3_omp -lfftw3 -lgomp -lgfortran -lmpi_mpifh
LIBM= -L${AOCL_ROOT}/lib -lamdlibm -lm
```

14. Rebuild `openmx`
```
make clean && make all
```

15. In your job script, replace `source /home/issp/materiapps/intel/openmx/openmxvars-3.9.9-1.sh` with the following commands before submitting the job
```
module purge
module load gcc/10.1.0 openmpi/4.0.4-gcc-10.1.0
export LD_LIBRARY_PATH="${HOME}/amd/aocl/4.0/lib:$LD_LIBRARY_PATH"
```
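For example, with the job script you posted, the environment setup would become the following (a sketch; the path `~/openmx3.9/source/openmx` below is an assumption based on where the binary is built in step 14):
```
set -e
module purge
module load gcc/10.1.0 openmpi/4.0.4-gcc-10.1.0
export LD_LIBRARY_PATH="${HOME}/amd/aocl/4.0/lib:$LD_LIBRARY_PATH"
```
and the `srun` line would call the rebuilt binary explicitly, e.g. `srun ~/openmx3.9/source/openmx stm.$energy.dat -nt $OMP_NUM_THREADS > stm.$energy.std`.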

Regards,
Naoya Yamaguchi
Re: Too many cube files output by MPI ( No.2 )
Date: 2023/04/14 23:35
Name: Lingzhi Zhang  <zy38431037@gmail.com>

Dear Yamaguchi-san,

Thank you very much for your help.
I'm very grateful for your detailed explanation.

I will try it.

With best regards
Lingzhi Zhang