PMPI_Comm_size: Invalid communicator, error
Date: 2020/03/26 22:08
Name: Garret Wong   <garretwong1@gmail.com>


Dear all:

I successfully installed OpenMX 3.9 with the following makefile:

"""
LBSROOT=/opt/intel/mkl
FFTROOT=/public/username/software/fftw-3.3.8
openmp_flag=-openmp
fortran_lib=-lifcore

CC = mpicc -O3 -openmp -I/$(FFTROOT)/include -I/$(LBSROOT)/include
FC = mpiifort -O3 -I/$(LBSROOT)/include
LIB = -L/$(FFTROOT)/lib -lfftw3 -L/$(LBSROOT)/lib/intel64 -lmkl_scalapack_lp64 -lmkl_sequential -lmkl_blacs_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lpthread -lifcore

"""

But I encountered the following error when I ran the test command in the openmx3.9/work directory:

"""
$ mpirun -np 1 openmx Methane.dat > met.std
Fatal error in PMPI_Comm_size: Invalid communicator, error stack:
PMPI_Comm_size(124): MPI_Comm_size(comm=0x5b, size=0x154338c) failed
PMPI_Comm_size(78).: Invalid communicator
"""

Could you help me solve this problem?
Many thanks.

GW

Re: PMPI_Comm_size: Invalid communicator, error ( No.1 )
Date: 2020/03/26 22:32
Name: Garret Wong  <garretwong1@gmail.com>

Dear all:

Some output appeared in met.std before the error occurred.
Maybe it helps in solving the problem:

"""


The number of threads in each node for OpenMP parallelization is 1.


*******************************************************
*******************************************************
Welcome to OpenMX Ver. 3.9
Copyright (C), 2002-2019, T. Ozaki
OpenMX comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to
redistribute it under the constitution of the GNU-GPL.
*******************************************************
*******************************************************



<Input_std> Your input file was normally read.
<Input_std> The system includes 2 species and 5 atoms.

*******************************************************
PAO and VPS
*******************************************************

<SetPara_DFT> PAOs of species H were normally found.
<SetPara_DFT> PAOs of species C were normally found.
<SetPara_DFT> VPSs of species H were normally found.
H_PBE19.vps is j-dependent.
In case of scf.SpinOrbit.Coupling=off,
j-dependent pseudo potentials are averaged by j-degeneracy,
which corresponds to a scalar relativistic treatment.
<SetPara_DFT> VPSs of species C were normally found.
C_PBE19.vps is j-dependent.
In case of scf.SpinOrbit.Coupling=off,
j-dependent pseudo potentials are averaged by j-degeneracy,
which corresponds to a scalar relativistic treatment.

*******************************************************
Fourier transform of PAO and projectors of VNL
*******************************************************

<FT_PAO> Fourier transform of pseudo atomic orbitals
<FT_NLP> Fourier transform of non-local projectors
<FT_ProExpn_VNA> Fourier transform of VNA separable projectors
<FT_VNA> Fourier transform of VNA potentials
<FT_ProductPAO> Fourier transform of product of PAOs

*******************************************************
Allocation of atoms to proccesors at MD_iter= 1
*******************************************************

proc = 0 # of atoms= 5 estimated weight= 5.00000




*******************************************************
Analysis of neighbors and setting of grids
*******************************************************

TFNAN= 20 Average FNAN= 4.00000
TSNAN= 0 Average SNAN= 0.00000
<truncation> CpyCell= 1 ct_AN= 1 FNAN SNAN 4 0
<truncation> CpyCell= 1 ct_AN= 2 FNAN SNAN 4 0
<truncation> CpyCell= 1 ct_AN= 3 FNAN SNAN 4 0
<truncation> CpyCell= 1 ct_AN= 4 FNAN SNAN 4 0
<truncation> CpyCell= 1 ct_AN= 5 FNAN SNAN 4 0
TFNAN= 20 Average FNAN= 4.00000
TSNAN= 0 Average SNAN= 0.00000
<truncation> CpyCell= 2 ct_AN= 1 FNAN SNAN 4 0
<truncation> CpyCell= 2 ct_AN= 2 FNAN SNAN 4 0
<truncation> CpyCell= 2 ct_AN= 3 FNAN SNAN 4 0
<truncation> CpyCell= 2 ct_AN= 4 FNAN SNAN 4 0
<truncation> CpyCell= 2 ct_AN= 5 FNAN SNAN 4 0
TFNAN= 20 Average FNAN= 4.00000
TSNAN= 0 Average SNAN= 0.00000
<truncation> CpyCell= 2 ct_AN= 1 FNAN SNAN 4 0
<truncation> CpyCell= 2 ct_AN= 2 FNAN SNAN 4 0
<truncation> CpyCell= 2 ct_AN= 3 FNAN SNAN 4 0
<truncation> CpyCell= 2 ct_AN= 4 FNAN SNAN 4 0
<truncation> CpyCell= 2 ct_AN= 5 FNAN SNAN 4 0
<Check_System> The system is molecule.
lattice vectors (bohr)
A = 18.897259885789, 0.000000000000, 0.000000000000
B = 0.000000000000, 18.897259885789, 0.000000000000
C = 0.000000000000, 0.000000000000, 18.897259885789
reciprocal lattice vectors (bohr^-1)
RA = 0.332491871581, 0.000000000000, 0.000000000000
RB = 0.000000000000, 0.332491871581, 0.000000000000
RC = 0.000000000000, 0.000000000000, 0.332491871581
Grid_Origin -9.300995100037 -9.300995100037 -9.300995100037
Cell_Volume = 6748.333037104149 (Bohr^3)
GridVol = 0.025742847584 (Bohr^3)
Grid_Origin -9.300995100037 -9.300995100037 -9.300995100037
Cell_Volume = 6748.333037104149 (Bohr^3)
GridVol = 0.025742847584 (Bohr^3)
<UCell_Box> Info. of cutoff energy and num. of grids
lattice vectors (bohr)
A = 18.897259885789, 0.000000000000, 0.000000000000
B = 0.000000000000, 18.897259885789, 0.000000000000
C = 0.000000000000, 0.000000000000, 18.897259885789
reciprocal lattice vectors (bohr^-1)
RA = 0.332491871581, 0.000000000000, 0.000000000000
RB = 0.000000000000, 0.332491871581, 0.000000000000
RC = 0.000000000000, 0.000000000000, 0.332491871581
Required cutoff energy (Ryd) for 3D-grids = 120.0000
Used cutoff energy (Ryd) for 3D-grids = 113.2041, 113.2041, 113.2041
Num. of grids of a-, b-, and c-axes = 64, 64, 64
Grid_Origin -9.300995100037 -9.300995100037 -9.300995100037
Cell_Volume = 6748.333037104149 (Bohr^3)
GridVol = 0.025742847584 (Bohr^3)
Cell vectors (bohr) of the grid cell (gtv)
gtv_a = 0.295269685715, 0.000000000000, 0.000000000000
gtv_b = 0.000000000000, 0.295269685715, 0.000000000000
gtv_c = 0.000000000000, 0.000000000000, 0.295269685715
|gtv_a| = 0.295269685715
|gtv_b| = 0.295269685715
|gtv_c| = 0.295269685715
Num. of grids overlapping with atom 1 = 20336
Num. of grids overlapping with atom 2 = 20346
Num. of grids overlapping with atom 3 = 20346
Num. of grids overlapping with atom 4 = 20346
Num. of grids overlapping with atom 5 = 20346

"""
Re: PMPI_Comm_size: Invalid communicator, error ( No.2 )
Date: 2020/03/27 01:23
Name: Naoya Yamaguchi

Hi,

Your problem looks similar to the one described at
https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/611013

I guess this may be solved by using appropriate makefile and compiler settings.
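For example, you can check which MPI implementation the binary actually uses (a rough check; the exact commands depend on your environment):

"""
$ mpirun --version          # which MPI launcher is used
$ ldd openmx | grep -i mpi  # which MPI library openmx is linked against
"""

The BLACS library in MKL is MPI-specific, so linking a BLACS built for a different MPI than the one you run with can produce such an invalid-communicator error.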

Regards,
Naoya Yamaguchi

Re: PMPI_Comm_size: Invalid communicator, error ( No.3 )
Date: 2020/03/27 11:40
Name: Garret Wong  <garretwong1@gmail.com>

Thanks a lot.

I have solved the problem by changing -lmkl_blacs_lp64 to -lmkl_blacs_intelmpi_lp64 in the LIB line.
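For reference, the working LIB line now reads (same paths as before; only the BLACS library is changed):

"""
LIB = -L/$(FFTROOT)/lib -lfftw3 -L/$(LBSROOT)/lib/intel64 -lmkl_scalapack_lp64 -lmkl_sequential -lmkl_blacs_intelmpi_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lpthread -lifcore
"""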

Regards,
GW