cannot output quasiparticle data by -sdata #35
Comments
Dear @zhangxiaoyu2046, I think that, as you say, the size of your trajectory does not fit into the available RAM and numpy cannot allocate the memory. At https://abelcarreras.github.io/DynaPhoPy/special.html I describe different strategies that you can use to deal with large trajectories.

About the second error I don't know; a segmentation fault is too vague to figure out what is happening. Maybe it is also a memory issue. Did you try to run some of the provided examples (which are much smaller) to make sure that the installation is correct?

The way dynaphopy works is to first read all the data from the trajectory into memory, compute the atomic velocities, and then compute the power spectrum (this is where the memory usage is highest). The -sdata option just tells dynaphopy to store the results in a file; this last part should not add much to the memory usage.
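For example (a minimal sketch, reusing the file names and the flag spellings that appear elsewhere in this thread), the memory-mapping strategy from that page can be combined with the quasiparticle output in a single run:
$ dynaphopy input_file TRAJECTORY --memmap -sdata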
Thanks for your advice. I've tried the mapping-RAM-to-disk (--memmap) approach; the temporary output file is generated, but the run still terminates, apparently again with an out-of-memory error.

I wonder whether the shape of the power spectrum must be converged before doing any subsequent calculations. In my case of a graphene supercell with a LAMMPS potential, the MD supercell needs to be very large if I want the power spectrum to be very smooth, and such a large supercell costs a lot of RAM and causes my OOM error. So my questions are: if I use a smaller supercell and the power spectrum has some spikes, is that power spectrum still usable? How much inaccuracy will this introduce into subsequent calculations? What is a reasonable criterion for a "good" or "acceptable" power spectrum, i.e. how smooth should it be? And other than changing the supercell size, are there any other factors that could influence the shape of the power spectrum? Thanks again for your help.
In principle the size of the supercell doesn't need to be so large. As long as you can capture most of the atomic correlations it should be fine. I would say that the supercell for the MD can be the same size as the supercell you would use in a lattice dynamics calculation to compute the force constants.

To improve the smoothness of the power spectrum you can assume ergodicity and increase the length in time of your simulation, instead of its size, to get better sampling. As you increase the sampling your spectrum will become smoother. How much? That depends on the system; as a rule of thumb, the more symmetric the crystal, the less sampling you need. You will need to check the convergence with respect to the number of time steps. You can also use the Maximum Entropy method to compute the power spectrum; this gives smoother spectra, but it may need quite a large sampling to work well.
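As an illustration of such a convergence check, here is a minimal numpy sketch (this is a generic total velocity power spectrum, not dynaphopy's own mode-projected spectrum); it assumes the velocities have already been extracted into an array of shape (n_steps, n_atoms, 3), and the file name velocities.npy and the time step value are placeholders:

    import numpy as np

    # Hypothetical inputs: velocities from the MD run, shape (n_steps, n_atoms, 3),
    # and the MD time step. Both are placeholders, not produced by dynaphopy here.
    v = np.load("velocities.npy")
    dt = 0.001

    def total_power_spectrum(vel, dt):
        # Sum of |FFT of velocity|^2 over atoms and Cartesian components,
        # proportional to the Fourier transform of the velocity autocorrelation.
        n_steps = vel.shape[0]
        vw = np.fft.rfft(vel, axis=0)
        spectrum = (np.abs(vw) ** 2).sum(axis=(1, 2)) / n_steps
        return np.fft.rfftfreq(n_steps, d=dt), spectrum

    # Compare the first half of the run with the full run; when peak positions
    # and widths stop changing, the time sampling is converged.
    f_half, s_half = total_power_spectrum(v[: len(v) // 2], dt)
    f_full, s_full = total_power_spectrum(v, dt)

If the two spectra still differ noticeably around the peaks, extend the MD run in time rather than enlarging the supercell.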
Hi everyone,
I want to obtain the lifetime of each phonon mode. When I tried to generate the quasiparticle phonon data by running:
$ dynaphopy input_file TRAJECTORY -sdata
there was an error: it seems the numpy array is too large to be created. My structure has 5000 atoms, and the velocity.lammpstrj file is about 14 GB.
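As a rough back-of-the-envelope estimate of why the allocation fails (the 60 bytes per dump line and the float64 velocity storage below are assumptions, not measured from the actual file):

    n_atoms = 5000
    file_bytes = 14e9                    # size of velocity.lammpstrj
    bytes_per_line = 60                  # guessed average length of a LAMMPS dump line
    n_steps = file_bytes / (n_atoms * bytes_per_line)   # roughly 4.7e4 frames
    velocities_gb = n_steps * n_atoms * 3 * 8 / 1e9     # float64 components, ~5.6 GB
    print(round(n_steps), round(velocities_gb, 1))

and the intermediate arrays built on top of the velocities during the power-spectrum step can push the peak memory usage well above that.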
When I switched to a fat compute node with a very large amount of RAM, there was still no output after 3 days; the process seemed stuck somewhere in the dynaphopy code.
When I tested a smaller structure with fewer atoms (162 atoms), there was another error (a segmentation fault).
I don't know where I went wrong. Could someone explain a bit about the workflow for generating the quasiparticle info? Is it necessary to perform a peak analysis with $ dynaphopy input_file TRAJECTORY -pa before running the -sdata command?
Many thanks!