Size of molecular systems for NTOs analysis in TPA

Dear All,

I am interested in using the natural transition orbital (NTO) analysis of response transition density matrices to gain insight into TPA in FP chromophores, as well as in other molecular systems. In the original work, the largest model studied is p-nitroaniline.

Could you give me an idea of the upper limit on the size of molecular systems that can be investigated with the NTO analysis? And how does the computation time scale with the number of final excited states considered?

Let's say that I have 16-24 processors available on one node and up to 120 GB of memory, but I would really prefer to stay within 55 GB.

Also, unrelated to the topic: do you plan to add an analogous NTO analysis for TDDFT? As far as I can see, it is currently available only in combination with the EOM-CCSD method.

Hi Dawid,
NTO analysis is definitely available with TDDFT; the pertinent $rem variable is CIS_AMPL_ANAL = TRUE. The analysis itself is a simple change of basis, so the overwhelming bottleneck will be the calculation of the excited states themselves; NTO generation is a small post-processing step.
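For reference, a minimal TDDFT input along these lines might look like the sketch below (the functional, basis, and number of roots are placeholder choices, not taken from this thread; CIS_AMPL_ANAL = TRUE is the flag named above):

```
$rem
METHOD         = wb97x-d   ! placeholder functional; any TDDFT functional works
BASIS          = 6-31+G*
CIS_N_ROOTS    = 4         ! number of excited states to compute
CIS_AMPL_ANAL  = TRUE      ! requests NTO/amplitude analysis of the states
$end
```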

-jmh

Hi,
I don't think we're talking about the same thing. I am referring to the NTO analysis that gives insight into the virtual states involved in the TPA process, as implemented here:
https://pubs.acs.org/doi/abs/10.1021/acs.jpclett.7b01422

Ah, we are indeed not talking about the same thing, but the answer is the same: it is a small post-processing step, and the cost should be dominated by the EOM-CCSD calculation. Regarding your question "can we get this for TDDFT": two-photon properties within TDDFT require quadratic response theory (as opposed to the linear response needed for excitation energies), and I am not aware of anyone working on this in Q-Chem, so it is not coming in the immediate future.
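For context (my own summary of the standard theory, not from the posts above): the two-photon transition moment is a second-order quantity, which is why it lies beyond linear response. In sum-over-states form, for photon frequencies with $\omega_1 + \omega_2 = \Omega_f$,

$$
M_{ab}^{0\to f} = \sum_n \left[
\frac{\langle f|\mu_a|n\rangle\langle n|\mu_b|0\rangle}{\Omega_n - \omega_1}
+ \frac{\langle f|\mu_b|n\rangle\langle n|\mu_a|0\rangle}{\Omega_n - \omega_2}
\right].
$$

Response-theory formulations (quadratic response in TDDFT, or the EOM-CCSD response states discussed here) evaluate this sum implicitly by solving auxiliary response equations rather than summing over states explicitly.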

Dawid, our EOM-CC code for 2PA cross sections and for response NTOs can handle calculations with about 500 basis functions (no symmetry). With symmetry, this can be pushed further. Special care is needed to set up these calculations properly: I recommend using single precision and the XM backend for the machines you have. For large calculations, using 16-24 cores is reasonable. When using the XM backend, you do not need to worry about setting MEM_TOTAL; the algorithm is smart enough to figure out how to proceed.

Here is an example from my timings benchmark set: the HBDI pyridinium chromophore (molecule from J. Phys. Chem. Lett. 8, 1958 (2018)), Cs symmetry, 18 heavy atoms, 6-31+G* basis (383 basis functions). The 2PA calculation for two A' EOM-CC states takes 10 hours on 16 cores.

Here is the $rem section of the input (this run did not compute NTOs, but they should not add significant overhead):
$rem
BASIS                    = 6-31+G*
METHOD                   = eom-ccsd
EE_SINGLETS              = [2,0]  ! compute two EOM-EE singlet A' excited states
CC_EOM_PROP              = true
CC_TRANS_PROP            = true
CC_BACKEND               = xm     ! libxm tensor backend
CC_EOM_2PA               = 1      ! compute 2PA cross sections
CC_DIIS_SIZE             = 15
EOM_DAVIDSON_MAXVECTORS  = 120
CC_SP_T_CONV             = 4
CC_SP_E_CONV             = 6
CC_ERASE_DP_INTEGRALS    = 1      ! set to 1 to save disk space
CC_SINGLE_PREC           = 1      ! single-precision CCSD
EOM_SINGLE_PREC          = 1      ! single-precision EOM
CC_SP_DM                 = 1
CC_EOM_2PA_SINGLE_PREC   = 1
CC_EOM_2PA_ECONV         = 5
CC_EOM_2PA_XCONV         = 5
$end

120 GB will limit the EOM-CCSD calculations to roughly 300-400 basis functions if you want a quick calculation that does not use the disk much. For such EOM-CC calculations, the number of cores is not that important, but the memory is. 2PA-NTO calculations need 12 response states per 2PA transition, which requires a lot of memory.
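To see why memory rather than core count is the limiting factor, here is a rough back-of-envelope sketch (my own illustration, not Q-Chem's actual storage model, which depends on the backend and on which intermediates are kept): each four-index quantity in CCSD/EOM-CCSD, such as the T2 amplitudes or a Davidson vector, scales as $o^2 v^2$ in double precision.

```python
# Back-of-envelope estimate of EOM-CCSD memory demands (a rough sketch;
# real storage depends on the backend and the intermediates kept in core).
def ccsd_amplitude_gb(n_occ, n_virt, bytes_per_float=8):
    """Memory for ONE o^2 v^2 four-index tensor (e.g. T2 amplitudes), in GB."""
    return n_occ**2 * n_virt**2 * bytes_per_float / 1e9

# Hypothetical system near the quoted limit: ~50 correlated occupied
# orbitals and ~350 virtuals, i.e. roughly 400 basis functions.
one_tensor = ccsd_amplitude_gb(50, 350)
print(f"one o^2 v^2 tensor: {one_tensor:.1f} GB")
# A run keeps many such tensors (amplitudes, residuals, Davidson vectors),
# and a dozen response states per 2PA transition multiplies this further.
print(f"12 response vectors alone: {12 * one_tensor:.0f} GB")
```

Single precision halves each tensor, which is why it is recommended above for machines in this memory range.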

Also, the largest calculation reported in our 2PA-NTOs paper is not p-nitroaniline but stilbene. As Prof. Krylov said, we have previously done 2PA calculations on the HBDI and PYPb chromophores. 55 GB would in practice be insufficient for such calculations with a decent basis set, but you can try with 120 GB, single precision, and the XM backend, as Prof. Krylov suggested.

I do not think anyone is implementing 2PA NTOs for other methods, though.


Dear Prof. Krylov and Kaushik, thank you for your answers!