====== PAO-ML ======
===== Step 2: Calculate reference data in primary basis =====
Choose a primary basis set, e.g. ''DZVP-MOLOPT-GTH'', and calculate the reference data for all training structures in this basis.
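As a minimal sketch of such a reference calculation (the project name, cell, coordinate file, functional, and the ''DZVP-MOLOPT-GTH'' / ''GTH-PBE-q1'' choices are illustrative placeholders):

<code>
&GLOBAL
  PROJECT train_0001        ! one single-point calculation per training structure
  RUN_TYPE ENERGY
&END GLOBAL
&FORCE_EVAL
  METHOD Quickstep
  &DFT
    BASIS_SET_FILE_NAME BASIS_MOLOPT
    POTENTIAL_FILE_NAME GTH_POTENTIALS
    &XC
      &XC_FUNCTIONAL PBE
      &END XC_FUNCTIONAL
    &END XC
  &END DFT
  &SUBSYS
    &CELL
      ABC 12.0 12.0 12.0
    &END CELL
    &TOPOLOGY
      COORD_FILE_NAME train_0001.xyz   ! a training structure
      COORD_FILE_FORMAT XYZ
    &END TOPOLOGY
    &KIND H
      BASIS_SET DZVP-MOLOPT-GTH        ! the chosen primary basis
      POTENTIAL GTH-PBE-q1
    &END KIND
  &END SUBSYS
&END FORCE_EVAL
</code>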
===== Step 3: Optimize PAO basis for training structures =====
Choose a [[inp>PAO_BASIS_SIZE]] for each atomic kind. Most of the PAO settings are in the [[inp>PAO]] section.
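For illustration, the PAO basis size is set per atomic kind in the &SUBSYS section (the kinds, sizes, and potential names below are placeholders):

<code>
&SUBSYS
  &KIND O
    BASIS_SET DZVP-MOLOPT-GTH   ! primary basis from Step 2
    POTENTIAL GTH-PBE-q6
    PAO_BASIS_SIZE 4            ! number of PAO basis functions for this kind
  &END KIND
  &KIND H
    BASIS_SET DZVP-MOLOPT-GTH
    POTENTIAL GTH-PBE-q1
    PAO_BASIS_SIZE 2            ! smaller PAO basis (placeholder value)
  &END KIND
&END SUBSYS
</code>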
==== Tuning the PAO Optimization ====
Finding the optimal PAO basis poses an intricate minimization problem, because the rotation matrix U and the Kohn-Sham matrix H have to be optimized in a self-consistent manner. In order to speed up the optimization, the following settings can be tuned:
* The frequency with which H is recalculated is set by a keyword in the [[inp>PAO]] section (see the sketch below).
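A minimal sketch of these settings, assuming the &PAO subsection of &LS_SCF and the keyword names ''EPS_PAO'' and ''MAX_PAO'' as found in recent CP2K versions:

<code>
&FORCE_EVAL
  &DFT
    &QS
      LS_SCF                 ! PAO is part of the linear-scaling SCF code
    &END QS
    &LS_SCF
      EPS_SCF 1.0E-7
      &PAO
        EPS_PAO 1.0E-6       ! convergence threshold of the PAO optimization
        MAX_PAO 5000         ! upper bound on the number of PAO optimization steps
      &END PAO
    &END LS_SCF
  &END DFT
&END FORCE_EVAL
</code>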
For the simulation of larger systems, the PAO-ML scheme infers new PAO basis sets from the training data. For this, two heuristics are employed: a [[https://en.wikipedia.org/wiki/Gaussian_process|Gaussian process]] for the regression and a descriptor that encodes the local atomic environment.
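As a sketch of the inference step, assuming a standard Gaussian-process regressor (the symbols below are illustrative): given descriptors $x_i$ of the training environments, the corresponding optimized PAO parameters $y_i$, a kernel $k$, and a noise level $\sigma$, the prediction for a new environment $x_*$ is

$$ y_* = \mathbf{k}_*^\top \, (K + \sigma^2 I)^{-1} \, \mathbf{y}, \qquad K_{ij} = k(x_i, x_j), \qquad (\mathbf{k}_*)_i = k(x_*, x_i). $$

The kernel width and the noise level $\sigma$ are typical examples of the hyper-parameters discussed next.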
In order to obtain good results from the learning machinery, a small number of so-called [[https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)|hyper-parameters]] have to be tuned.
For the optimization of the hyper-parameters no gradient exists; hence, one has to use a derivative-free method like [[https://en.wikipedia.org/wiki/Powell%27s_method|Powell's method]].
Unfortunately, not every PAO optimization yields results that the learning machinery can fit well. The most critical parameters for learnability are the regularization settings in the [[inp>PAO]] section.