Accelerating the Construction of Neural Network Potential Energy Surfaces: A Fast Hybrid Training Algorithm
Yao-long Zhang,Xue-yao Zhou,Bin Jiang*
Author Name    Affiliation    E-mail
Yao-long Zhang Department of Chemical Physics, University of Science and Technology of China, Hefei 230026, China  
Xue-yao Zhou Department of Chemical Physics, University of Science and Technology of China, Hefei 230026, China  
Bin Jiang* Department of Chemical Physics, University of Science and Technology of China, Hefei 230026, China bjiangch@ustc.edu.cn 
Abstract:
Machine learning approaches have been promising in constructing high-dimensional potential energy surfaces (PESs) for molecules and materials. Neural networks (NNs) are among the most popular of these tools because of their simplicity and efficiency. The training algorithm for NNs becomes essential to achieve a fast and accurate fit with numerous data. The Levenberg-Marquardt (LM) algorithm has been recognized as one of the fastest and most robust algorithms for training medium-sized NNs and has been widely applied in recent NN-based high-quality PESs. However, when the number of ab initio data points becomes large, the efficiency of LM is limited, making the training time-consuming. The extreme learning machine (ELM) is a recently proposed algorithm that determines the weights and biases of a single-hidden-layer NN by a linear solution and is thus extremely fast. It does not, however, produce sufficiently small fitting errors because of its random nature. Taking advantage of both algorithms, we report a generalized hybrid algorithm for training multilayer NNs. Tests on the H+H2 and CH4+Ni(111) systems demonstrate the much higher efficiency of this hybrid algorithm (ELM-LM) over the original LM. We expect that ELM-LM will find widespread application in building high-dimensional NN-based PESs.
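To illustrate the ELM step described above (random hidden-layer weights and biases, with only the output weights obtained by a linear least-squares solution), the following minimal Python/NumPy sketch fits a single-hidden-layer network. It is an assumption-laden illustration: the function names (elm_fit, elm_predict), activation, and parameter choices are hypothetical and not taken from the paper, and the hybrid ELM-LM scheme itself is not reproduced here.

import numpy as np

def elm_fit(X, y, n_hidden=50, rng=None):
    # Illustrative ELM fit (sketch only, not the paper's implementation):
    # hidden weights/biases are drawn at random; only the output layer
    # is solved, by linear least squares.
    rng = np.random.default_rng(rng)
    n_in = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(n_in, n_hidden))  # random input-to-hidden weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)          # random hidden biases
    H = np.tanh(X @ W + b)                             # hidden-layer outputs
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)       # linear solve for output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    # Evaluate the fitted single-hidden-layer network.
    return np.tanh(X @ W + b) @ beta

# Toy usage on synthetic 1D data (for illustration only):
X = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y, n_hidden=30, rng=0)
y_fit = elm_predict(X, W, b, beta)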
Key words: Potential energy surface, Reaction dynamics, Neural networks
Fund Project: This work was supported by the National Key R&D Program of China (No.2017YFA0303500), the National Natural Science Foundation of China (No.91645202, No.21722306, and No.21573203), and the Fundamental Research Funds for the Central Universities of China (No.WK2060190082 and No.WK2340000078). Calculations were performed at the Supercomputing Center of USTC.
DOI:10.1063/1674-0068/30/cjcp1711212