Abstract: Because a fixed deep-learning learning rate cannot fully fit the model's running state, convergence is slow and the error is large; to address this, this paper proposes an adaptive learning-rate strategy, AdaDouL. Starting from the learning rate of the previous round, AdaDouL uses the current gradient to adjust the learning rate adaptively and, according to the sign of the loss-function increment, applies one of two learning rates with different descent speeds. Taking the loss between the model output and the label as the evaluation metric, simulations are carried out with a convolutional neural network on the VOT2015 dataset. The results show that the deep model using the proposed strategy converges faster than models using AdaGrad and AdaDec, and the convergence error is reduced. Tests show that the center-error precision increases by 4.5% and the detection rate increases by 2.1%.
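To make the described mechanism concrete, the following is a minimal Python sketch of an AdaDouL-style update as the abstract outlines it: the new learning rate is derived from the previous round's rate and the current gradient, and the sign of the loss increment selects one of two decay speeds. The exact formula is not given in the abstract, so the function name `adadoul_lr`, the decay parameters, and the gradient-norm scaling are illustrative assumptions, not the authors' definitive method.

```python
import numpy as np

def adadoul_lr(prev_lr, grad, loss_delta,
               fast_decay=0.9, slow_decay=0.99, eps=1e-8):
    """Hypothetical AdaDouL-style learning-rate update (illustrative only).

    prev_lr    : learning rate from the previous round
    grad       : current gradient (NumPy array)
    loss_delta : loss(t) - loss(t-1); its sign chooses the descent speed
    """
    # Two descent rates: decay faster when the loss increased, slower when it decreased
    # (assumed interpretation of "2 learning rates of different descent rates").
    decay = fast_decay if loss_delta > 0 else slow_decay
    # Scale the previous learning rate using the current gradient magnitude
    # (assumed form of the gradient-based adaptation).
    grad_norm = np.sqrt(np.sum(grad ** 2)) + eps
    return decay * prev_lr / grad_norm

# Example usage with a toy gradient:
lr = 0.01
grad = np.array([0.3, -0.2, 0.5])
lr = adadoul_lr(lr, grad, loss_delta=-0.05)  # loss decreased -> slower decay
```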