170719 Inconsistent Results Across Repeated Keras Runs


Github-2743
Github-2479
Summary of why the same code produces different results under different Keras versions
"What to do when deep learning gives different results every run?"
Setting a seed alone works on CPU, but not on GPU
Fixed random seed + shuffle=False (assumes the non-determinism comes from the random seed used when shuffling/sampling the training data)
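The shuffle point above can be sketched with NumPy alone (shuffled_indices is a hypothetical helper, not a Keras API): with a fixed seed, the per-epoch sample order is identical across runs, while shuffle=False sidesteps the ordering issue entirely by keeping the natural order.

```python
import numpy as np

def shuffled_indices(n, seed):
    # Hypothetical helper mimicking seeded epoch shuffling of n training samples.
    rng = np.random.RandomState(seed)
    idx = np.arange(n)
    rng.shuffle(idx)
    return idx

# Same seed => same training-sample order on every run.
run1 = shuffled_indices(10, seed=42)
run2 = shuffled_indices(10, seed=42)
assert (run1 == run2).all()
```

In Keras this corresponds to either seeding NumPy before model.fit(...) or passing shuffle=False to model.fit itself.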

Setting both the NumPy and TensorFlow random seeds

See "What to do when deep learning gives different results every run?"

from numpy.random import seed
seed(1)
from tensorflow import set_random_seed  # TF 1.x API; renamed to tf.random.set_seed in TF 2.x
set_random_seed(2)
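A fuller sketch of the same idea seeds every RNG the training pipeline touches, not just NumPy and TensorFlow; seeding only one of them is a common reason results still differ. The TensorFlow part is guarded because the seeding API moved between TF 1.x and 2.x (this is a sketch, not the article's exact code):

```python
import os
import random
import numpy as np

# Seed every random source the pipeline can touch.
os.environ["PYTHONHASHSEED"] = "0"  # Python hash randomization
random.seed(1)                       # Python's own RNG
np.random.seed(1)                    # NumPy (weight init, shuffling)

try:
    import tensorflow as tf
    # tf.set_random_seed (TF 1.x) became tf.random.set_seed (TF 2.x).
    if hasattr(tf.random, "set_seed"):
        tf.random.set_seed(2)
    else:
        tf.set_random_seed(2)
except ImportError:
    pass  # sketch still shows the NumPy/Python side without TensorFlow

# With identical seeds, two fresh generators produce identical draws.
a = np.random.RandomState(1).randn(3)
b = np.random.RandomState(1).randn(3)
```

Note that even with all seeds fixed, GPU runs may still diverge because of non-deterministic cuDNN kernels, as the backend-configuration section below discusses.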

The universal restart trick (can't find the cause? try restarting)

Thanks for the suggestion. I have set the seed before importing the Keras libraries. Here, just to show that I have set the seed, I have edited it that way. After a lot of prolonged analysis I found that, to get consistent results, we need to shut down the ipynb file, restart it, and run the code again. If I just interrupt and rerun the code, it gives me inconsistent results. (However, I expect the results to be consistent even if I forcefully interrupt and rerun the code. Please let me know your comments.)

Configuring the backend
cuDNN’s backward pass is by default non-deterministic. See http://deeplearning.net/software/theano/library/sandbox/cuda/dnn.html

The CUDNN documentation states that, for the two following operations, reproducibility is not guaranteed with the default implementation: cudnnConvolutionBackwardFilter and cudnnConvolutionBackwardData. Those correspond to the gradient w.r.t. the weights and the gradient w.r.t. the input of the convolution. They are also sometimes used in the forward pass, when they give a speed-up. The Theano flag dnn.conv.algo_bwd can be used to force a slower but deterministic convolution implementation.
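Forcing the deterministic (slower) convolution algorithms can look roughly like this; train.py stands in for your own training script, and the exact Theano flag name depends on the Theano version (newer versions split dnn.conv.algo_bwd into separate filter/data flags):

```shell
# Theano backend: force deterministic convolution gradients (slower).
THEANO_FLAGS="dnn.conv.algo_bwd_filter=deterministic,dnn.conv.algo_bwd_data=deterministic" python train.py

# TensorFlow backend (roughly TF >= 2.1): request deterministic cuDNN kernels.
TF_DETERMINISTIC_OPS=1 python train.py
```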

If it really can't be fixed: consolation

The model weights are initialised randomly according to the initialization type. In general stochastic optimisation is not known to yield the exact same result each time. That’s why people like to use ensembles of models to give more accurate predictions.
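The ensemble idea mentioned above can be sketched in a few lines of NumPy (ensemble_predict is a hypothetical helper): train several models with different seeds, then average their predicted probabilities so that per-seed noise largely cancels out.

```python
import numpy as np

def ensemble_predict(predictions):
    # predictions: list of (n_samples, n_classes) probability arrays,
    # one per independently trained model; return their element-wise mean.
    return np.mean(predictions, axis=0)

# Three "models" that agree on the ranking but differ in the details.
p1 = np.array([[0.7, 0.3], [0.4, 0.6]])
p2 = np.array([[0.8, 0.2], [0.3, 0.7]])
p3 = np.array([[0.6, 0.4], [0.5, 0.5]])

avg = ensemble_predict([p1, p2, p3])
# avg[0] → [0.7, 0.3]; the averaged estimate is more stable than any single run.
```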
