In the YOLOv3 paper, the author reports results for models trained on the COCO dataset: with an input size of 416x416 the model reaches 31% mAP, and with an input size of 608x608 it reaches 33% mAP (see the results table in the paper).
We ran an experiment to check whether training the model ourselves can reach the results given in the paper, i.e. whether there is anything wrong with our usual training procedure.
The experiment used 3 TITAN X GPUs, with batch set to 128 and subdivisions set to 32; max_batches was left at the default 500200 and everything else was unchanged. mAP is computed with the cocoAPI (presumably what the author used as well). The detector valid command writes the detection results directly into the JSON format that the cocoAPI expects; thresh is set to 0.001.
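For reference, here is a minimal sketch of scoring that JSON with the cocoAPI (pycocotools). The file names coco_results.json and instances_val2014.json are assumptions and depend on your coco.data settings and results directory:

# Sketch: score darknet's COCO-format detections with pycocotools.
# The file paths below are assumptions; adjust them to your own layout.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

ann_file = "annotations/instances_val2014.json"   # ground-truth annotations (assumed path)
det_file = "results/coco_results.json"            # output of ./darknet detector valid (assumed name)

coco_gt = COCO(ann_file)                # load ground truth
coco_dt = coco_gt.loadRes(det_file)     # load detections in COCO result format

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()                   # prints AP, AP50, AP75, APS, APM, APL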
1. Initial training
./darknet detector train cfg/coco.data cfg/yolov3.cfg darknet53.conv.74 -gpus 0,1,2 | tee -a yolov3-coco-log.txt
PS: saving the log this way slows down training.
We then extract the loss values from the log, parse them, and plot the loss curve.
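The parsing step can be done with a short script along the following lines (a sketch, assuming darknet's usual per-iteration log line of the form "1000: 3.21, 3.45 avg, 0.001 rate, ..." and the log file written by tee above):

# Sketch: pull the running average loss out of a darknet training log and plot it.
# Assumes the usual per-iteration line "ITER: loss, avg_loss avg, lr rate, ...".
import re
import matplotlib.pyplot as plt

iters, losses = [], []
pattern = re.compile(r"^\s*(\d+):\s*([\d.]+),\s*([\d.]+)\s+avg")

with open("yolov3-coco-log.txt") as f:
    for line in f:
        m = pattern.match(line)
        if m:
            iters.append(int(m.group(1)))
            losses.append(float(m.group(3)))   # use the running average loss

plt.plot(iters, losses)
plt.xlabel("iteration")
plt.ylabel("avg loss")
plt.savefig("loss_curve.png")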
From the curve we can see that toward the end of training the loss has basically stopped decreasing, yet the model's mAP is only around 25, still far below the 31% reported in the paper.
2. Training without saving the log, testing mAP during training
The test results are as follows:
Steps     AP     AP50   AP75   APS    APM    APL
105700    26     50.1   24.8   9.3    27.3   40.5
120000    26.7   50     26.6   9.9    28     41.3
123000    26.4   50.3   25.4   9.3    28     40.5
126000    26.9   50.7   26.3   10     28.8   41.3
129000    26.1   49.3   25.2   9.6    28     39.7
132000    26.7   50.5   25.8   10.6   28.7   39.9
135000    26.6   50.8   25.4   9.8    28     41.3
136600    25.8   48.5   24.9   8.9    27.8   40.1
139300    26.8   50.7   25.9   9.9    28.7   41.8
139500    26.6   50.2   25.8   10     27.5   42.3
141000    27.2   50.9   26.8   10.7   28.6   42
144500    26.2   51.5   24.2   11     28.5   39.1
148500    27.1   51.1   26.3   10.2   29.1   41.4
152500    26.7   49.8   26     10.4   27.2   42.5
156500    26.9   50.8   25.8   10.4   27.8   42.2
160500    26.7   49.9   26.3   9.8    28.1   42
164500    27.3   51.3   26.9   10.3   29.7   41.2
168500    27.7   51.4   27.7   10.4   30.1   41.5
172500    27.7   51.3   27.5   10.7   29.7   42.2
176500    27.3   51.7   26.6   10.5   30     41
180500    27.5   51.6   26.7   10.3   29.5   42.4
184500    27.9   51.2   27.5   9.9    30.2   42.9
188500    26.9   50.5   26.2   10     28.8   40.9
192500    28     52.2   27.5   11     29.4   42.1
196500    26.9   50.2   26.2   10.5   29.4   40.5
200500    27.1   50.7   27     9.9    29.8   41.1
204500    27.4   51.8   26.7   10.4   29.4   41.7
208500    27.1   50.5   26.2   10.7   28.9   41.8
212500    26.9   51.5   26     11     28.8   40.8
216500    28     51.5   28.1   10.9   30.1   41.7
220500    27.3   50.2   27     10.2   29.9   41.7
224500    27.6   51.8   27.2   11     30.1   41.9
228500    28.4   52.2   28.1   10.7   31     43.1
232500    27.8   51.4   27.6   10.8   30.4   41.8
236500    28.3   52.1   28.2   11.2   30.8   42.6
240500    28.1   51.1   28.1   11     30.8   42.3
244500    27     50.2   26.7   10     29.2   41.4
248500    27.9   52.2   27.3   11.4   29.3   42.4
252500    27.4   52.1   26.3   11.3   30     41.3
256500    27.9   52.1   27.6   9.8    29.1   44.8
262500    28.2   51.5   28.4   11.1   30.9   43.1
266500    28.3   52.2   27.8   11.2   30.2   43.6
270500    28.5   52.1   28.5   11.6   30.5   43.2
274500    27.9   51.4   27.6   11.1   29.3   43.4
278500    28.7   51.9   29     10.9   31.4   43.4
282500    27.7   51.2   27.2   11     30.2   42.1
286500    29     52.4   29.3   11.4   31     44.3
290500    28.1   51.5   27.6   11.2   30.7   43.2
294500    28.6   52.6   28.7   12.2   30.9   42.4
298500    27.5   50.8   27.1   10.6   30.2   41.8
302500    28.6   52     28.7   12.2   30.8   42.8
306500    28.2   51     28.6   11.2   30.7   42.6
310500    28.1   52.5   27.8   11.3   30.5   42.7
314500    28.6   52.5   28.3   11.9   30.7   44.1
317500    28.1   51.4   28.2   11.4   30.5   42.5
323500    28.7   51.9   29.1   11.8   31.3   43.6
326500    27.3   50.5   27.5   10.6   30.2   41.3
329500    28.9   52     29.3   11.3   30.6   45.1
332500    29     52.8   29.2   11.1   31.2   44.6
335500    28.6   52     29     10.9   31.8   43.6
338500    28.9   53.1   28.9   12.2   30.8   44.1
341500    28.9   52.2   29     11     30.9   44.3
344500    29.6   53     30     11.9   32.1   44.1
347000    28.6   52.8   28.2   10.8   31.5   43.9
351000    28.6   52     28.9   11.2   31.3   43.4
355000    28.7   53.2   28.1   11.6   30.4   43.4
359000    29.3   53.3   29.3   13     30.8   44
363000    28.1   51.3   27.8   11.1   30.2   42.5
367000    29.5   52.6   29.9   11.7   31.7   44.1
369500    28.7   52.8   28.8   11.3   31.7   43.4
372500    28     50.6   28.5   9.8    31.1   43.5
375500    28.8   52.2   29.3   12.3   31.2   43.2
378500    29.2   53.1   29.4   11.9   31     44.4
381500    28.1   50.8   28.3   10.9   30.8   42.2
384500    28.8   52.8   28.9   11.4   31.7   43.1
387500    28.1   52.8   28.2   11.5   31.6   41.1
390500    29.4   52.6   30     11     32.1   45
393500    29.1   52.6   29.3   11.5   31.5   44.6
396500    27.8   50.8   27.9   11     29.8   42
399500    28.2   51.5   28.2   10.7   31.3   42.4
402500    32.4   55.7   33.9   13.6   35.2   48.4

From these results we can see that the loss is in fact still decreasing, just by a small amount, winding its way forward and slowly grinding lower. We can also see that by around iteration 320000 the mAP has essentially stopped improving, which is why yolov3.cfg multiplies the learning rate by 0.1 at iteration 400000; once that drop happens, the mAP jumps straight to 32.4%.
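To make that schedule explicit: darknet's "steps" learning-rate policy multiplies the rate by the corresponding scale each time training passes one of the configured step boundaries. The sketch below mirrors that rule; the base rate 0.001 and the second step at 450000 are assumptions taken from the stock yolov3.cfg, only the x0.1 drop at 400000 is discussed above:

# Sketch of darknet's "steps" LR policy: multiply the base rate by the matching
# scale for every step boundary that training has already passed.
# steps/scales/base_lr below follow the stock yolov3.cfg (assumption).
def current_lr(iteration, base_lr=0.001, steps=(400000, 450000), scales=(0.1, 0.1)):
    lr = base_lr
    for step, scale in zip(steps, scales):
        if iteration >= step:
            lr *= scale
    return lr

print(current_lr(399500))  # 0.001  -> mAP still on the ~28 plateau
print(current_lr(402500))  # 0.0001 -> mAP jumps to 32.4 in the table above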
Below is a comparison of the results:
                 Steps    AP     AP50   AP75   APS    APM    APL
Paper 416x416    /        31     /      /      /      /      /
Paper 608x608    /        33     57.9   34.4   18.3   35.4   41.9
Author           /        31.6   56.3   31.8   14.3   34.3   46.6
Ours 416x416     402500   32.4   55.7   33.9   13.6   35.2   48.4
Ours 608x608     402500   34.3   58.1   36.3   18.7   37.7   46.0





Article link: http://yoloes.immuno-online.com/view-751099.html