CifarI64bits.log · 305 lines (305 loc) · 16.6 KB
2022-03-07 21:45:53,912 config: Namespace(K=256, M=8, T=0.4, alpha=10, batch_size=128, checkpoint_root='./checkpoints/CifarI64bits', dataset='CIFAR10', device='cuda:0', download_cifar10=False, epoch_num=50, eval_interval=1, feat_dim=128, final_lr=1e-05, hp_beta=0.001, hp_gamma=0.5, hp_lambda=0.05, is_asym_dist=True, lr=0.01, lr_scaling=0.001, mode='debias', momentum=0.9, monitor_counter=10, notes='CifarI64bits', num_workers=20, optimizer='SGD', pos_prior=0.1, protocal='I', queue_begin_epoch=3, seed=2021, start_lr=1e-05, topK=1000, trainable_layer_num=2, use_scheduler=True, use_writer=True, vgg_model_path='vgg16.pth', warmup_epoch_num=1).
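The `Namespace(...)` dump above is the standard string form of an `argparse.Namespace`. The training script itself is not shown in this log, so the following is only a hypothetical sketch of how a parser producing a few of the logged fields (`dataset`, `batch_size`, `lr`, `epoch_num`, `seed`) might look; the flag names and defaults are taken from the logged values, not from the real source.

```python
# Hypothetical sketch: an argparse parser whose defaults mirror a few of
# the values in the logged Namespace. The real script's parser is unknown.
import argparse

def build_parser():
    p = argparse.ArgumentParser(description="CIFAR10 hashing config (sketch)")
    p.add_argument('--dataset', default='CIFAR10')
    p.add_argument('--batch_size', type=int, default=128)
    p.add_argument('--epoch_num', type=int, default=50)
    p.add_argument('--lr', type=float, default=0.01)
    p.add_argument('--feat_dim', type=int, default=128)
    p.add_argument('--queue_begin_epoch', type=int, default=3)
    p.add_argument('--seed', type=int, default=2021)
    return p

# parse_args([]) yields the defaults, i.e. the logged configuration.
config = build_parser().parse_args([])
print(config)
```

Logging `str(config)` at startup, as this run does, is a cheap way to make every log file self-describing and reproducible.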
2022-03-07 21:45:53,912 prepare CIFAR10 dataset.
2022-03-07 21:45:55,821 setup model.
2022-03-07 21:46:03,927 define loss function.
2022-03-07 21:46:03,928 setup SGD optimizer.
2022-03-07 21:46:03,929 prepare monitor and evaluator.
2022-03-07 21:46:03,930 begin to train model.
2022-03-07 21:46:03,931 register queue.
2022-03-07 21:49:21,999 epoch 0: avg loss=3.893124, avg quantization error=0.016370.
2022-03-07 21:49:22,001 begin to evaluate model.
2022-03-07 21:50:39,132 compute mAP.
2022-03-07 21:50:56,960 val mAP=0.573740.
2022-03-07 21:50:56,967 save the best model, db_codes and db_targets.
2022-03-07 21:50:59,708 finish saving.
2022-03-07 21:54:15,384 epoch 1: avg loss=3.115020, avg quantization error=0.013594.
2022-03-07 21:54:15,385 begin to evaluate model.
2022-03-07 21:55:32,243 compute mAP.
2022-03-07 21:55:50,147 val mAP=0.585341.
2022-03-07 21:55:50,147 save the best model, db_codes and db_targets.
2022-03-07 21:55:52,746 finish saving.
2022-03-07 21:59:10,181 epoch 2: avg loss=2.926660, avg quantization error=0.013336.
2022-03-07 21:59:10,182 begin to evaluate model.
2022-03-07 22:00:27,091 compute mAP.
2022-03-07 22:00:45,403 val mAP=0.579422.
2022-03-07 22:00:45,404 the monitor loses its patience to 9!.
2022-03-07 22:04:01,954 epoch 3: avg loss=5.257419, avg quantization error=0.016044.
2022-03-07 22:04:01,958 begin to evaluate model.
2022-03-07 22:05:19,263 compute mAP.
2022-03-07 22:05:36,905 val mAP=0.648618.
2022-03-07 22:05:36,905 save the best model, db_codes and db_targets.
2022-03-07 22:05:39,623 finish saving.
2022-03-07 22:08:55,645 epoch 4: avg loss=5.128205, avg quantization error=0.016244.
2022-03-07 22:08:55,646 begin to evaluate model.
2022-03-07 22:10:12,683 compute mAP.
2022-03-07 22:10:30,174 val mAP=0.655981.
2022-03-07 22:10:30,175 save the best model, db_codes and db_targets.
2022-03-07 22:10:33,086 finish saving.
2022-03-07 22:13:46,273 epoch 5: avg loss=5.084722, avg quantization error=0.016212.
2022-03-07 22:13:46,273 begin to evaluate model.
2022-03-07 22:15:02,662 compute mAP.
2022-03-07 22:15:20,390 val mAP=0.664048.
2022-03-07 22:15:20,390 save the best model, db_codes and db_targets.
2022-03-07 22:15:23,000 finish saving.
2022-03-07 22:18:38,277 epoch 6: avg loss=5.053029, avg quantization error=0.016138.
2022-03-07 22:18:38,277 begin to evaluate model.
2022-03-07 22:19:55,344 compute mAP.
2022-03-07 22:20:13,122 val mAP=0.665635.
2022-03-07 22:20:13,122 save the best model, db_codes and db_targets.
2022-03-07 22:20:15,740 finish saving.
2022-03-07 22:23:30,749 epoch 7: avg loss=5.033644, avg quantization error=0.015996.
2022-03-07 22:23:30,750 begin to evaluate model.
2022-03-07 22:24:47,913 compute mAP.
2022-03-07 22:25:05,428 val mAP=0.663672.
2022-03-07 22:25:05,429 the monitor loses its patience to 9!.
2022-03-07 22:28:21,646 epoch 8: avg loss=5.018476, avg quantization error=0.015863.
2022-03-07 22:28:21,647 begin to evaluate model.
2022-03-07 22:29:38,050 compute mAP.
2022-03-07 22:29:55,530 val mAP=0.666266.
2022-03-07 22:29:55,532 save the best model, db_codes and db_targets.
2022-03-07 22:29:58,163 finish saving.
2022-03-07 22:33:12,876 epoch 9: avg loss=5.004537, avg quantization error=0.015730.
2022-03-07 22:33:12,877 begin to evaluate model.
2022-03-07 22:34:30,008 compute mAP.
2022-03-07 22:34:47,985 val mAP=0.668269.
2022-03-07 22:34:47,986 save the best model, db_codes and db_targets.
2022-03-07 22:34:50,852 finish saving.
2022-03-07 22:38:04,937 epoch 10: avg loss=4.994173, avg quantization error=0.015623.
2022-03-07 22:38:04,938 begin to evaluate model.
2022-03-07 22:39:21,097 compute mAP.
2022-03-07 22:39:38,536 val mAP=0.669659.
2022-03-07 22:39:38,537 save the best model, db_codes and db_targets.
2022-03-07 22:39:41,281 finish saving.
2022-03-07 22:42:57,194 epoch 11: avg loss=4.982881, avg quantization error=0.015519.
2022-03-07 22:42:57,195 begin to evaluate model.
2022-03-07 22:44:14,454 compute mAP.
2022-03-07 22:44:32,295 val mAP=0.671849.
2022-03-07 22:44:32,296 save the best model, db_codes and db_targets.
2022-03-07 22:44:34,857 finish saving.
2022-03-07 22:47:51,244 epoch 12: avg loss=4.970806, avg quantization error=0.015470.
2022-03-07 22:47:51,245 begin to evaluate model.
2022-03-07 22:49:08,202 compute mAP.
2022-03-07 22:49:25,825 val mAP=0.673661.
2022-03-07 22:49:25,825 save the best model, db_codes and db_targets.
2022-03-07 22:49:28,458 finish saving.
2022-03-07 22:52:41,574 epoch 13: avg loss=4.957412, avg quantization error=0.015390.
2022-03-07 22:52:41,575 begin to evaluate model.
2022-03-07 22:53:58,103 compute mAP.
2022-03-07 22:54:15,820 val mAP=0.675373.
2022-03-07 22:54:15,820 save the best model, db_codes and db_targets.
2022-03-07 22:54:18,438 finish saving.
2022-03-07 22:57:34,429 epoch 14: avg loss=4.949562, avg quantization error=0.015295.
2022-03-07 22:57:34,436 begin to evaluate model.
2022-03-07 22:58:50,928 compute mAP.
2022-03-07 22:59:08,840 val mAP=0.676610.
2022-03-07 22:59:08,841 save the best model, db_codes and db_targets.
2022-03-07 22:59:11,507 finish saving.
2022-03-07 23:02:25,698 epoch 15: avg loss=4.943632, avg quantization error=0.015230.
2022-03-07 23:02:25,699 begin to evaluate model.
2022-03-07 23:03:42,527 compute mAP.
2022-03-07 23:04:00,196 val mAP=0.677217.
2022-03-07 23:04:00,197 save the best model, db_codes and db_targets.
2022-03-07 23:04:02,951 finish saving.
2022-03-07 23:07:20,336 epoch 16: avg loss=4.932669, avg quantization error=0.015235.
2022-03-07 23:07:20,337 begin to evaluate model.
2022-03-07 23:08:37,738 compute mAP.
2022-03-07 23:08:55,652 val mAP=0.678915.
2022-03-07 23:08:55,652 save the best model, db_codes and db_targets.
2022-03-07 23:08:58,542 finish saving.
2022-03-07 23:12:13,158 epoch 17: avg loss=4.924254, avg quantization error=0.015216.
2022-03-07 23:12:13,159 begin to evaluate model.
2022-03-07 23:13:30,083 compute mAP.
2022-03-07 23:13:47,614 val mAP=0.680242.
2022-03-07 23:13:47,615 save the best model, db_codes and db_targets.
2022-03-07 23:13:50,143 finish saving.
2022-03-07 23:17:07,360 epoch 18: avg loss=4.915601, avg quantization error=0.015182.
2022-03-07 23:17:07,361 begin to evaluate model.
2022-03-07 23:18:24,634 compute mAP.
2022-03-07 23:18:42,384 val mAP=0.681308.
2022-03-07 23:18:42,384 save the best model, db_codes and db_targets.
2022-03-07 23:18:45,133 finish saving.
2022-03-07 23:22:00,358 epoch 19: avg loss=4.910041, avg quantization error=0.015143.
2022-03-07 23:22:00,359 begin to evaluate model.
2022-03-07 23:23:16,850 compute mAP.
2022-03-07 23:23:34,924 val mAP=0.682950.
2022-03-07 23:23:34,925 save the best model, db_codes and db_targets.
2022-03-07 23:23:37,448 finish saving.
2022-03-07 23:26:49,220 epoch 20: avg loss=4.903714, avg quantization error=0.015125.
2022-03-07 23:26:49,220 begin to evaluate model.
2022-03-07 23:28:06,202 compute mAP.
2022-03-07 23:28:23,734 val mAP=0.683567.
2022-03-07 23:28:23,734 save the best model, db_codes and db_targets.
2022-03-07 23:28:26,379 finish saving.
2022-03-07 23:31:41,698 epoch 21: avg loss=4.894183, avg quantization error=0.015113.
2022-03-07 23:31:41,698 begin to evaluate model.
2022-03-07 23:32:58,945 compute mAP.
2022-03-07 23:33:16,797 val mAP=0.685153.
2022-03-07 23:33:16,798 save the best model, db_codes and db_targets.
2022-03-07 23:33:19,580 finish saving.
2022-03-07 23:36:36,083 epoch 22: avg loss=4.887142, avg quantization error=0.015097.
2022-03-07 23:36:36,084 begin to evaluate model.
2022-03-07 23:37:52,900 compute mAP.
2022-03-07 23:38:10,751 val mAP=0.683406.
2022-03-07 23:38:10,751 the monitor loses its patience to 9!.
2022-03-07 23:41:23,259 epoch 23: avg loss=4.883076, avg quantization error=0.015072.
2022-03-07 23:41:23,260 begin to evaluate model.
2022-03-07 23:42:41,060 compute mAP.
2022-03-07 23:42:58,859 val mAP=0.686866.
2022-03-07 23:42:58,860 save the best model, db_codes and db_targets.
2022-03-07 23:43:14,420 finish saving.
2022-03-07 23:46:29,990 epoch 24: avg loss=4.876261, avg quantization error=0.015035.
2022-03-07 23:46:30,000 begin to evaluate model.
2022-03-07 23:47:46,357 compute mAP.
2022-03-07 23:48:03,953 val mAP=0.688148.
2022-03-07 23:48:03,954 save the best model, db_codes and db_targets.
2022-03-07 23:48:06,527 finish saving.
2022-03-07 23:51:20,617 epoch 25: avg loss=4.869824, avg quantization error=0.015026.
2022-03-07 23:51:20,618 begin to evaluate model.
2022-03-07 23:52:37,884 compute mAP.
2022-03-07 23:52:55,560 val mAP=0.689209.
2022-03-07 23:52:55,561 save the best model, db_codes and db_targets.
2022-03-07 23:52:58,363 finish saving.
2022-03-07 23:56:12,463 epoch 26: avg loss=4.861597, avg quantization error=0.015011.
2022-03-07 23:56:12,463 begin to evaluate model.
2022-03-07 23:57:29,371 compute mAP.
2022-03-07 23:57:47,087 val mAP=0.690143.
2022-03-07 23:57:47,087 save the best model, db_codes and db_targets.
2022-03-07 23:57:49,942 finish saving.
2022-03-08 00:01:05,375 epoch 27: avg loss=4.860347, avg quantization error=0.015008.
2022-03-08 00:01:05,375 begin to evaluate model.
2022-03-08 00:02:22,305 compute mAP.
2022-03-08 00:02:39,976 val mAP=0.691157.
2022-03-08 00:02:39,977 save the best model, db_codes and db_targets.
2022-03-08 00:02:42,657 finish saving.
2022-03-08 00:05:57,471 epoch 28: avg loss=4.854854, avg quantization error=0.015003.
2022-03-08 00:05:57,472 begin to evaluate model.
2022-03-08 00:07:15,240 compute mAP.
2022-03-08 00:07:32,740 val mAP=0.691258.
2022-03-08 00:07:32,741 save the best model, db_codes and db_targets.
2022-03-08 00:07:35,538 finish saving.
2022-03-08 00:10:49,875 epoch 29: avg loss=4.849040, avg quantization error=0.014983.
2022-03-08 00:10:49,876 begin to evaluate model.
2022-03-08 00:12:07,238 compute mAP.
2022-03-08 00:12:24,831 val mAP=0.693134.
2022-03-08 00:12:24,832 save the best model, db_codes and db_targets.
2022-03-08 00:12:27,505 finish saving.
2022-03-08 00:15:40,485 epoch 30: avg loss=4.842891, avg quantization error=0.014970.
2022-03-08 00:15:40,486 begin to evaluate model.
2022-03-08 00:16:57,998 compute mAP.
2022-03-08 00:17:15,635 val mAP=0.692636.
2022-03-08 00:17:15,635 the monitor loses its patience to 9!.
2022-03-08 00:20:29,015 epoch 31: avg loss=4.836016, avg quantization error=0.014974.
2022-03-08 00:20:29,015 begin to evaluate model.
2022-03-08 00:21:46,045 compute mAP.
2022-03-08 00:22:03,595 val mAP=0.694979.
2022-03-08 00:22:03,597 save the best model, db_codes and db_targets.
2022-03-08 00:22:06,171 finish saving.
2022-03-08 00:25:20,848 epoch 32: avg loss=4.833015, avg quantization error=0.014981.
2022-03-08 00:25:20,849 begin to evaluate model.
2022-03-08 00:26:37,501 compute mAP.
2022-03-08 00:26:55,234 val mAP=0.693591.
2022-03-08 00:26:55,235 the monitor loses its patience to 9!.
2022-03-08 00:30:07,965 epoch 33: avg loss=4.827125, avg quantization error=0.014980.
2022-03-08 00:30:07,965 begin to evaluate model.
2022-03-08 00:31:25,373 compute mAP.
2022-03-08 00:31:43,341 val mAP=0.697650.
2022-03-08 00:31:43,342 save the best model, db_codes and db_targets.
2022-03-08 00:31:45,910 finish saving.
2022-03-08 00:34:58,787 epoch 34: avg loss=4.822377, avg quantization error=0.014966.
2022-03-08 00:34:58,788 begin to evaluate model.
2022-03-08 00:36:15,379 compute mAP.
2022-03-08 00:36:33,355 val mAP=0.697765.
2022-03-08 00:36:33,356 save the best model, db_codes and db_targets.
2022-03-08 00:36:35,955 finish saving.
2022-03-08 00:39:47,975 epoch 35: avg loss=4.821703, avg quantization error=0.014980.
2022-03-08 00:39:47,975 begin to evaluate model.
2022-03-08 00:41:05,134 compute mAP.
2022-03-08 00:41:23,205 val mAP=0.698475.
2022-03-08 00:41:23,206 save the best model, db_codes and db_targets.
2022-03-08 00:41:25,855 finish saving.
2022-03-08 00:44:39,979 epoch 36: avg loss=4.815079, avg quantization error=0.014953.
2022-03-08 00:44:39,980 begin to evaluate model.
2022-03-08 00:45:57,265 compute mAP.
2022-03-08 00:46:15,489 val mAP=0.699941.
2022-03-08 00:46:15,490 save the best model, db_codes and db_targets.
2022-03-08 00:46:18,229 finish saving.
2022-03-08 00:49:33,609 epoch 37: avg loss=4.810202, avg quantization error=0.014961.
2022-03-08 00:49:33,610 begin to evaluate model.
2022-03-08 00:50:50,700 compute mAP.
2022-03-08 00:51:08,710 val mAP=0.701098.
2022-03-08 00:51:08,710 save the best model, db_codes and db_targets.
2022-03-08 00:51:11,585 finish saving.
2022-03-08 00:54:26,718 epoch 38: avg loss=4.808798, avg quantization error=0.014935.
2022-03-08 00:54:26,719 begin to evaluate model.
2022-03-08 00:55:44,148 compute mAP.
2022-03-08 00:56:01,764 val mAP=0.700531.
2022-03-08 00:56:01,765 the monitor loses its patience to 9!.
2022-03-08 00:59:18,891 epoch 39: avg loss=4.805263, avg quantization error=0.014945.
2022-03-08 00:59:18,892 begin to evaluate model.
2022-03-08 01:00:35,667 compute mAP.
2022-03-08 01:00:53,449 val mAP=0.700202.
2022-03-08 01:00:53,449 the monitor loses its patience to 8!.
2022-03-08 01:04:10,673 epoch 40: avg loss=4.802113, avg quantization error=0.014937.
2022-03-08 01:04:10,674 begin to evaluate model.
2022-03-08 01:05:27,938 compute mAP.
2022-03-08 01:05:45,916 val mAP=0.701075.
2022-03-08 01:05:45,917 the monitor loses its patience to 7!.
2022-03-08 01:08:59,409 epoch 41: avg loss=4.801940, avg quantization error=0.014921.
2022-03-08 01:08:59,410 begin to evaluate model.
2022-03-08 01:10:15,888 compute mAP.
2022-03-08 01:10:33,908 val mAP=0.701702.
2022-03-08 01:10:33,908 save the best model, db_codes and db_targets.
2022-03-08 01:10:46,355 finish saving.
2022-03-08 01:14:02,180 epoch 42: avg loss=4.796775, avg quantization error=0.014923.
2022-03-08 01:14:02,180 begin to evaluate model.
2022-03-08 01:15:19,112 compute mAP.
2022-03-08 01:15:37,032 val mAP=0.701395.
2022-03-08 01:15:37,033 the monitor loses its patience to 9!.
2022-03-08 01:18:53,289 epoch 43: avg loss=4.795534, avg quantization error=0.014908.
2022-03-08 01:18:53,289 begin to evaluate model.
2022-03-08 01:20:10,542 compute mAP.
2022-03-08 01:20:27,941 val mAP=0.701728.
2022-03-08 01:20:27,942 save the best model, db_codes and db_targets.
2022-03-08 01:20:31,184 finish saving.
2022-03-08 01:23:44,596 epoch 44: avg loss=4.794161, avg quantization error=0.014906.
2022-03-08 01:23:44,596 begin to evaluate model.
2022-03-08 01:25:01,332 compute mAP.
2022-03-08 01:25:19,021 val mAP=0.702110.
2022-03-08 01:25:19,024 save the best model, db_codes and db_targets.
2022-03-08 01:25:21,593 finish saving.
2022-03-08 01:28:35,345 epoch 45: avg loss=4.795650, avg quantization error=0.014904.
2022-03-08 01:28:35,345 begin to evaluate model.
2022-03-08 01:29:52,724 compute mAP.
2022-03-08 01:30:10,302 val mAP=0.702230.
2022-03-08 01:30:10,303 save the best model, db_codes and db_targets.
2022-03-08 01:30:13,480 finish saving.
2022-03-08 01:33:29,121 epoch 46: avg loss=4.793934, avg quantization error=0.014901.
2022-03-08 01:33:29,121 begin to evaluate model.
2022-03-08 01:34:45,956 compute mAP.
2022-03-08 01:35:03,683 val mAP=0.702405.
2022-03-08 01:35:03,683 save the best model, db_codes and db_targets.
2022-03-08 01:35:06,356 finish saving.
2022-03-08 01:38:22,736 epoch 47: avg loss=4.792976, avg quantization error=0.014902.
2022-03-08 01:38:22,736 begin to evaluate model.
2022-03-08 01:39:39,792 compute mAP.
2022-03-08 01:39:57,433 val mAP=0.702284.
2022-03-08 01:39:57,434 the monitor loses its patience to 9!.
2022-03-08 01:43:13,155 epoch 48: avg loss=4.791055, avg quantization error=0.014899.
2022-03-08 01:43:13,155 begin to evaluate model.
2022-03-08 01:44:30,161 compute mAP.
2022-03-08 01:44:47,952 val mAP=0.702391.
2022-03-08 01:44:47,953 the monitor loses its patience to 8!.
2022-03-08 01:48:05,308 epoch 49: avg loss=4.792194, avg quantization error=0.014899.
2022-03-08 01:48:05,309 begin to evaluate model.
2022-03-08 01:49:21,961 compute mAP.
2022-03-08 01:49:39,849 val mAP=0.702362.
2022-03-08 01:49:39,849 the monitor loses its patience to 7!.
2022-03-08 01:49:39,850 free the queue memory.
2022-03-08 01:49:39,850 finish training at epoch 49.
2022-03-08 01:49:39,869 finish training, now load the best model and codes.
2022-03-08 01:49:41,412 begin to test model.
2022-03-08 01:49:41,412 compute mAP.
2022-03-08 01:49:58,929 test mAP=0.702405.
2022-03-08 01:49:58,929 compute PR curve and P@top1000 curve.
2022-03-08 01:50:35,131 finish testing.
2022-03-08 01:50:35,131 finish all procedures.
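The recurring "the monitor loses its patience to N!" messages, together with `monitor_counter=10` in the config, suggest an early-stopping-style monitor: a counter that starts at 10, decrements each time val mAP fails to beat the best seen so far, and resets on improvement (which also triggers the "save the best model" path). The monitor class below is a hypothetical reconstruction of that behavior from the log alone, not the project's actual code.

```python
# Hypothetical reconstruction of the patience monitor implied by the log:
# counter starts at `patience` (monitor_counter=10 in the config),
# drops by one on each non-improving epoch, resets on a new best mAP.
class Monitor:
    def __init__(self, patience=10):
        self.patience = patience
        self.counter = patience
        self.best = float('-inf')

    def update(self, metric):
        """Return True on a new best (save checkpoint), False otherwise."""
        if metric > self.best:
            self.best = metric
            self.counter = self.patience  # improvement resets patience
            return True
        self.counter -= 1  # e.g. "the monitor loses its patience to 9!"
        return False

    @property
    def out_of_patience(self):
        return self.counter <= 0

# First three epochs of this run: 0.5737, 0.5853, 0.5794.
m = Monitor(patience=10)
print(m.update(0.573740))  # True  -> "save the best model"
print(m.update(0.585341))  # True  -> "save the best model"
print(m.update(0.579422))  # False -> counter drops from 10 to 9
print(m.counter)           # 9
```

Note that in this run the counter never reaches 0 (it bottoms out at 7 near epoch 49), so training stops at `epoch_num=50` rather than by early stopping, and the final test mAP (0.702405) is reported from the reloaded best checkpoint (epoch 46).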