forked from PaddlePaddle/Perf
base_bs48_fp32_gpu8.log
50 lines (50 loc) · 7.96 KB
Namespace(adam_epsilon=1e-06, batch_size=48, device='gpu', enable_addto=False, gradient_merge_steps=1, input_dir='./wikicorpus_en_seqlen128', learning_rate=0.0001, logging_steps=10, max_grad_norm=1.0, max_predictions_per_seq=20, max_steps=400, model_name_or_path='bert-base-uncased', model_type='bert', output_dir='./tmp2/', save_steps=20000, scale_loss=32768, seed=42, use_amp=0, use_pure_fp16=False, warmup_steps=10000, weight_decay=0.01)
[2021-12-27 14:52:24,788] [ INFO] - Already cached /root/.paddlenlp/models/bert-base-uncased/bert-base-uncased-vocab.txt
server not ready, wait 3 sec to retry...
not ready endpoints:['127.0.0.1:41833', '127.0.0.1:54649', '127.0.0.1:57778', '127.0.0.1:52631', '127.0.0.1:43320', '127.0.0.1:37113', '127.0.0.1:49821']
server not ready, wait 3 sec to retry...
not ready endpoints:['127.0.0.1:41833', '127.0.0.1:54649', '127.0.0.1:57778', '127.0.0.1:52631', '127.0.0.1:43320', '127.0.0.1:37113', '127.0.0.1:49821']
W1227 14:52:32.485716 33747 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 11.2
W1227 14:52:32.490324 33747 device_context.cc:465] device: 0, cuDNN Version: 8.1.
W1227 14:52:44.768460 33747 build_strategy.cc:110] Currently, fuse_broadcast_ops only works under Reduce mode.
W1227 14:52:44.867215 33747 fuse_all_reduce_op_pass.cc:76] Find all_reduce operators: 206. To make the speed faster, some all_reduce ops are fused during training, after fusion, the number of all_reduce ops is 19.
global step: 10, epoch: 0, batch: 9, loss: 11.210960, avg_reader_cost: 0.06822 sec, avg_batch_cost: 0.60234 sec, avg_samples: 48.00000, ips: 79.68949 sequences/sec
global step: 20, epoch: 0, batch: 19, loss: 11.151236, avg_reader_cost: 0.00010 sec, avg_batch_cost: 0.31608 sec, avg_samples: 48.00000, ips: 151.86009 sequences/sec
global step: 30, epoch: 0, batch: 29, loss: 11.059410, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.31612 sec, avg_samples: 48.00000, ips: 151.84216 sequences/sec
global step: 40, epoch: 0, batch: 39, loss: 10.997466, avg_reader_cost: 0.00014 sec, avg_batch_cost: 0.31703 sec, avg_samples: 48.00000, ips: 151.40415 sequences/sec
global step: 50, epoch: 0, batch: 49, loss: 10.939682, avg_reader_cost: 0.00013 sec, avg_batch_cost: 0.31694 sec, avg_samples: 48.00000, ips: 151.44821 sequences/sec
global step: 60, epoch: 0, batch: 59, loss: 10.870118, avg_reader_cost: 0.00010 sec, avg_batch_cost: 0.31747 sec, avg_samples: 48.00000, ips: 151.19458 sequences/sec
global step: 70, epoch: 0, batch: 69, loss: 10.747655, avg_reader_cost: 0.00012 sec, avg_batch_cost: 0.31746 sec, avg_samples: 48.00000, ips: 151.20208 sequences/sec
global step: 80, epoch: 0, batch: 79, loss: 10.696268, avg_reader_cost: 0.00011 sec, avg_batch_cost: 0.31746 sec, avg_samples: 48.00000, ips: 151.20131 sequences/sec
global step: 90, epoch: 0, batch: 89, loss: 10.512126, avg_reader_cost: 0.00012 sec, avg_batch_cost: 0.31856 sec, avg_samples: 48.00000, ips: 150.67694 sequences/sec
global step: 100, epoch: 0, batch: 99, loss: 10.415959, avg_reader_cost: 0.00011 sec, avg_batch_cost: 0.31777 sec, avg_samples: 48.00000, ips: 151.05107 sequences/sec
global step: 110, epoch: 0, batch: 109, loss: 10.484406, avg_reader_cost: 0.00008 sec, avg_batch_cost: 0.31873 sec, avg_samples: 48.00000, ips: 150.59638 sequences/sec
global step: 120, epoch: 0, batch: 119, loss: 10.272726, avg_reader_cost: 0.00009 sec, avg_batch_cost: 0.31952 sec, avg_samples: 48.00000, ips: 150.22541 sequences/sec
global step: 130, epoch: 0, batch: 129, loss: 10.228614, avg_reader_cost: 0.00007 sec, avg_batch_cost: 0.31989 sec, avg_samples: 48.00000, ips: 150.04950 sequences/sec
global step: 140, epoch: 0, batch: 139, loss: 10.218575, avg_reader_cost: 0.00009 sec, avg_batch_cost: 0.31824 sec, avg_samples: 48.00000, ips: 150.82959 sequences/sec
global step: 150, epoch: 0, batch: 149, loss: 10.037222, avg_reader_cost: 0.00008 sec, avg_batch_cost: 0.31885 sec, avg_samples: 48.00000, ips: 150.54232 sequences/sec
global step: 160, epoch: 0, batch: 159, loss: 10.096970, avg_reader_cost: 0.00008 sec, avg_batch_cost: 0.31927 sec, avg_samples: 48.00000, ips: 150.34439 sequences/sec
global step: 170, epoch: 0, batch: 169, loss: 9.962990, avg_reader_cost: 0.00008 sec, avg_batch_cost: 0.31909 sec, avg_samples: 48.00000, ips: 150.42663 sequences/sec
global step: 180, epoch: 0, batch: 179, loss: 9.996327, avg_reader_cost: 0.00008 sec, avg_batch_cost: 0.31925 sec, avg_samples: 48.00000, ips: 150.35028 sequences/sec
global step: 190, epoch: 0, batch: 189, loss: 10.044672, avg_reader_cost: 0.00010 sec, avg_batch_cost: 0.31913 sec, avg_samples: 48.00000, ips: 150.40825 sequences/sec
global step: 200, epoch: 0, batch: 199, loss: 9.892224, avg_reader_cost: 0.00008 sec, avg_batch_cost: 0.31995 sec, avg_samples: 48.00000, ips: 150.02414 sequences/sec
global step: 210, epoch: 0, batch: 209, loss: 9.748217, avg_reader_cost: 0.00008 sec, avg_batch_cost: 0.31905 sec, avg_samples: 48.00000, ips: 150.44515 sequences/sec
global step: 220, epoch: 0, batch: 219, loss: 9.691060, avg_reader_cost: 0.00009 sec, avg_batch_cost: 0.31963 sec, avg_samples: 48.00000, ips: 150.17358 sequences/sec
global step: 230, epoch: 0, batch: 229, loss: 9.891567, avg_reader_cost: 0.00008 sec, avg_batch_cost: 0.31955 sec, avg_samples: 48.00000, ips: 150.21061 sequences/sec
global step: 240, epoch: 0, batch: 239, loss: 9.648299, avg_reader_cost: 0.00008 sec, avg_batch_cost: 0.32024 sec, avg_samples: 48.00000, ips: 149.88851 sequences/sec
global step: 250, epoch: 0, batch: 249, loss: 9.652188, avg_reader_cost: 0.00007 sec, avg_batch_cost: 0.31957 sec, avg_samples: 48.00000, ips: 150.20325 sequences/sec
global step: 260, epoch: 0, batch: 259, loss: 9.720120, avg_reader_cost: 0.00007 sec, avg_batch_cost: 0.32005 sec, avg_samples: 48.00000, ips: 149.97684 sequences/sec
global step: 270, epoch: 0, batch: 269, loss: 9.692377, avg_reader_cost: 0.00007 sec, avg_batch_cost: 0.31998 sec, avg_samples: 48.00000, ips: 150.01064 sequences/sec
global step: 280, epoch: 0, batch: 279, loss: 9.603109, avg_reader_cost: 0.00008 sec, avg_batch_cost: 0.32058 sec, avg_samples: 48.00000, ips: 149.73054 sequences/sec
global step: 290, epoch: 0, batch: 289, loss: 9.782703, avg_reader_cost: 0.00008 sec, avg_batch_cost: 0.32705 sec, avg_samples: 48.00000, ips: 146.76454 sequences/sec
global step: 300, epoch: 0, batch: 9, loss: 9.469406, avg_reader_cost: 0.00314 sec, avg_batch_cost: 0.32580 sec, avg_samples: 48.00000, ips: 147.33038 sequences/sec
global step: 310, epoch: 0, batch: 19, loss: 9.462468, avg_reader_cost: 0.00008 sec, avg_batch_cost: 0.32043 sec, avg_samples: 48.00000, ips: 149.80068 sequences/sec
global step: 320, epoch: 0, batch: 29, loss: 9.468302, avg_reader_cost: 0.00008 sec, avg_batch_cost: 0.32047 sec, avg_samples: 48.00000, ips: 149.78147 sequences/sec
global step: 330, epoch: 0, batch: 39, loss: 9.595419, avg_reader_cost: 0.00008 sec, avg_batch_cost: 0.32069 sec, avg_samples: 48.00000, ips: 149.67735 sequences/sec
global step: 340, epoch: 0, batch: 49, loss: 9.357909, avg_reader_cost: 0.00009 sec, avg_batch_cost: 0.32088 sec, avg_samples: 48.00000, ips: 149.58884 sequences/sec
global step: 350, epoch: 0, batch: 59, loss: 9.459148, avg_reader_cost: 0.00007 sec, avg_batch_cost: 0.32036 sec, avg_samples: 48.00000, ips: 149.83267 sequences/sec
global step: 360, epoch: 0, batch: 69, loss: 9.581911, avg_reader_cost: 0.00007 sec, avg_batch_cost: 0.32124 sec, avg_samples: 48.00000, ips: 149.42257 sequences/sec
global step: 370, epoch: 0, batch: 79, loss: 9.325497, avg_reader_cost: 0.00008 sec, avg_batch_cost: 0.32073 sec, avg_samples: 48.00000, ips: 149.65755 sequences/sec
global step: 380, epoch: 0, batch: 89, loss: 9.224298, avg_reader_cost: 0.00007 sec, avg_batch_cost: 0.32077 sec, avg_samples: 48.00000, ips: 149.64002 sequences/sec
global step: 390, epoch: 0, batch: 99, loss: 9.297720, avg_reader_cost: 0.00007 sec, avg_batch_cost: 0.32053 sec, avg_samples: 48.00000, ips: 149.74965 sequences/sec
global step: 400, epoch: 0, batch: 109, loss: 9.339700, avg_reader_cost: 0.00007 sec, avg_batch_cost: 0.32109 sec, avg_samples: 48.00000, ips: 149.49221 sequences/sec
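Not part of the log itself, but the throughput figures above can be sanity-checked: the reported `ips` is simply `avg_samples / avg_batch_cost` (sequences processed per second of average step time). A minimal parsing sketch, assuming only the field layout of the progress lines above (the regex and helper name are illustrative, not from the training script):

```python
import re

# One progress line copied from the log above.
line = ("global step: 20, epoch: 0, batch: 19, loss: 11.151236, "
        "avg_reader_cost: 0.00010 sec, avg_batch_cost: 0.31608 sec, "
        "avg_samples: 48.00000, ips: 151.86009 sequences/sec")

def parse_metrics(line):
    """Extract every 'name: number' pair from one progress line."""
    fields = re.findall(r"(\w+): ([\d.]+)", line)
    return {name: float(value) for name, value in fields}

m = parse_metrics(line)

# Throughput = samples per step / average step time.
derived_ips = m["avg_samples"] / m["avg_batch_cost"]
print(f"reported ips = {m['ips']:.2f}, derived = {derived_ips:.2f}")
```

For this line, 48 / 0.31608 ≈ 151.86, matching the reported `ips`; the same check holds for the other steps, and the slow first interval (79.69 seq/s at step 10) is explained by its higher `avg_reader_cost` and warm-up-inflated `avg_batch_cost`.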