[PaddlePaddle Hackathon 2] 9. Add logspace API to Paddle #40777

Closed
Changes from 1 commit
309 commits
0133943
test=document_fix (#40861)
JiabinYang Mar 24, 2022
d3a4347
[custom runtime] clear headers (#40845)
ronny1996 Mar 24, 2022
0bcb4f8
smaller the retry_times since random failure in windows is rare (#40857)
betterpig Mar 24, 2022
7fa3a72
[Infrt] upgrade kernel launcher fun generator (#40826)
DannyIsFunny Mar 24, 2022
68c9e3e
[Refactor] refactored eager_gen.py PR #1 (#40815)
jim19930609 Mar 24, 2022
f51a579
[Infrt] add method for automatically scanning pass and kernel info (#…
DannyIsFunny Mar 24, 2022
2e73653
[Phi] Migrate InferShape of multiplex, qr, tril_triu (#40102)
Caozhou1995 Mar 24, 2022
f95f3a6
modify communicator api (#40881)
zhaocaibei123 Mar 24, 2022
a8f8660
Add sparse convertion api and sparse creation api (#40780)
Mar 24, 2022
0443c6f
[Auto Parallel] Gradient merge pass support dist attribute (#40737)
xymyeah Mar 24, 2022
cc8e98c
Fix rnn, wmt16 docs;test=document_fix (#40783)
joey12300 Mar 24, 2022
2e8f988
test=document_fix , fix launch doc (#40848)
kuizhiqing Mar 24, 2022
36ee6dd
Refine events waiter (#40876)
liutiexing Mar 24, 2022
83906bc
Clean api workspace (#40885)
tianshuo78520a Mar 24, 2022
a916424
make vcvars64 and cuda_version can be set in xly pipe (#40870)
betterpig Mar 24, 2022
bff9e28
support dp for class_center_sample and margin_cross_entropy (#39852)
GuoxiaWang Mar 24, 2022
9d8cfc1
Wrap dist api for dygraph mode (#40408)
Mar 24, 2022
305f32d
[MoE]Assign pos op (#40580)
sljlp Mar 24, 2022
9954189
the `defaults` in FullArgSpec may be `None` (#40882)
wadefelix Mar 24, 2022
753964a
Correct MultipleQuantizeSquash (#40717)
wozna Mar 24, 2022
38d1fe3
[phi] Remove usless cmake message (#40884)
Aurelius84 Mar 24, 2022
c12f7d4
[AMP] Support amp for Intermediate_dygraph (#40623)
zhangbo9674 Mar 24, 2022
98244a9
Support intermediate for Sparse API (#40840)
zyfncg Mar 24, 2022
310b7db
fix build_cinn_pass internal var may be control var problem (#40812)
thisjiang Mar 24, 2022
22a5035
[new-exec] enable standalone_executor_test in coverage (#40846)
zhiqiu Mar 24, 2022
92afe14
p_norm transfer to phi kernels (#40819)
zhiboniu Mar 24, 2022
6d3db9c
[Phi] Move batch size like infershape into phi (#40847)
chenwhql Mar 24, 2022
8df9176
[Phi] Move mean op kernel into phi (#40872)
chenwhql Mar 24, 2022
0f5e90a
support get_item where the index is a bool scalar tensor (#40829)
FlyingQianMM Mar 25, 2022
0408701
Scalar support marking data_type in yaml (#40867)
zyfncg Mar 25, 2022
139a30e
modify unit test in bn, stack and split. *test=kunlun (#40880)
Zhangjingyu06 Mar 25, 2022
2b74b73
[NPU] add merged_momentum (#40875)
Aganlengzi Mar 25, 2022
c7b69fd
fix dependency (#40901)
seemingwang Mar 25, 2022
3228fc3
Fix loop index for FillZeroForEmptyGradInputs (#40909)
0x45f Mar 25, 2022
cfadf61
move elementwise_max/min/mod into phi (#40590)
FlyingQianMM Mar 25, 2022
6547833
[infrt] add phi_dt.create_inited_dense_tensor.cpu.f32 kernel. (#40902)
winter-wang Mar 25, 2022
d43e843
[OpTest] Polish optest (#40879)
2742195759 Mar 25, 2022
1db9cd4
fix xpu op test, *test=kunlun (#40862)
tangzhiyi11 Mar 25, 2022
3085d5e
Refactor Dygraph Flags (#40786)
JiabinYang Mar 25, 2022
236a3bc
fix paddle.vision.transforms.Resize en docs (#40719)
Liyulingyue Mar 25, 2022
1c01d1c
change CUDA implementation of dropout OP (#40874)
zhwesky2010 Mar 25, 2022
41f813e
[ROCm] fix compile error on DTK21.10, test=develop (#40893)
qili93 Mar 25, 2022
4ab8255
[Phi] Move part sum op kernel (#40873)
chenwhql Mar 25, 2022
609077e
move mul op infershape (#40917)
chenwhql Mar 25, 2022
608a5f5
add maximum limit for grid of reduce, elementwise, gather and scatter…
FlyingQianMM Mar 25, 2022
9ffedcf
support multi_dims for tril_triu, *test=kunlun (#40712)
helen88 Mar 25, 2022
aeae81a
Thread data registry (#40912)
liutiexing Mar 25, 2022
56cd340
[Phi] Migrate Adam and AdamW into Phi (#40351)
Aurelius84 Mar 25, 2022
b79c6a9
add cast_grad phi kernel (#40798)
zhangbo9674 Mar 25, 2022
961ef4d
test=document_fix (#40919)
LiuChiachi Mar 25, 2022
fd0c0e3
Add Coverage build size check (#40749)
tianshuo78520a Mar 25, 2022
c33b4f9
[Phi] Migrate strided_slice into Phi (#40708)
Aurelius84 Mar 25, 2022
be5918e
move activation (#40913)
YuanRisheng Mar 25, 2022
5f6038f
infrt update phi gpu register. (#40866)
jiweibo Mar 25, 2022
f027b2a
[Refactor] refactored eager_gen.py PR #2 (#40907)
jim19930609 Mar 25, 2022
02146ba
[Auto parallel] align infer accuracy for ernie generator mode (#40077)
JZ-LIANG Mar 25, 2022
09e5b00
Fix in dygraph mode doc (#40942)
JiabinYang Mar 25, 2022
54632b5
Fix param@grad type error for amp in run_program (#40938)
0x45f Mar 25, 2022
01b688c
Implement a common AlgorithmsCache for kernel auto-tune (#40793)
zhangting2020 Mar 25, 2022
9261dff
[MLU]add allreduce max/prod/min mlu kernel (#40792)
kangna-qi Mar 25, 2022
9ab3c76
fix sync_bn error in fp16 amp-o2 (#40943)
zhangbo9674 Mar 25, 2022
c006a60
fix lars optitmizer bug (#40892)
firestonelib Mar 25, 2022
afe2fdd
update eager code gen (#40924)
phlrain Mar 25, 2022
b94cf84
[Phi] Move mean infershape into phi (#40922)
chenwhql Mar 26, 2022
0ee76f9
add double grad op example (#40963)
chenwhql Mar 26, 2022
7e05680
Move the redundant numpy() (#40931)
Zjq9409 Mar 26, 2022
3a6201a
[infrt] add resnet50 unit test. test=develop (#40950)
winter-wang Mar 26, 2022
ea9684f
Optmize the CPU -> GPU memcpy and avoid explit sync in some operators…
Xreki Mar 26, 2022
3b89542
[AMP] add amp for final_status_dygraph (#40945)
zhangbo9674 Mar 26, 2022
0695e1a
Add StringTensor (#39830)
joey12300 Mar 26, 2022
52f07ab
Fix amp with optiontional api bug (#40980)
zhangbo9674 Mar 27, 2022
2559167
fix inplace bug in final_state eager_gen (#40979)
pangyoki Mar 27, 2022
f6b6b05
[NPU] fix npu cast ut (#40982)
Aganlengzi Mar 27, 2022
0ad2e19
Make StreamSafeCUDAAllocator compatible with NaiveBestFit strategy (#…
From00 Mar 27, 2022
b8236b7
Move slice to phi (#40736)
phlrain Mar 27, 2022
6a94adb
add check of data type and support mutable_data with compiled infos (…
CtfGo Mar 27, 2022
1c6dcfd
fix reshape+transpose+matmul (#40948)
sfraczek Mar 27, 2022
de8962b
add data_type support for phi embedding op. (#40964)
limin2021 Mar 27, 2022
afa0e82
[new-exec] fit for mkldnn and inplace op (#40955)
zhiqiu Mar 27, 2022
37f914c
[ Optest ] refactor optest check_output_with_place logic (#40928)
2742195759 Mar 27, 2022
b64f611
[Infrt]add enum from nvinfer namespace to trt lower pattern (#40799)
shangzhizhou Mar 28, 2022
3f4099e
Bug fix for intermediate support in Yaml (#40935)
jim19930609 Mar 28, 2022
0d0d76e
Fix bug while specifying target grad in high order gradient (#40940)
Aurelius84 Mar 28, 2022
c03186f
Refine test_lac.py for eager mode (#40951)
0x45f Mar 28, 2022
287cbde
[Dy2Stat] Fix ForLoop Transformation with single return (#40683)
Aurelius84 Mar 28, 2022
27996fd
[Phi] Move backward infershape of Reshape Op (#40914)
YuanRisheng Mar 28, 2022
8fe8039
Launch fix port (#40936)
kuizhiqing Mar 28, 2022
56dc8c7
Enabled eager_mode for complex unit tests, except for test_complex_op…
jim19930609 Mar 28, 2022
3d5a27f
add adaround post-quant method (#38460)
yghstill Mar 28, 2022
324b6b7
[Phi] Move assign value op kernel into phi (#40967)
chenwhql Mar 28, 2022
34f0704
update docs dtype(core.VarDesc.VarType)test=document_fix (#40947)
Ligoml Mar 28, 2022
5c5a2a8
[Eager] Support SelectedRows in eager mode (#40858)
veyron95 Mar 28, 2022
023d877
Update ResNet test cases (#40953)
veyron95 Mar 28, 2022
e73857a
Add string api python c code gen (#40992)
joey12300 Mar 28, 2022
cb18376
[Phi] Move warpctc OP to phi (#40023)
0x45f Mar 28, 2022
822a2d1
[Phi] Fix assign kernel bug (#40927)
chenwhql Mar 28, 2022
ea5b2f2
add fused_seqpool_cvm op (#37928)
danleifeng Mar 28, 2022
2b53c68
Fix output dtype of elementwise_div (#40890)
linjieccc Mar 28, 2022
cadc4e6
[HeterPS] So Parser (#40750)
Thunderbrook Mar 28, 2022
0c024cb
[Phi]Remove in_dtype, out_dtype in redcue grad (#40906)
MingMingShangTian Mar 28, 2022
86554d9
[DoubleGrad PR #1] Decoupled code generation logics for Dygraph Forwa…
jim19930609 Mar 28, 2022
d101334
[Auto Parallel] Update reshard (#40865)
Caozhou1995 Mar 28, 2022
1431305
fix kernel backend select bug (#41002)
zyfncg Mar 28, 2022
b6661d3
[phi] move infershape: flip/maxout/take_along_axis/put_along_axis (#4…
m3ngyang Mar 28, 2022
30b9cfb
Adjust code format
BrilliantYuKaimin Mar 28, 2022
c049a6b
Add window computation in stft op. (#40987)
KPatr1ck Mar 28, 2022
00448f0
Update CI-Coverage check build size words (#41000)
tianshuo78520a Mar 28, 2022
77a455c
Fix profiler package bug (#40888)
rainyfly Mar 28, 2022
5c5a366
[new-exec] update the dependency of op that has inplace_back_map (#4…
zhiqiu Mar 28, 2022
b99c1d0
[Auto parallel] Mixed Precision FP16 Pass (#40615)
JZ-LIANG Mar 28, 2022
29d2e94
make adam_w import _C_ops from global (#40960)
JiabinYang Mar 28, 2022
bf93050
[infrt] move graph op from pd dialect to infrt dialect. (#41003)
winter-wang Mar 28, 2022
62af590
[Dygraph] Add unittests for DataParallel in eager mode (#40709)
haohongxiang Mar 28, 2022
630f5b8
delete commonsparsetable and communicator from gpups (#40973)
esythan Mar 28, 2022
ae4f104
Update test_logspace.py
BrilliantYuKaimin Mar 28, 2022
e91292c
reduce graph-engine warnings (#41015)
seemingwang Mar 28, 2022
e77a947
Move some activation to phi (#40727)
phlrain Mar 28, 2022
ca87195
Move meshgrid to phi (#40994)
phlrain Mar 28, 2022
93a2f56
predictor supports phi, test=develop (#40856)
Shixiaowei02 Mar 28, 2022
3983c72
[DoubleGrad PR #2] Adjusted logics of GenerateNodeCreationCodes and G…
jim19930609 Mar 29, 2022
5de41ef
support env variable control flags (#41013)
JiabinYang Mar 29, 2022
55f9b71
[Eager]Switch new Eager mode (#40990)
Aurelius84 Mar 29, 2022
5728dff
fix assign typo (#41005)
chenwhql Mar 29, 2022
fe8acb6
Determine execution sequence of random OPs in new executor (#41012)
From00 Mar 29, 2022
3b381aa
Use _C_ops.yolov3_loss in eager mode for test_yolov3.py (#40831)
0x45f Mar 29, 2022
649948a
softmax_with_cross_entropy support fp16 on xpu, test=kunlun (#40869)
zhangyk0314 Mar 29, 2022
d1c1d73
[MLU]add reduce op mlu kernel (#41028)
kangna-qi Mar 29, 2022
4d198ac
pool2d support fp16 on xpu and update pool2d unittest, test=kunlun (#…
zhangyk0314 Mar 29, 2022
bea725b
fix lrn bug in export model
huangjun12 Mar 18, 2022
869287f
Add Identity module name in __init__ (#39615)
shiyutang Mar 29, 2022
5976536
add rewrite pattern form paddle mlir to trt mlir (#41011)
weishengying Mar 29, 2022
b532315
[Phi] Move elementwise_floordiv and elementwise_pow to phi (#40993)
wuyefeilin Mar 29, 2022
9c0eaad
[Phi] trans logsumexp op (#40790)
xingjing1 Mar 29, 2022
05f3d48
Revert "Move some activation to phi (#40727)" (#41056)
tianshuo78520a Mar 29, 2022
733d816
Fix test_reinforcement_learning.py for eager run_program OP (#41018)
0x45f Mar 29, 2022
63471c8
refine AsyncWorkQueue (#40977)
liutiexing Mar 29, 2022
9ace4ea
Para (#41067)
lelelelelez Mar 29, 2022
3a6f113
Revert "[Phi] Move elementwise_floordiv and elementwise_pow to phi (#…
tianshuo78520a Mar 29, 2022
054fc99
Revert "[Phi] trans logsumexp op (#40790)" (#41068)
tianshuo78520a Mar 29, 2022
c544a18
Add Sparse op sparse_relu (#40959)
Mar 29, 2022
f3022df
add elementwise sub and elementwise div in tensorrt op teller (#40806)
wangxinxin08 Mar 29, 2022
aeade53
[MoE] Moe apis (#40895)
sljlp Mar 29, 2022
9bb3744
[Infrt] delete custom_pdop.td and move op to infrt dialect. (#41021)
winter-wang Mar 29, 2022
e04493d
[Yaml] Refine yaml as order test=document_fix (#41098)
Aurelius84 Mar 29, 2022
35b96d4
Update of oneDNN to 2.5 (#39426)
jczaja Mar 29, 2022
23c3d96
[Phi] Unify kernel build targets (#41091)
chenwhql Mar 29, 2022
cc52501
[Eager]Add sort-simple-yaml for automatically sort api|backward.yaml …
Aurelius84 Mar 29, 2022
1840349
[Infrt] add skip method for inferShape codegen (#41014)
DannyIsFunny Mar 29, 2022
157c1a2
[Eager] Pylayer (#39989)
wanghuancoder Mar 30, 2022
17af293
Fix argsort cpu kernel when with input of NaN (#41070)
wawltor Mar 30, 2022
040d338
fix bug that some op has no op_role attr (#41040)
zhiqiu Mar 30, 2022
83efeea
Add timer tool to Profiler (#40386)
zhangting2020 Mar 30, 2022
775ddb5
fix double grad var judging (#41072)
chenwhql Mar 30, 2022
495ca4a
support view strategy in dygraph eager_final state (#40891)
pangyoki Mar 30, 2022
532eba9
add rewrite pattern form paddle mlir to trt mlir (#41087)
weishengying Mar 30, 2022
60c4c9c
[Infrt] add infer shape cache for kernel. (#41104)
winter-wang Mar 30, 2022
2089b48
change to new api in ssync mode (#41022)
yaoxuefeng6 Mar 30, 2022
9219495
[Phi]fix pad3d infermeta bug (#41020)
MingMingShangTian Mar 30, 2022
97cd0f5
Refactor code auto-gene for no_need_buffer (#41025)
zyfncg Mar 30, 2022
9fcb6a1
Enabled final state matmul at Python API level (#41089)
jim19930609 Mar 30, 2022
7170c68
suppor inplace in tensor_method_setitem (#40915)
pangyoki Mar 30, 2022
b1ee9d5
fix (#41083)
zhaocaibei123 Mar 30, 2022
f12b526
Optimize the onnxruntime code (#41044)
heliqi Mar 30, 2022
d951f3a
swish and pow op for xpu test=kunlun (#40654)
houj04 Mar 30, 2022
45078d9
Optimize the perf of top_k when k is too large (#40941)
ZzSean Mar 30, 2022
04325d2
Optest refactor (#40998)
2742195759 Mar 30, 2022
13f1641
move elementwise_mul selected rows input (#41042)
YuanRisheng Mar 30, 2022
4d30022
[Eager] dlpack (#40811)
wanghuancoder Mar 30, 2022
922e076
[Eager] Fix legacy always make sense (#41048)
Aurelius84 Mar 30, 2022
c761b48
Apply TransposeFolding & GemmRewriter passes. (#41084)
wzzju Mar 30, 2022
4e86dff
add bilinear interpolate v2 to xpu list and unitteset, *test=kunlun (…
ykkk2333 Mar 30, 2022
5c1631f
Add -rf in xpu_kp.cmake when cp .kps to .xpu (#41059)
AnnaTrainingG Mar 30, 2022
95265d5
[Yaml] Fix topk yaml compilation problem on Windows (#41082)
Aurelius84 Mar 30, 2022
1042f42
remove set_value numpy (#41017)
Zjq9409 Mar 30, 2022
a5bfa79
Switch some dy2st UT to eager mode (#41052)
0x45f Mar 30, 2022
cb8afc2
add _reset_grad_inplace_version (#41101)
pangyoki Mar 30, 2022
489a64e
fix cross_entropy when run static graph mode of mlu and npu (#40621)
qipengh Mar 30, 2022
abd2df4
[DoubleGrad PR #3] Supported higher-order GradNode generation (#41051)
jim19930609 Mar 30, 2022
91bb52c
Revert "Revert "Move some activation to phi (#40727)" (#41056)" (#41095)
phlrain Mar 30, 2022
ee8eeb4
Revert "Revert "[Phi] trans logsumexp op (#40790)" (#41068)" (#41109)
chenwhql Mar 30, 2022
e494b73
fix reshard bug (#41106)
Caozhou1995 Mar 30, 2022
3d39f5c
Fix unsqueeze op get wrong output shape in compile time infer shape. …
2742195759 Mar 30, 2022
afe02e9
Add new APIs for GPU memory monitoring (max_memory_allocated, max_mem…
From00 Mar 30, 2022
eef4677
Revert "Revert "[Phi] Move elementwise_floordiv and elementwise_pow t…
chenwhql Mar 30, 2022
8f7c02f
[Op] Fix uncontrolled randomness of index_select op (#41078)
haohongxiang Mar 30, 2022
aac7879
[MoE] Moe apis (#41092)
sljlp Mar 30, 2022
7d1bb6d
disable check of delete_ut;test=document_fix;test=windows_ci_inferenc…
betterpig Mar 30, 2022
5f7d129
dsiable scatter case in test_inplace, test=document_fix (#41152)
pangyoki Mar 30, 2022
a0e961c
delete ps env (#41079)
ziyoujiyi Mar 30, 2022
59c4fda
[AutoParallel] fix converter when sliced_shape is 1 (#41103)
zhaoyinglia Mar 30, 2022
66cf8b0
[Phi] Move Rnn Op from fluid to phi (#41007)
zyfncg Mar 30, 2022
4b61918
Fix test_jit_save_load (#41114)
0x45f Mar 30, 2022
4d6a3b9
Fix bug for UT test_calc_gradient (#41130)
From00 Mar 30, 2022
d006c7f
py36 Import error bug fix (#41135)
ziyoujiyi Mar 30, 2022
ec510bf
[Infrt] add result check for some infrt op. (#41167)
winter-wang Mar 31, 2022
92faeed
Pg heter cloud (#40911)
Mar 31, 2022
a09058b
move inplace_version_counter_ location (#41146)
Aganlengzi Mar 31, 2022
11d1a51
Support inplace strategy for pylayer (#41043)
pangyoki Mar 31, 2022
56493c9
fix eager_gen node bug (#41165)
pangyoki Mar 31, 2022
2f1c1ae
support view strategy in eager_fluid state (#40830)
pangyoki Mar 31, 2022
0d5c27b
fix adam is_sparse bug in final state dygraph (#41125)
zhangbo9674 Mar 31, 2022
4b9e748
remove shape check (#41143)
b3602sss Mar 31, 2022
bdef57c
add weight unfold pass and handle trt fc op (#41088)
jiweibo Mar 31, 2022
23a69bc
update elementwise unittest style, *test=kunlun (#40779)
ykkk2333 Mar 31, 2022
dea2454
Restrict compilation conditions of optimized topk kernel (#41153)
ZzSean Mar 31, 2022
7c5dca9
add_autotune_kernel_tool (#40658)
JamesLim-sy Mar 31, 2022
ac5548a
[KP] fix bug in phi kp (#41069)
Liu-xiandong Mar 31, 2022
b9da48d
Opt the compilation of sparse kernel (#41086)
Mar 31, 2022
6744754
Add time range duration display (#41029)
rainyfly Mar 31, 2022
7dfd384
Implement AutotuneCache class for Kernel AutoTune (#41169)
zhangting2020 Mar 31, 2022
6735a37
Add probability distribution transformation APIs (#40536)
cxxly Mar 31, 2022
4e3c733
Fix operator summary table (#41157)
rainyfly Mar 31, 2022
3b00dc9
add depend when doing fuse_all_optimizer on program (#41178)
zhiqiu Mar 31, 2022
47383dc
fix load bug and add distributed strategy from pslib (#40883)
esythan Mar 31, 2022
fb93bd5
[phi] move yolov3_loss to phi (#40944)
wuyefeilin Mar 31, 2022
1faefc9
[Phi] Fix kps compile failed (#41129)
chenwhql Mar 31, 2022
7c555f4
Fix test_run_program_op.py (#41141)
0x45f Mar 31, 2022
4974fdf
[FleetExecutor] Add source interceptor and test (#41122)
LiYuRio Mar 31, 2022
e7928a0
[New API]: miminize_bfgs and miminize_lbfgs (#40710)
betterpig Mar 31, 2022
7ef6920
add flatten2,reshape2,squueze2_trt_fuse_pass test cast (#41031)
heliqi Mar 31, 2022
a6bf221
Maintain old profiler (#41132)
rainyfly Mar 31, 2022
dc0702f
Using DistConfig in Paddle Inference (#41128)
TeslaZhao Mar 31, 2022
02cf676
[new-exec] fit mkldnn op (#41058)
zhiqiu Mar 31, 2022
08c3edb
add multiclass nms3 trt converter (#41181)
wangxinxin08 Mar 31, 2022
033b274
fix python c bug for eager tensor (#41158)
zhangbo9674 Mar 31, 2022
eac23db
fix some bug, test=develop (#41144)
wanghuancoder Mar 31, 2022
a8be9b6
Enhance eigh, eigvalsh unit tests (#40699)
zlsh80826 Mar 31, 2022
a54ec5a
Fix `parent_block.var(name)` error in static mode (#41162)
0x45f Mar 31, 2022
1055c6b
reduce gpu example samplecode (#41189)
tianshuo78520a Mar 31, 2022
2d69abd
[Yaml] Migrate sqrt/square/reciprocal yaml (#41164)
Aurelius84 Mar 31, 2022
e559fe4
[Phi] Rename ScalarArray to IntArray (#40975)
zyfncg Mar 31, 2022
74894cd
fix conflict (#40851)
csy0225 Mar 31, 2022
2003610
Switch some dy2st UT to eager (#41175)
0x45f Mar 31, 2022
608a749
add CUDA_TOOLKIT_ROOT_DIR option in cmake command (#41105)
betterpig Mar 31, 2022
3a7761a
remove comment yamls, test=document_fix (#41221)
chenwhql Mar 31, 2022
2f41f38
heter & multi-cloud brpc communication (#40965)
ziyoujiyi Mar 31, 2022
9830329
Add basic yaml backward (#40751)
phlrain Apr 1, 2022
9b6a02d
[Phi] Add shape and strided_slice yaml & Adapt eager mode (#41131)
chenwhql Apr 1, 2022
e6a19ae
add framework._non_static_mode temporarily for hackson; test=document…
zhiboniu Apr 1, 2022
087ab15
Add shape inference for logspace
BrilliantYuKaimin Mar 21, 2022
41a90c0
Add the logspace kernel function and its implementation
BrilliantYuKaimin Mar 21, 2022
21c3021
Add the logspace operator description
BrilliantYuKaimin Mar 21, 2022
819d9ab
Add the logspace API in Python
BrilliantYuKaimin Mar 21, 2022
15f5b0e
Add unit tests for logspace
BrilliantYuKaimin Mar 21, 2022
d973175
Adjust code format
BrilliantYuKaimin Mar 22, 2022
6b2c763
Update logspace_kernel.cu
BrilliantYuKaimin Mar 22, 2022
edfa16b
Adjust code format
BrilliantYuKaimin Mar 28, 2022
a80879a
Update test_logspace.py
BrilliantYuKaimin Mar 28, 2022
5cd2b99
Update tensor.py
BrilliantYuKaimin Apr 1, 2022
61043a0
Merge branch 'logspace' of https://github.com/BrilliantYuKaimin/Paddl…
BrilliantYuKaimin Apr 1, 2022
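
The logspace commits above (shape inference, kernel, operator description, Python API, and unit tests) add a logspace API to Paddle. As a rough, non-authoritative sketch of what such an API typically computes, namely num values spaced evenly on a log scale, base ** linspace(start, stop, num), here is a minimal NumPy reference; the actual Paddle signature, defaults, and dtype handling are not shown in this commit list and are assumed:

    import numpy as np

    def logspace_reference(start, stop, num, base=10.0, dtype="float32"):
        # Reference semantics only: `num` values evenly spaced on a log scale,
        # i.e. base ** linspace(start, stop, num). The API added by this PR may
        # differ in signature, defaults, and dtype handling.
        exponents = np.linspace(start, stop, num)
        return np.power(base, exponents).astype(dtype)

    # 5 values from 10**0 to 10**2: [1.0, ~3.16, 10.0, ~31.6, 100.0]
    print(logspace_reference(0, 2, 5))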
test=document_fix , fix launch doc (#40848)
* test=document_fix , fix launch doc

* test=document_fix , fix typo
kuizhiqing authored Mar 24, 2022

commit 2e8f9882f17f2ca323bacc841d72ce83385a20bf
12 changes: 7 additions & 5 deletions python/paddle/distributed/launch/main.py
@@ -40,9 +40,9 @@ def launch():
- ``--rank``: The rank of the node, can be auto assigned by master. Default ``--rank=-1``.
- ``--log_level``: The log levl to set for logging.setLevel. Default ``--log_level=INFO``.
- ``--log_level``: The log level to set for logging.setLevel which can be CRITICAL/ERROR/WARNING/INFO/DEBUG/NOTSET, case insensitive. The rank 0 log will not print in the terminal by default, while you can enable it by adding --log_level=debug. Default ``--log_level=INFO``.
- ``--nnodes``: The number of nodes for a distributed job, it can be a range in elastic mode, e.g., ``--nnnodes=2:3``. Default ``--nnodes=1``.
- ``--nnodes``: The number of nodes for a distributed job, it can be a range in elastic mode, e.g., ``--nnodes=2:3``. Default ``--nnodes=1``.
- ``--nproc_per_node``: The number of processes to launch on a node. In gpu training, it should be less or equal to the gpus number of you system. e.g., ``--nproc_per_node=8``
@@ -93,9 +93,11 @@ def launch():
Returns:
``None``
- ``None``
Examples 0 (master, ip/port auto detection):
.. code-block:: bash
:name: code-block-example-bash0
# For training on multi node, run the following command in one of the nodes
@@ -171,7 +173,7 @@ def launch():
.. code-block:: bash
:name: code-block-example-bash5
# To simulate distributed environment using single node, e.g., 2 servers and 4 workers, each worker use single gpu.
# To simulate distributed environment using single node, e.g., 2 servers and 4 workers, each worker use single gpu.
export CUDA_VISIBLE_DEVICES=0,1,2,3
python -m paddle.distributed.launch --server_num=2 --worker_num=4 train.py --lr=0.01
@@ -226,7 +228,7 @@ def launch():
python -m paddle.distributed.launch --master etcd://10.0.0.1:2379 --nnodes 2:4 train.py
# once the number of nodes changes between 2:4 during training, the strategy holds
"""

# initialize the context to run
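
The launch flags documented in this diff (--nnodes, --nproc_per_node, --log_level, --master) are normally used from the command line, as the docstring's bash examples above show. The snippet below is only a hedged Python sketch of an equivalent invocation; train.py, the flag values, and the etcd address are illustrative placeholders, not part of this commit:

    import subprocess
    import sys

    # Illustrative only: an elastic job on 2 to 4 nodes with 8 processes per
    # node, printing rank-0 logs to the terminal via --log_level=debug, as
    # described in the updated docstring above.
    cmd = [
        sys.executable, "-m", "paddle.distributed.launch",
        "--master", "etcd://10.0.0.1:2379",  # placeholder master/etcd address
        "--nnodes", "2:4",                   # elastic range of node counts
        "--nproc_per_node", "8",
        "--log_level", "debug",
        "train.py", "--lr=0.01",             # hypothetical training script
    ]
    subprocess.run(cmd, check=True)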