
Paddle 2.0.0-rc0 model inference is much slower than Torch #28774

Closed · rical730 opened this issue Nov 20, 2020 · 13 comments

@rical730 commented Nov 20, 2020

Environment information


Paddle version: 2.0.0-rc0
Paddle With CUDA: False
OS: macOs 10.15.3
Python version: 3.7.7
CUDA version: None
cuDNN version: None
Nvidia driver version: None


Paddle test (version 2.0.0-rc0)

import paddle
import paddle.nn as nn
import paddle.nn.functional as F
import numpy as np
from tqdm import tqdm

class LinearNet(nn.Layer):
    def __init__(self):
        super(LinearNet, self).__init__()
        self.fc1 = nn.Linear(4, 128)
        self.fc2 = nn.Linear(128, 2)

    def forward(self, obs):
        h1 = F.relu(self.fc1(obs))
        out = F.relu(self.fc2(h1))
        return out

mymodel = LinearNet()
data_np = np.random.rand(32, 4)
data_in = paddle.to_tensor(data_np.astype(np.float32))
for _ in tqdm(range(100000)):
    out = mymodel(data_in)

Torch test (version 1.4.0)

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from tqdm import tqdm
torch.set_num_threads(1)

class LinearNet(nn.Module):
    def __init__(self):
        super(LinearNet, self).__init__()
        self.fc1 = nn.Linear(4, 128)
        self.fc2 = nn.Linear(128, 2)

    def forward(self, obs):
        h1 = F.relu(self.fc1(obs))
        out = F.relu(self.fc2(h1))
        return out

mymodel = LinearNet()
data_np = np.random.rand(32, 4)
data_in = torch.tensor(data_np, dtype=torch.float)
for _ in tqdm(range(100000)):
    out = mymodel(data_in)

Benchmark result: Paddle is roughly 3x slower.
Paddle 2.0.0-rc0: 3131.78 it/s
Torch: 11345.81 it/s

Please improve the speed; otherwise it will be hard to match results when reproducing many algorithms later.

@paddle-bot-old

Hi! We've received your issue; please be patient while we respond. We will arrange for technicians to answer your questions as soon as possible. Please make sure you have provided a clear problem description, reproduction code, environment & version, and error messages. You may also check the API documentation, FAQ, historical issues, and the AI community for an answer. Have a nice day!

@haozech (Contributor) commented Nov 20, 2020

Hello, thanks for the report. Were both numbers measured on CPU in the same environment?

@rical730 (Author)

Yes, both results were measured on the same machine, on CPU, with torch and paddle installed in two separate Python environments.

@haozech (Contributor) commented Nov 20, 2020

> Yes, both results were measured on the same machine, on CPU, with torch and paddle installed in two separate Python environments.

Got it, feedback received. We will schedule performance optimization as soon as possible, thanks!

@rical730 (Author)

OK, thanks for the effort!

@chenwhql (Contributor)

For inference, does it help to set the model to eval mode before running?

mymodel = LinearNet()
mymodel.eval()
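
For reference, a minimal self-contained sketch of that suggestion applied to the original benchmark; the paddle.no_grad() context is an additional assumption beyond the two lines above (it skips autograd bookkeeping, which pure inference does not need):

import numpy as np
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
from tqdm import tqdm

class LinearNet(nn.Layer):
    # same toy network as in the report
    def __init__(self):
        super(LinearNet, self).__init__()
        self.fc1 = nn.Linear(4, 128)
        self.fc2 = nn.Linear(128, 2)

    def forward(self, obs):
        return F.relu(self.fc2(F.relu(self.fc1(obs))))

mymodel = LinearNet()
mymodel.eval()  # disable train-only behavior before timing inference
data_in = paddle.to_tensor(np.random.rand(32, 4).astype(np.float32))

with paddle.no_grad():  # assumed optional tweak: skip gradient tracking during inference
    for _ in tqdm(range(100000)):
        out = mymodel(data_in)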

@rical730 (Author)

It does help a bit: 4937.93 it/s, but the gap to torch is still large.

@jiweibo (Contributor) commented Nov 27, 2020

For Paddle inference deployment, you first need to convert the dynamic graph to a static graph; see the documentation:
https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/guides/04_dygraph_to_static/index_cn.html

Once you have the static-graph deployment model, refer to the deployment API:
https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/guides/05_inference_deployment/inference/native_infer.html

The dynamic-graph forward pass is indeed slower than torch during training; that will need to be optimized later.
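
For reference, a minimal sketch of the dygraph-to-static export step described in the linked guide, using the toy LinearNet from this issue; the output path './linear_net' is arbitrary and the exact flow may differ slightly between Paddle versions:

import paddle
import paddle.nn as nn
import paddle.nn.functional as F
from paddle.static import InputSpec

class LinearNet(nn.Layer):
    def __init__(self):
        super(LinearNet, self).__init__()
        self.fc1 = nn.Linear(4, 128)
        self.fc2 = nn.Linear(128, 2)

    # Convert the dygraph forward pass to a static graph;
    # the batch dimension is left dynamic (None).
    @paddle.jit.to_static(input_spec=[InputSpec(shape=[None, 4], dtype='float32')])
    def forward(self, obs):
        return F.relu(self.fc2(F.relu(self.fc1(obs))))

mymodel = LinearNet()
mymodel.eval()

# Writes linear_net.pdmodel / linear_net.pdiparams, which can then be
# loaded with the native inference API from the second link above.
paddle.jit.save(mymodel, './linear_net')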

@rical730 (Author)

The model's forward pass is also needed during training, so slow forward inference greatly reduces training efficiency.

@chenwhql (Contributor)

Paddle's dynamic graph is built on top of the original static-graph base framework, so its framework-scheduling overhead is heavier than torch's. The gap looks this pronounced in your case because the computation itself is tiny, so scheduling time accounts for a noticeably larger share of the total.

We will optimize this gradually; it will take some time. Thanks for the feedback.

At the moment our internal dynamic-graph testing mostly covers widely used models (e.g. ResNet, Transformer) compared on GPU. In those scenarios computation dominates the overall time, so framework scheduling has much less impact and the total cost is roughly on par.

@rical730 (Author)

Paddle's dynamic graph is faster than Torch for inference on large networks, but much slower on small networks. The networks commonly used in reinforcement learning are not large, so please do optimize the Paddle framework.

Torch test

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from tqdm import tqdm
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

class LinearNet(nn.Module):
    def __init__(self):
        super(LinearNet, self).__init__()
        self.fc1 = nn.Linear(4, 400)
        self.fc2 = nn.Linear(400, 300)
        self.fc3 = nn.Linear(300, 2)

    def forward(self, obs):
        h1 = F.relu(self.fc1(obs))
        h2 = F.relu(self.fc2(h1))
        out = F.relu(self.fc3(h2))
        return out

mymodel = LinearNet().to(device)
data_np = np.random.rand(32, 4)
data_in = torch.tensor(data_np, dtype=torch.float).to(device)
for _ in tqdm(range(100000)):
    out = mymodel(data_in)

Paddle test

import paddle
import paddle.nn as nn
import paddle.nn.functional as F
import numpy as np
from tqdm import tqdm

class LinearNet(nn.Layer):
    def __init__(self):
        super(LinearNet, self).__init__()
        self.fc1 = nn.Linear(4, 400)
        self.fc2 = nn.Linear(400, 300)
        self.fc3 = nn.Linear(300, 2)

    def forward(self, obs):
        h1 = F.relu(self.fc1(obs))
        h2 = F.relu(self.fc2(h1))
        out = F.relu(self.fc3(h2))
        return out

mymodel = LinearNet()
data_np = np.random.rand(32, 4)
data_in = paddle.to_tensor(data_np.astype(np.float32))
for _ in tqdm(range(100000)):
    out = mymodel(data_in)

The network structure in this case is very common. Both tests were run on GPU; results:
Torch inference speed: 6733.66 it/s
Paddle inference speed: 3274.84 it/s

Torch is about twice as fast as Paddle.

@JiabinYang (Contributor)

This issue has been fixed on the Paddle develop branch.

@paddle-bot added the status/close (closed) label Sep 5, 2022
@JiabinYang reopened this Sep 5, 2022
@paddle-bot added the status/reopen (reopened) label and removed the status/close (closed) label Sep 5, 2022
@rical730 (Author) commented Sep 6, 2022

Great! Looking forward to the release ❤️

@paddle-bot added the status/close (closed) label and removed the status/reopen (reopened) label Jan 11, 2023