
【Hackathon 7th No.28】Enhance paddle.clip functionality #69269

Open · wants to merge 53 commits into base: develop

Conversation


@a162837 a162837 commented Nov 10, 2024

PR Category

Inference

PR Types

Others

Description

【Hackathon 7th No.28】Enhance paddle.clip functionality
PaddlePaddle/docs#6924


paddle-bot bot commented Nov 10, 2024

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@paddle-bot paddle-bot bot added the contributor External developers label Nov 10, 2024
@a162837 a162837 changed the title from Tensor clip to 【Hackathon 7th No.28】Enhance paddle.clip functionality Nov 10, 2024
@a162837 (Author) commented Nov 11, 2024

Why have neither of my two CI runs started? @luotao1

@a162837 (Author) commented Nov 11, 2024

@sunzhongkai588 Hi, could you tell me why none of my CI jobs have run? It has been a day already.

@a162837 a162837 force-pushed the TensorClip branch 2 times, most recently from 0d8a99c to 81b1c78 Compare November 25, 2024 15:33
@a162837 a162837 force-pushed the TensorClip branch 2 times, most recently from 4c06f6d to 9196ed5 Compare December 25, 2024 08:03
@a162837 a162837 force-pushed the TensorClip branch 11 times, most recently from 164be97 to 2cfc88a Compare December 30, 2024 01:56
@a162837 a162837 force-pushed the TensorClip branch 6 times, most recently from cf17e35 to 9f98052 Compare January 1, 2025 08:17
@luotao1 (Contributor) commented Jan 3, 2025

📢: Please finish the locked PR as soon as possible and make sure it is merged by January 10, 2025 (no further extension). PRs not merged by the deadline will not be eligible for the bonus payout.

@@ -3742,10 +3742,26 @@ def log10_(x: Tensor, name: str | None = None) -> Tensor:
return _C_ops.log10_(x)


def is_clip_tensor(value):
Contributor:

This function does not need to be wrapped.

return False


def get_clip_tensor_shape(value1, value2, value3):
Contributor:

This function does not need to be wrapped either.

min = min_ if min is None else min
max = max_ if max is None else max

if is_clip_tensor(min) or is_clip_tensor(max):
Contributor:

if paddle.is_tensor(min) and paddle.is_tensor(max):
    x_bcast, min_bcast, max_bcast = paddle.broadcast_tensors([x, min, max])

Keep the rest of the logic the same as before; there is no need to wrap additional functions here.
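As an illustration of the suggested approach, here is a minimal NumPy sketch (standing in for the Paddle tensor API; `clip_with_tensor_bounds` is a hypothetical helper name) of broadcasting all three operands to a common shape before clipping, mirroring the `paddle.broadcast_tensors([x, min, max])` call above:

```python
import numpy as np

def clip_with_tensor_bounds(x, min_b, max_b):
    # Broadcast all three operands to a common shape, then clip
    # elementwise against the (now same-shaped) bounds.
    x_b, min_b, max_b = np.broadcast_arrays(x, min_b, max_b)
    return np.clip(x_b, min_b, max_b)

x = np.array([[0.0, 5.0, 10.0],
              [1.0, 6.0, 11.0]])
lo = np.array([2.0, 2.0, 2.0])   # shape (3,): broadcast across rows
hi = np.array([[8.0], [9.0]])    # shape (2, 1): broadcast across columns
out = clip_with_tensor_bounds(x, lo, hi)
```

Each row is clipped against its own upper bound while sharing one lower bound, which is exactly the per-element behavior the tensor-valued `min`/`max` enhancement is after.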

inputs=inputs,
outputs={'out': [output]},
)
return output
if in_dynamic_or_pir_mode():
if isinstance(min, Variable):
Contributor:

if paddle.is_tensor(min):

inputs=inputs,
outputs={'out': [output]},
)
return output
if in_dynamic_or_pir_mode():
if isinstance(min, Variable):
min = min.item(0)
Contributor:

if paddle.is_tensor(max):

Lines 3884 and 3885 should no longer be needed, right?

max.stop_gradient = True
return _C_ops.clip_tensor_(x, min, max)
else:
return _C_ops.clip_(x, min, max)
Contributor:

Add the following logic under this else branch:

if paddle.is_tensor(min):
    min = min.item()
if paddle.is_tensor(max):
    max = max.item()
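To illustrate the suggested conversion, here is a small sketch (NumPy standing in for Paddle; `to_scalar` is a hypothetical helper name) of unwrapping a 0-D tensor bound into a Python scalar before calling the scalar clip kernel:

```python
import numpy as np

def to_scalar(bound):
    # A bound passed as a 0-D tensor is unwrapped with .item(),
    # matching the suggested `min = min.item()` / `max = max.item()`
    # conversions; plain Python numbers pass through unchanged.
    if isinstance(bound, np.ndarray):
        return bound.item()
    return bound
```

This keeps the scalar code path (`_C_ops.clip_`) usable even when the caller happens to pass a 0-D tensor instead of a float.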

if isinstance(min, Variable):
min = min.item(0)
if isinstance(max, Variable):
max = max.item(0)
min = fmin if min is None else min
max = fmax if max is None else max

if in_dynamic_mode():
Contributor:

This `if in_dynamic_mode():` can be removed; this API only runs in dynamic graph mode.

min = fmin if min is None else min
max = fmax if max is None else max

if in_dynamic_mode():
return _C_ops.clip_(x, min, max)
if is_clip_tensor(min) or is_clip_tensor(max):
Contributor:

Same as above:

if paddle.is_tensor(min) and paddle.is_tensor(max):
    ....

Also, when broadcasting in this branch, one condition needs to be checked: x.shape == broadcast_shape(broadcast_shape(min.shape, max.shape), x.shape)

This is because only min or max may be broadcast toward x; x must not be broadcast toward min or max.
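The shape constraint above can be sketched in plain Python (hypothetical helper names; `broadcast_shape` follows NumPy/Paddle-style broadcasting rules):

```python
from itertools import zip_longest

def broadcast_shape(s1, s2):
    # Broadcast two shapes, aligned from the trailing dimension;
    # raises if the shapes are incompatible.
    out = []
    for a, b in zip_longest(reversed(s1), reversed(s2), fillvalue=1):
        if a == b or a == 1 or b == 1:
            out.append(max(a, b))
        else:
            raise ValueError(f"incompatible shapes {s1} and {s2}")
    return tuple(reversed(out))

def bounds_broadcastable_to_x(x_shape, min_shape, max_shape):
    # Only min/max may broadcast toward x, never the reverse, so the
    # fully broadcast shape must come out equal to x.shape itself.
    full = broadcast_shape(broadcast_shape(min_shape, max_shape), x_shape)
    return full == tuple(x_shape)
```

The check returns False whenever x itself would have to be expanded toward the bounds, which is exactly the case the reviewer says must be rejected.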

@@ -17,6 +17,7 @@
'instance_norm',
'affine_grid',
'clip',
'clip_tensor',
Contributor:

Why does this need to be added to the allowlist? It is not a compute op, so in principle the numerical error should not be large.
