【Hackathon No.29】add RFC for PixelUnshuffle #44
Conversation
The proposal is complete overall, but the implementation details need to be described step by step. Please revise the proposal before submitting code. For a revision example, see: https://github.com/PaddlePaddle/community/blob/master/rfcs/APIs/20200301_api_design_for_quantile.md
## PaddlePaddle

PaddlePaddle does not currently provide a `PixelUnshuffle` API directly, but the operation can be composed from existing APIs:

```python
class PixelUnshuffle(paddle.nn.Layer):
    def __init__(self, downscale_factor):
        super(PixelUnshuffle, self).__init__()
        self.factor = downscale_factor

    def forward(self, x):
        n, c, h, w = x.shape
        x = paddle.reshape(x, [n, c, h // self.factor, self.factor, w // self.factor, self.factor])
        x = paddle.transpose(x, [0, 1, 3, 5, 2, 4])
        x = paddle.reshape(x, [n, c * self.factor * self.factor, h // self.factor, w // self.factor])
        return x
```
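The reshape–transpose–reshape steps above can be cross-checked with a NumPy reference (a sketch assuming a 4-D NCHW input; `pixel_unshuffle_np` is a hypothetical helper name for illustration, not an existing Paddle API):

```python
import numpy as np

def pixel_unshuffle_np(x, downscale_factor):
    # x: 4-D NCHW array; H and W must be divisible by downscale_factor.
    n, c, h, w = x.shape
    f = downscale_factor
    assert h % f == 0 and w % f == 0
    # Split each spatial axis into (out_size, factor).
    x = x.reshape(n, c, h // f, f, w // f, f)
    # Move the two factor axes next to the channel axis.
    x = x.transpose(0, 1, 3, 5, 2, 4)
    # Fold the factor axes into the channel axis.
    return x.reshape(n, c * f * f, h // f, w // f)

x = np.arange(16, dtype=np.float32).reshape(1, 1, 4, 4)
y = pixel_unshuffle_np(x, 2)
print(y.shape)  # (1, 4, 2, 2)
```

Running the composed-API Paddle layer and this reference on the same input should give identical results.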
This part can be moved to the "Current status in PaddlePaddle" section.
Done.
- Consistency with PyTorch's results;
- Correctness of backpropagation;
- Error checking: an exception is raised correctly when `downscale_factor` is not a positive integer or does not evenly divide both the width and the height.
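The error-check item above can be prototyped as a small standalone validator (a sketch with hypothetical names; in the real operator these checks would live in the C++ shape-inference/forward code):

```python
def check_pixel_unshuffle_args(shape, downscale_factor):
    """Validate inputs for a pixel_unshuffle-style op (illustrative only)."""
    if not isinstance(downscale_factor, int) or downscale_factor <= 0:
        raise ValueError(
            "downscale_factor must be a positive integer, got %r" % (downscale_factor,))
    if len(shape) < 3:
        raise ValueError(
            "input must have at least 3 dimensions, got %d" % len(shape))
    h, w = shape[-2], shape[-1]
    if h % downscale_factor != 0 or w % downscale_factor != 0:
        raise ValueError(
            "height (%d) and width (%d) must both be divisible by downscale_factor (%d)"
            % (h, w, downscale_factor))

check_pixel_unshuffle_args((1, 3, 4, 4), 2)  # valid: no exception
```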
This part needs revision; please refer to the description in this PR: #40
Done.
## API Implementation Plan

Implement `paddle.nn.PixelUnshuffle` by referring to `paddle.nn.PixelShuffle`, and implement `paddle.nn.functional.pixel_unshuffle` along the way.
This part needs revision; please refer to the description in this PR: #40
Done.
# 4. Comparative Analysis

Whether implemented in C++ or by composing existing APIs, the logic is quite simple, so we plan to write a new C++ operator in pursuit of higher efficiency.
This part needs revision; please refer to the description in this PR: #40
Done.
PyTorch's `F.pixel_unshuffle` is implemented in C++; the main code is:

```c++
Tensor pixel_unshuffle(const Tensor& self, int64_t downscale_factor) {
  TORCH_CHECK(self.dim() >= 3,
              "pixel_unshuffle expects input to have at least 3 dimensions, but got input with ",
              self.dim(), " dimension(s)");
  TORCH_CHECK(
      downscale_factor > 0,
      "pixel_unshuffle expects a positive downscale_factor, but got ",
      downscale_factor);
  int64_t c = self.size(-3);
  int64_t h = self.size(-2);
  int64_t w = self.size(-1);
  constexpr auto NUM_NON_BATCH_DIMS = 3;
  const auto self_sizes_batch_end = self.sizes().end() - NUM_NON_BATCH_DIMS;

  TORCH_CHECK(h % downscale_factor == 0,
              "pixel_unshuffle expects height to be divisible by downscale_factor, but input.size(-2)=", h,
              " is not divisible by ", downscale_factor);
  TORCH_CHECK(w % downscale_factor == 0,
              "pixel_unshuffle expects width to be divisible by downscale_factor, but input.size(-1)=", w,
              " is not divisible by ", downscale_factor);
  int64_t downscale_factor_squared = downscale_factor * downscale_factor;
  int64_t oc = c * downscale_factor_squared;
  int64_t oh = h / downscale_factor;
  int64_t ow = w / downscale_factor;

  // Step 1: reshape, splitting H and W into (out_size, downscale_factor) pairs.
  std::vector<int64_t> added_dims_shape(
      self.sizes().begin(), self_sizes_batch_end);
  added_dims_shape.insert(
      added_dims_shape.end(), {c, oh, downscale_factor, ow, downscale_factor});
  const auto input_reshaped = self.reshape(added_dims_shape);

  // Step 2: permute, keeping batch dims and ordering the rest as
  // (c, factor, factor, oh, ow).
  std::vector<int64_t> permutation(self.sizes().begin(), self_sizes_batch_end);
  std::iota(permutation.begin(), permutation.end(), 0);
  permutation.insert(permutation.end(), {-5 /* c */, -3 /* 1st downscale_factor */, -1 /* 2nd downscale_factor */,
                                         -4 /* oh */, -2 /* ow */});
  const auto input_permuted = input_reshaped.permute(permutation);

  // Step 3: final reshape, folding the two factor dims into the channel dim.
  std::vector<int64_t> final_shape(self.sizes().begin(), self_sizes_batch_end);
  final_shape.insert(final_shape.end(), {oc, oh, ow});
  return input_permuted.reshape(final_shape);
}
```
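The batch-dimension handling in the C++ code above (everything before the last three axes is preserved unchanged) can be mirrored in a NumPy sketch (`pixel_unshuffle_nd` is a hypothetical helper for illustration, supporting any number of leading batch dims):

```python
import numpy as np

def pixel_unshuffle_nd(x, downscale_factor):
    # Treat all leading axes as batch dims; the last three are (C, H, W).
    f = downscale_factor
    *batch, c, h, w = x.shape
    # Step 1: split H and W into (out_size, factor), as in added_dims_shape.
    x = x.reshape(*batch, c, h // f, f, w // f, f)
    # Step 2: permute so both factor axes follow C, as in the permutation vector
    # {-5, -3, -1, -4, -2} appended after the batch-dim identity prefix.
    nb = len(batch)
    perm = list(range(nb)) + [nb, nb + 2, nb + 4, nb + 1, nb + 3]
    x = x.transpose(perm)
    # Step 3: fold the factor axes into the channel axis, as in final_shape.
    return x.reshape(*batch, c * f * f, h // f, w // f)
```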
## TensorFlow

TensorFlow does not currently provide a `PixelUnshuffle` API directly, but the operation can be composed from existing APIs:
```python
def shuffle_unit(self, x, downscale_factor):
    with tf.variable_scope('pixel_unshuffle_unit'):
        n, h, w, c = x.get_shape().as_list()
        x = tf.reshape(x, shape=[tf.shape(x)[0], h // downscale_factor, downscale_factor, w // downscale_factor, downscale_factor, c])
        x = tf.transpose(x, [0, 1, 3, 5, 2, 4])
        x = tf.reshape(x, shape=[tf.shape(x)[0], h // downscale_factor, w // downscale_factor, c * downscale_factor * downscale_factor])
        return x
```
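The TF snippet assumes NHWC layout, so its axis arithmetic differs from the NCHW Paddle version even though the transpose permutation happens to be the same list. A NumPy sketch of the NHWC variant (hypothetical helper, for illustration):

```python
import numpy as np

def pixel_unshuffle_nhwc(x, downscale_factor):
    # x: 4-D NHWC array; H and W must be divisible by downscale_factor.
    n, h, w, c = x.shape
    f = downscale_factor
    # Axes after reshape: (n, h/f, f, w/f, f, c).
    x = x.reshape(n, h // f, f, w // f, f, c)
    # Same permutation list as the TF code: result axes (n, h/f, w/f, c, f, f).
    x = x.transpose(0, 1, 3, 5, 2, 4)
    # Fold (c, f, f) into the trailing channel axis.
    return x.reshape(n, h // f, w // f, c * f * f)

x_nhwc = np.arange(32.0).reshape(1, 4, 4, 2)
y = pixel_unshuffle_nhwc(x_nhwc, 2)
print(y.shape)  # (1, 2, 2, 8)
```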
This part needs additional content; please refer to the description in this PR: #40
Done.
The PR format check passed. Your PR will be reviewed by Paddle experts and the open-source community; please keep an eye on PR updates.
Your PR has new feedback; please address it promptly.
Please update this in sync, referring to #40.
A further step-by-step description of the PyTorch approach needs to be added.
Done.
- Whether both CPU and GPU platforms are supported;
- Behavior with different tensor data types;
- Validity and boundary-value tests for every input parameter, confirming that each one takes effect correctly;
- Correctness of the forward computation (compared with the composed-API implementation and with PyTorch);
Compare against torch locally; in CI, compare against a NumPy implementation of the algorithm.
Done.
- When the `downscale_factor` passed in is invalid (not a positive integer, or not evenly dividing both the height and the width), an exception is raised with a friendly message.
Error checks: input dimensionality, downscale_factor > 0, and so on.
Done.
LGTM
This reverts commit 4a82d48.
The implementation can be placed under paddle/phi/kernels/cpu(gpu)/, while the operator registration goes under fluid/operators, as shown in the figure below:
Please delete or revise the implementation-location part of the document accordingly.
Since the CPU and GPU kernels are identical, following pixel_shuffle, the implementation is placed in
Please submit a new PR to revise the corresponding content.
We are sorry: after repeated discussion, your PR has not yet met the bar for merging. Please read the PaddlePaddle community contribution guide. You may submit a new PR; we will close this one for now. Thank you for your contribution.
Add the design document for paddle.nn.PixelUnshuffle.