
【Hackathon No.29】add RFC for PixelUnshuffle #44

Merged (4 commits) on Mar 22, 2022

Conversation

@BrilliantYuKaimin (Contributor):

Add a design document for `paddle.nn.PixelUnshuffle`.

@shiyutang (Contributor) left a comment:

The overall design is complete, but the implementation details and similar sections need to be described step by step. Please revise the design document before submitting code. For an example of such a revision, see: https://github.com/PaddlePaddle/community/blob/master/rfcs/APIs/20200301_api_design_for_quantile.md

Comment on lines 105 to 122
## PaddlePaddle

PaddlePaddle does not yet provide a `PixelUnshuffle` API directly, but the operation can be built by composing existing APIs:

```python
class PixelUnshuffle(paddle.nn.Layer):
    def __init__(self, downscale_factor):
        super(PixelUnshuffle, self).__init__()
        self.factor = downscale_factor

    def forward(self, x):
        n, c, h, w = x.shape
        # Split each spatial dim into (size // factor, factor) blocks.
        x = paddle.reshape(x, [n, c, h // self.factor, self.factor, w // self.factor, self.factor])
        # Move both factor axes next to the channel axis.
        x = paddle.transpose(x, [0, 1, 3, 5, 2, 4])
        # Fold the factor axes into the channel dimension.
        x = paddle.reshape(x, [n, c * self.factor * self.factor, h // self.factor, w // self.factor])
        return x
```
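
For a quick sanity check of the composed version above, a usage sketch (the shapes here are chosen for illustration, not taken from the RFC):

```python
import paddle

# Illustrative only: a [1, 4, 6, 6] input with downscale_factor=2 should
# become [1, 16, 3, 3] (channels grow by factor**2, spatial dims shrink).
x = paddle.randn([1, 4, 6, 6])
unshuffle = PixelUnshuffle(2)  # the composed-API class defined above
y = unshuffle(x)
print(y.shape)  # [1, 16, 3, 3]
```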
Contributor:

This part can be placed in the "Current status in PaddlePaddle" section.

Contributor Author:

> This part can be placed in the "Current status in PaddlePaddle" section.

Done.

Comment on lines 146 to 149
- Consistency with PyTorch's results;
- Correctness of backpropagation;
- Error checking: an exception is raised correctly when `downscale_factor` is not a positive integer or does not evenly divide both the width and the height.

Contributor:

This part needs revision; please refer to the description in PR #40.

Contributor Author:

> This part needs revision; please refer to the description in PR #40.

Done.

Comment on lines +139 to +142
## API Implementation Plan

Implement `paddle.nn.PixelUnshuffle` by referring to `paddle.nn.PixelShuffle`, and implement `paddle.nn.functional.pixel_unshuffle` along the way.

Contributor:

This part needs revision; please refer to the description in PR #40.

Contributor Author:

> This part needs revision; please refer to the description in PR #40.

Done.
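
For orientation, a rough sketch of what the proposed pair of interfaces might look like, modeled on `paddle.nn.PixelShuffle`; the parameter names (`downscale_factor`, `data_format`, `name`) are assumptions, not a signature confirmed in this RFC:

```python
import paddle

def pixel_unshuffle(x, downscale_factor, data_format="NCHW", name=None):
    """Functional form (hypothetical signature): rearrange spatial blocks
    of size downscale_factor x downscale_factor into extra channels."""
    ...

class PixelUnshuffle(paddle.nn.Layer):
    """Layer form (hypothetical signature) wrapping the functional API."""
    def __init__(self, downscale_factor, data_format="NCHW", name=None):
        super().__init__()
        self._factor = downscale_factor
        self._data_format = data_format
        self._name = name

    def forward(self, x):
        return pixel_unshuffle(x, self._factor, self._data_format, self._name)
```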

Comment on lines 124 to 126
# 4. Comparative Analysis

Whether implemented in C++ or by composing existing APIs, the logic is quite simple, so we opt to write a new operator in C++ in the hope of better efficiency.
Contributor:

This part needs revision; please refer to the description in PR #40.

Contributor Author:

> This part needs revision; please refer to the description in PR #40.

Done.
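
Should the efficiency claim need backing later, a (hypothetical) micro-benchmark along these lines could compare the two routes; the `bench` helper and the shapes are illustrative, not part of the RFC:

```python
import time
import paddle

def bench(fn, x, warmup=5, iters=100):
    # Wall-clock micro-benchmark sketch; a rigorous comparison would also
    # synchronize the device and control for allocator warm-up effects.
    for _ in range(warmup):
        fn(x)
    start = time.perf_counter()
    for _ in range(iters):
        fn(x)
    return (time.perf_counter() - start) / iters

x = paddle.randn([8, 64, 128, 128])
composed = PixelUnshuffle(2)  # the composed-API class shown earlier
print("composed API:", bench(composed, x))
# Once the C++ operator lands, the same harness can time it, e.g.:
# print("C++ op:", bench(lambda t: paddle.nn.functional.pixel_unshuffle(t, 2), x))
```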

Comment on lines 42 to 103
The `F.pixel_unshuffle` used there is implemented in C++; its main code is:

```c++
Tensor pixel_unshuffle(const Tensor& self, int64_t downscale_factor) {
TORCH_CHECK(self.dim() >= 3,
"pixel_unshuffle expects input to have at least 3 dimensions, but got input with ",
self.dim(), " dimension(s)");
TORCH_CHECK(
downscale_factor > 0,
"pixel_unshuffle expects a positive downscale_factor, but got ",
downscale_factor);
int64_t c = self.size(-3);
int64_t h = self.size(-2);
int64_t w = self.size(-1);
constexpr auto NUM_NON_BATCH_DIMS = 3;
const auto self_sizes_batch_end = self.sizes().end() - NUM_NON_BATCH_DIMS;

TORCH_CHECK(h % downscale_factor == 0,
"pixel_unshuffle expects height to be divisible by downscale_factor, but input.size(-2)=", h,
" is not divisible by ", downscale_factor)
TORCH_CHECK(w % downscale_factor == 0,
"pixel_unshuffle expects width to be divisible by downscale_factor, but input.size(-1)=", w,
" is not divisible by ", downscale_factor)
int64_t downscale_factor_squared = downscale_factor * downscale_factor;
int64_t oc = c * downscale_factor_squared;
int64_t oh = h / downscale_factor;
int64_t ow = w / downscale_factor;

std::vector<int64_t> added_dims_shape(
self.sizes().begin(), self_sizes_batch_end);
added_dims_shape.insert(
added_dims_shape.end(), {c, oh, downscale_factor, ow, downscale_factor});
const auto input_reshaped = self.reshape(added_dims_shape);

std::vector<int64_t> permutation(self.sizes().begin(), self_sizes_batch_end);
std::iota(permutation.begin(), permutation.end(), 0);
permutation.insert(permutation.end(), {-5 /* c */, -3 /* 1st downscale_factor */, -1 /*2nd downscale_factor */,
-4 /* oh */, -2 /* ow */});
const auto input_permuted = input_reshaped.permute(permutation);

std::vector<int64_t> final_shape(self.sizes().begin(), self_sizes_batch_end);
final_shape.insert(final_shape.end(), {oc, oh, ow});
return input_permuted.reshape(final_shape);
}

}} // namespace at::native

```
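
Read step by step, the PyTorch kernel does three things: split H and W into (output size, downscale_factor), permute both factor axes next to the channel axis, then flatten them into the channels. A NumPy sketch of the same three steps (illustrative, not PyTorch's actual code path):

```python
import numpy as np

def pixel_unshuffle_ref(x, r):
    # x: (..., C, H, W) with H and W divisible by r.
    *batch, c, h, w = x.shape
    assert h % r == 0 and w % r == 0
    # Step 1: split H -> (H/r, r) and W -> (W/r, r).
    x = x.reshape(*batch, c, h // r, r, w // r, r)
    # Step 2: move both r axes next to the channel axis,
    # giving (..., C, r, r, H/r, W/r).
    x = np.moveaxis(x, (-3, -1), (-4, -3))
    # Step 3: merge (C, r, r) into the output channel axis.
    return x.reshape(*batch, c * r * r, h // r, w // r)
```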

## TensorFlow

TensorFlow does not provide a `PixelUnshuffle` API directly either, but the operation can be built by composing existing APIs:

```python
def shuffle_unit(self, x, downscale_factor):
    with tf.variable_scope('pixel_unshuffle_unit'):
        n, h, w, c = x.get_shape().as_list()
        # Split each spatial dim into (size // factor, factor) blocks (NHWC layout).
        x = tf.reshape(x, shape=tf.convert_to_tensor(
            [tf.shape(x)[0], h // downscale_factor, downscale_factor,
             w // downscale_factor, downscale_factor, c]))
        # Move both factor axes next to the channel axis.
        x = tf.transpose(x, tf.convert_to_tensor([0, 1, 3, 5, 2, 4]))
        # Fold the factor axes into the channel dimension.
        x = tf.reshape(x, shape=tf.convert_to_tensor(
            [tf.shape(x)[0], h // downscale_factor, w // downscale_factor,
             c * downscale_factor * downscale_factor]))
        return x
```
Contributor:

This part needs more content; please refer to the description in PR #40.

Contributor Author:

> This part needs more content; please refer to the description in PR #40.

Done.

@paddle-bot-old:

The format inspection passed. Your PR will be reviewed by Paddle experts and developers from the open-source community; please stay tuned for updates.

@paddle-bot-old:

There is new feedback on your PR; please address it promptly.

@shiyutang (Contributor) left a comment:

Please revise in sync with the changes described in #40.

Comment on lines +100 to +102

}} // namespace at::native

Contributor:

A step-by-step description of the PyTorch approach still needs to be added.

Contributor Author:

Done.

- Whether both CPU and GPU platforms are supported;
- Behavior with different tensor data types;
- Validity and boundary-value tests for all input parameters, confirming that each parameter takes effect correctly;
- Correctness of the forward computation (compared against a composed-API implementation and against PyTorch);
Contributor:

Compare against torch locally; in CI, compare against a NumPy implementation of the algorithm.

Contributor Author:

Done.
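
A sketch of that CI check, using `pixel_unshuffle_ref` from the NumPy sketch above as ground truth; the Paddle entry point name is assumed here, not confirmed by this thread:

```python
import numpy as np
import paddle

def test_pixel_unshuffle_matches_numpy():
    # Illustrative test sketch comparing the Paddle op (assumed to be
    # paddle.nn.functional.pixel_unshuffle) against the NumPy reference.
    x_np = np.random.rand(2, 3, 8, 8).astype("float32")
    expected = pixel_unshuffle_ref(x_np, 2)
    actual = paddle.nn.functional.pixel_unshuffle(paddle.to_tensor(x_np), 2).numpy()
    np.testing.assert_allclose(actual, expected, rtol=1e-5, atol=1e-8)
```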

Comment on lines 215 to 216
- An exception with a friendly message is raised when the given `downscale_factor` is invalid (not a positive integer, or not evenly dividing both the height and the width).

Contributor:

Error checks: input dimensionality, downscale_factor > 0, and so on.

Contributor Author:

Done.
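
A minimal sketch of those checks in Python (a hypothetical helper that mirrors the TORCH_CHECK calls quoted earlier, not the actual Paddle implementation):

```python
def check_pixel_unshuffle_args(x_shape, downscale_factor):
    # Hypothetical validation helper; mirrors the PyTorch checks above.
    if len(x_shape) < 3:
        raise ValueError(
            f"expected input with at least 3 dimensions, got {len(x_shape)}")
    if not isinstance(downscale_factor, int) or downscale_factor <= 0:
        raise ValueError(
            f"downscale_factor must be a positive integer, got {downscale_factor!r}")
    h, w = x_shape[-2], x_shape[-1]
    if h % downscale_factor or w % downscale_factor:
        raise ValueError(
            f"height ({h}) and width ({w}) must both be divisible by "
            f"downscale_factor ({downscale_factor})")
```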

Revised the code according to the review comments.
@shiyutang (Contributor) left a comment:

LGTM

@shiyutang shiyutang merged commit 4a82d48 into PaddlePaddle:master Mar 22, 2022
shiyutang added a commit that referenced this pull request Mar 22, 2022
@shiyutang (Contributor):

The kernel implementation can live under paddle/phi/kernels/cpu (or gpu)/, with the operator registration under fluid/operators, as shown in the screenshots (taking TraceOp as an example; images not reproduced here). The C++ operator development workflow is described in the development guide: https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/dev_guides/api_contributing_guides/new_cpp_op_cn.html

@shiyutang (Contributor):

Please delete or correspondingly revise the implementation-location description in the document.

@BrilliantYuKaimin (Contributor Author):

> Please delete or correspondingly revise the implementation-location description in the document.

Since the CPU and GPU kernels are identical, the implementation is placed in paddle/phi/kernels/impl/pixel_unshuffle_kernel_impl.h, following the example of pixel_shuffle.

@shiyutang (Contributor):

If the implementation is device-independent, it should be placed in the kernel root directory, as shown in the screenshots (images not reproduced here).

@shiyutang (Contributor):

Please submit a new PR to revise the corresponding content.

@paddle-bot-old:

Sorry to inform you that, after our repeated discussion, your PR does not yet meet the merging standard. Please read the PaddlePaddle community contribution guide; you may submit a new PR. We will close this PR for now. Thank you for your contribution.
