
Commit

[cherry-pick] Fixed a bug of log_softmax: op input was modified to 'nan' (#32937) (#33436)

While running op benchmarks, we found that when the input size is below a certain threshold, the input of the Python-side log_softmax API is changed to nan after the computation. The output itself is correct.

Cherry-picked from #32937.
AshburnLee authored Jun 11, 2021
1 parent 8461ab1 commit 61cae0d
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions paddle/fluid/operators/log_softmax_op.cu
@@ -104,7 +104,7 @@ __global__ void ComputeLogSoftmaxForwardInWarp(T *dst, const T *src,
 #pragma unroll
   for (int it = 0; it < warp_iter; ++it) {
     int element_index = thread_in_warp_idx + it * kernel_warp_size;
-    if (element_index < element_count) {
+    if (element_index < effective_element_count) {
       dst[batch_id * element_count + element_index] =
           static_cast<T>(elements[it] - max_value - sum);
     } else {
@@ -226,7 +226,7 @@ __global__ void ComputeLogSoftmaxBackwardInWarp(const T *output,
 #pragma unroll
   for (int iter = 0; iter < warp_iter; ++iter) {
     int element_index = thread_in_warp_idx + iter * kernel_warp_size;
-    if (element_index < element_count) {
+    if (element_index < effective_element_count) {
       grad_input[batch_id * element_count + element_index] = static_cast<T>(
           (grad_output_register[iter] - std::exp(output_register[iter]) * sum));
     }

