
Commit

fix english doc
zhwesky2010 committed Aug 23, 2020
1 parent cef2787 commit ecfea6e
Showing 1 changed file with 10 additions and 10 deletions.
20 changes: 10 additions & 10 deletions python/paddle/optimizer/lr_scheduler.py
@@ -55,8 +55,8 @@ def __call__(self):

def step(self, epoch=None):
"""
- step should be called after 'minimize' . It will Update the learning rate in optimizer according to 'epoch'.
- The new learning rate will take effect on next optimize operation.
+ 'step' should be called after 'minimize'. It will update the learning rate in the optimizer according to 'epoch'.
+ The new learning rate will take effect on the next epoch.
Args:
epoch (int, None): specify current epoch. Default: None. Auto-increment from last_epoch=-1.
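For illustration, the contract described in this docstring (call 'step' after 'minimize'; the new rate is used from the following epoch, auto-incrementing from last_epoch=-1) can be sketched with a toy scheduler. This is a hypothetical stand-in, not paddle's _LRScheduler, and its decay rule is made up:

class ToyScheduler:
    def __init__(self, learning_rate):
        self.last_epoch = -1          # matches the documented default
        self.lr = learning_rate

    def step(self, epoch=None):
        # Auto-increment from last_epoch=-1 when epoch is not given.
        self.last_epoch = self.last_epoch + 1 if epoch is None else epoch
        self.lr = 0.1 / (self.last_epoch + 2)  # hypothetical decay rule

sched = ToyScheduler(0.1)
for epoch in range(3):
    print("epoch %d: train with lr=%.4f" % (epoch, sched.lr))
    sched.step()  # called after 'minimize'; the new rate applies next epoch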
@@ -247,7 +247,6 @@ class PiecewiseLR(_LRScheduler):
learning_rate = 0.1
Args:
- learning_rate (float): The initial learning rate. It is a python float number.
boundaries(list): A list of steps numbers. The type of element in the list is python int.
values(list): A list of learning rate values that will be picked during different epoch boundaries.
The type of element in the list is python float.
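The boundaries/values lookup documented above amounts to an interval search. Here is a minimal sketch of those semantics in plain Python (not paddle's PiecewiseLR implementation; it assumes the conventional 'epoch < boundary' rule):

import bisect

def piecewise_lr(epoch, boundaries, values):
    # values[i] applies while epoch < boundaries[i]; epochs past the
    # last boundary get the final value.
    return values[bisect.bisect_right(boundaries, epoch)]

boundaries, values = [100, 200], [1.0, 0.5, 0.1]
print([piecewise_lr(e, boundaries, values) for e in (50, 150, 250)])
# -> [1.0, 0.5, 0.1]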
@@ -493,25 +492,26 @@ class PolynomialLR(_LRScheduler):
.. math::
- decay\_steps & = decay\_steps * math.ceil(\\frac{global\_step}{decay\_steps})
+ decay\_steps & = decay\_steps * math.ceil(\\frac{epoch}{decay\_steps})
- new\_learning\_rate & = (learning\_rate-end\_lr)*(1-\\frac{global\_step}{decay\_steps})^{power}+end\_learning\_rate
+ new\_learning\_rate & = (learning\_rate-end\_lr)*(1-\\frac{epoch}{decay\_steps})^{power}+end\_lr
If cycle is set to False, then:
.. math::
- global\_step & = min(global\_step, decay\_steps)
+ epoch & = min(epoch, decay\_steps)
- new\_learning\_rate & = (learning\_rate-end\_learning\_rate)*(1-\\frac{global\_step}{decay\_steps})^{power}+end\_learning\_rate
+ new\_learning\_rate & = (learning\_rate-end\_lr)*(1-\\frac{epoch}{decay\_steps})^{power}+end\_lr
Args:
learning_rate (float): The initial learning rate. It is a python float number.
decay_steps(int): The decay step size. It determines the decay cycle.
end_lr(float, optional): The minimum final learning rate. Default: 0.0001.
power(float, optional): Power of polynomial. Default: 1.0.
- cycle(bool, optional): If set true, decay the learning rate every decay_steps. Default: False.
+ cycle(bool, optional): Whether the learning rate rises again. If True, the learning rate will rise when it decreases
+     to ``end_lr``. If False, the learning rate is monotonically decreasing. Default: False.
last_epoch (int, optional): The index of last epoch. Can be set to restart training. Default: -1, means initial learning rate.
verbose (bool): If ``True``, prints a message to stdout for each update. Default: ``False`` .
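Worked out in plain Python, the two branches of the formula above look like this. This is a sketch of the documented math only, not paddle's implementation; the max(1, ...) guard is an assumption added to avoid dividing by zero at epoch 0:

import math

def polynomial_lr(learning_rate, decay_steps, epoch,
                  end_lr=0.0001, power=1.0, cycle=False):
    if cycle:
        # Enlarge decay_steps each cycle so the rate rises again
        # after decaying to end_lr.
        steps = decay_steps * max(1, math.ceil(epoch / decay_steps))
    else:
        # Monotonic decay: clamp epoch so the rate settles at end_lr.
        epoch = min(epoch, decay_steps)
        steps = decay_steps
    return (learning_rate - end_lr) * (1 - epoch / steps) ** power + end_lr

print(polynomial_lr(0.1, 50, epoch=25))  # halfway, power=1 -> ~0.05005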
@@ -1249,8 +1249,8 @@ def _state_keys(self):

def step(self, metrics, epoch=None):
"""
- step should be called after 'minimize' . It will Update the learning rate in optimizer according to ``metrics`` .
- The new learning rate will take effect on next optimize operation.
+ 'step' should be called after 'minimize'. It will update the learning rate in the optimizer according to ``metrics``.
+ The new learning rate will take effect on the next epoch.
Args:
metrics (Tensor|numpy.ndarray|float): Which will be monitored to determine whether the learning rate will reduce.
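To make the metrics-driven update concrete, here is a self-contained sketch of plateau-style reduction. The patience/factor logic below is illustrative only, not paddle's ReduceLROnPlateau:

def reduce_on_plateau(lr, history, patience=2, factor=0.5):
    # Halve the rate once the best recent metric stops improving on
    # the best seen before the patience window.
    if len(history) > patience and min(history[-patience:]) >= min(history[:-patience]):
        return lr * factor
    return lr

lr, losses = 0.1, []
for loss in [1.0, 0.8, 0.79, 0.79, 0.79]:
    losses.append(loss)
    lr = reduce_on_plateau(lr, losses)
print(lr)  # -> 0.05 once the loss plateaus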

1 comment on commit ecfea6e


@paddle-bot-old (bot) commented on ecfea6e, Aug 23, 2020


🕵️ CI failures summary

🔍 Commit ID: ecfea6e contains failed CI.
