
[Fix] update hubconf.py #81

Merged
merged 2 commits into kozistr:main on Nov 1, 2022

Conversation

@Bing-su (Contributor) commented Nov 1, 2022

Congratulations on the 2.0.0 release.

Problem (Why?)

  1. Load schedulers

  2. Hide func and optimizer from torch.hub.list

>>> import torch

>>> torch.hub.list("kozistr/pytorch_optimizer")
...
 'adan',
 'adapnm',
 'diffgrad',
 'diffrgrad',
 'func', # <<<<<<
 'lamb',
 'lars',
 'madgrad',
 'nero',
 'optimizer', # <<<<<<
 'pnm',
 'radam',
 'ralamb',
 'ranger',
 'ranger21',
 'sgdp',
 'shampoo']
  3. Change the torch.hub.help target from the class itself to __init__
  • before
>>> print(torch.hub.help("kozistr/pytorch_optimizer", "ranger"))

    Reference : https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer
    Example :
        from pytorch_optimizer import Ranger
        ...
        model = YourModel()
        optimizer = Ranger(model.parameters())
        ...
        for input, output in data:
          optimizer.zero_grad()
          loss = loss_function(output, model(input))
          loss.backward()
          optimizer.step()
  • after
>>> print(torch.hub.help("Bing-su/pytorch_optimizer:hubconf", "ranger"))

Ranger optimizer
        :param params: PARAMETERS. iterable of parameters to optimize or dicts defining parameter groups
        :param lr: float. learning rate
        :param betas: BETAS. coefficients used for computing running averages of gradient and the squared hessian trace
        :param weight_decay: float. weight decay (L2 penalty)
        :param n_sma_threshold: int. (recommended is 5)
        :param use_gc: bool. use Gradient Centralization (both convolution & fc layers)
        :param gc_conv_only: bool. use Gradient Centralization (only convolution layer)
        :param adamd_debias_term: bool. Only correct the denominator to avoid inflating step sizes early in training
        :param eps: float. term added to the denominator to improve numerical stability

Solution (What/How?)

Despite the lengthy explanation, the code changes are not that large. Please check the changes below.
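
For context: torch.hub.list reports every module-level callable in hubconf.py whose name does not start with an underscore, and torch.hub.help returns that entrypoint's __doc__. A minimal sketch of the approach follows; the import paths (load_optimizer, get_supported_optimizers) and the lower-cased entrypoint registration are assumptions made to mirror the snippet in the review below, not the exact contents of the diff.

# hubconf.py (illustrative sketch, not the exact diff of this PR)
from functools import partial as _partial, update_wrapper as _update_wrapper

# Underscore-prefixed names are skipped by torch.hub.list(), which is what
# hides helpers such as 'func' and 'optimizer' from the listing above.
from pytorch_optimizer import get_supported_optimizers as _get_supported_optimizers
from pytorch_optimizer import load_optimizer as _load_optimizer

dependencies = ['torch']

for _optimizer in _get_supported_optimizers():
    name: str = _optimizer.__name__
    # One entrypoint per optimizer: bind the optimizer name to the loader ...
    _func = _partial(_load_optimizer, optimizer=name)
    # ... and copy __doc__/__name__ from __init__ so that torch.hub.help()
    # prints the parameter docstring shown in the "after" example above.
    _update_wrapper(_func, _optimizer.__init__)
    globals()[name.lower()] = _func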

Notes

Regarding item 3, people may feel differently about changing the help target to .__init__...

@codecov-commenter

Codecov Report

Merging #81 (becf7cd) into main (2258885) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##             main      #81   +/-   ##
=======================================
  Coverage   96.22%   96.22%           
=======================================
  Files          30       30           
  Lines        2171     2171           
=======================================
  Hits         2089     2089           
  Misses         82       82           


@kozistr (Owner) left a comment

LGTM! Thanks for your contribution :)

for _optimizer in _get_supported_optimizers():
    name: str = _optimizer.__name__
    _func = _partial(_load_optimizer, optimizer=name)
    _update_wrapper(_func, _optimizer.__init__)
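
(Side note: functools.update_wrapper copies attributes such as __doc__ and __name__ from the wrapped callable onto the partial object, which is why torch.hub.help now returns the __init__ docstring shown in the "after" example, while the leading underscores keep these helpers out of torch.hub.list.)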
@kozistr (Owner) commented on this code

Like you, I also think that showing the parameter descriptions (__init__) is better than showing the usage of the class itself! (I think the usage itself is intuitive and can be, or should be, found in the README or docs.)

+) I haven't given much thought yet to where and how to use the class docstrings, so I'll think about it more and change it someday!

@kozistr kozistr merged commit 3c491f3 into kozistr:main Nov 1, 2022
@kozistr kozistr added the feature New features label Nov 1, 2022