
test_nvfuser.py tests failing with RuntimeError due to deprecated static_argnums argument to aot_function #514

Closed
Thomas-MMJ opened this issue Nov 9, 2022 · 4 comments
Labels
bug Something isn't working

Comments

@Thomas-MMJ
Contributor

🐛 Bug

The parameter static_argnums is passed to memory_efficient_fusion in the following three files:


xformers/components/nvfuser/bias_act_dropout.py:        aot_fn = memory_efficient_fusion(_fn, static_argnums=(2, 3))
xformers/components/nvfuser/bias_dropout_res.py:        aot_fn = memory_efficient_fusion(fn=_fn, static_argnums=(2))
xformers/components/nvfuser/bias_dropout_res_layernorm.py:        aot_fn = memory_efficient_fusion(fn=_fn, static_argnums=(2, 3, 4))


The static_argnums argument has been deprecated for aot_function, and recent functorch builds raise an error instead.

See the relevant check in the functorch source:

    if static_argnums is not None:
        raise RuntimeError("static_argnums has been deprecated - manually wrap your function or use torchdynamo.")

https://pytorch.org/functorch/nightly/_modules/functorch/_src/aot_autograd.html
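
The error message itself points at the workaround: bind the non-tensor arguments before fusion instead of passing static_argnums. Below is a minimal sketch of that approach; the function body, argument names, and example values are illustrative assumptions, not the actual xformers code.

    from functools import partial

    import torch
    import torch.nn.functional as F
    from functorch.compile import memory_efficient_fusion

    # Illustrative stand-in for the fused bias + activation + dropout op.
    def _fn(x: torch.Tensor, bias: torch.Tensor, prob: float, activation) -> torch.Tensor:
        return F.dropout(activation(x + bias), p=prob)

    prob, activation = 0.1, F.gelu

    # Old call, which now raises RuntimeError on recent functorch nightlies:
    # aot_fn = memory_efficient_fusion(_fn, static_argnums=(2, 3))

    # Workaround: bind the non-tensor arguments up front so the traced
    # function only receives tensor inputs.
    aot_fn = memory_efficient_fusion(partial(_fn, prob=prob, activation=activation))

    x = torch.randn(8, 64, device="cuda")   # nvFuser requires a CUDA device
    bias = torch.randn(64, device="cuda")
    y = aot_fn(x, bias)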

Command

Only the first traceback is shown, since the same error occurs for every failing test:

pytest ./tests/test_nvfuser.py

tests/test_nvfuser.py ssFssFssFssFssFssFssFssFssFssFssFssFssFssFssFssFss [ 89%]
[snip: many more lines of repeated F (fail) / s (skip) progress output]
FssFssFssFssFssFssFssFFFFFFFFFFFFFFFF                                    [ 97%]



tests/test_nvfuser.py:121: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/home/username/anaconda3/envs/diffusers/lib/python3.9/site-packages/torch/nn/modules/module.py:1423: in _call_impl
    return forward_call(*input, **kwargs)
xformers/components/nvfuser/bias_dropout_res_layernorm.py:79: in forward
    aot_fn = memory_efficient_fusion(fn=_fn, static_argnums=(2, 3, 4))
/home/letterrip/anaconda3/envs/diffusers/lib/python3.9/site-packages/functorch/_src/compilers.py:204: in memory_efficient_fusion
    return aot_function(fn, **config)

[snip]
        if static_argnums is not None:
>           raise RuntimeError("static_argnums has been deprecated - manually wrap your function or use torchdynamo.")
E           RuntimeError: static_argnums has been deprecated - manually wrap your function or use torchdynamo.

Environment

PyTorch version: 1.14.0.dev20221107
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.3
Libc version: glibc-2.31

Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:56:21)  [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.15.68.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
Nvidia driver version: 522.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] clip-anytorch==2.5.0
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.4
[pip3] pytorch-lightning==1.8.0.post1
[pip3] torch==1.14.0.dev20221107
[pip3] torchaudio==0.14.0.dev20221107
[pip3] torchdiffeq==0.2.2
[pip3] torchmetrics==0.10.2
[pip3] torchvision==0.15.0.dev20221107
[conda] blas                      1.0                         mkl
[conda] clip-anytorch             2.5.0                    pypi_0    pypi
[conda] cudatoolkit               11.7.0              hd8887f6_10    nvidia
[conda] libblas                   3.9.0            16_linux64_mkl    conda-forge
[conda] libcblas                  3.9.0            16_linux64_mkl    conda-forge
[conda] liblapack                 3.9.0            16_linux64_mkl    conda-forge
[conda] liblapacke                3.9.0            16_linux64_mkl    conda-forge
[conda] mkl                       2022.1.0           hc2b9512_224
[conda] numpy                     1.23.4           py39h3d75532_1    conda-forge
[conda] pytorch                   1.14.0.dev20221107 py3.9_cuda11.7_cudnn8.5.0_0    pytorch-nightly
[conda] pytorch-cuda              11.7                 h67b0de4_0    pytorch-nightly
[conda] pytorch-lightning         1.8.0.post1              pypi_0    pypi
[conda] pytorch-mutex             1.0                        cuda    pytorch
[conda] torchaudio                0.14.0.dev20221107      py39_cu117    pytorch-nightly
[conda] torchdiffeq               0.2.2              pyhd8ed1ab_0    conda-forge
[conda] torchmetrics              0.10.2                   pypi_0    pypi
[conda] torchvision               0.15.0.dev20221107      py39_cu117    pytorch-nightly
@0xdevalias

0xdevalias commented Nov 10, 2022

I also get this when running python3 xformers/benchmarks/benchmark_nvfuser.py:

Traceback (most recent call last):
  File "/workspace/thelastben-diffusers/examples/dreambooth/xformers/xformers/benchmarks/benchmark_nvfuser.py", line 251, in <module>
    bench_nvfused(pattern, bias, bw, activation, style)  # type: ignore
  File "/workspace/thelastben-diffusers/examples/dreambooth/xformers/xformers/benchmarks/benchmark_nvfuser.py", line 172, in bench_nvfused
    time = triton.testing.do_bench(
  File "/opt/conda/envs/dreambooth/lib/python3.10/site-packages/triton/testing.py", line 140, in do_bench
    fn()
  File "/workspace/thelastben-diffusers/examples/dreambooth/xformers/xformers/benchmarks/benchmark_nvfuser.py", line 173, in <lambda>
    lambda: testcase.function(x=a), grad_to_none=[a, b]
  File "/workspace/thelastben-diffusers/examples/dreambooth/xformers/xformers/benchmarks/benchmark_nvfuser.py", line 123, in step
    y = fn(x=x, residual=x) if residual else fn(x)
  File "/opt/conda/envs/dreambooth/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/workspace/thelastben-diffusers/examples/dreambooth/xformers/xformers/components/nvfuser/bias_act_dropout.py", line 69, in forward
    aot_fn = memory_efficient_fusion(_fn, static_argnums=(2, 3))
  File "/opt/conda/envs/dreambooth/lib/python3.10/site-packages/functorch/_src/compilers.py", line 203, in memory_efficient_fusion
    return aot_function(fn, **config)
  File "/opt/conda/envs/dreambooth/lib/python3.10/site-packages/functorch/_src/aot_autograd.py", line 604, in aot_function
    raise RuntimeError("static_argnums has been deprecated - manually wrap your function or use torchdynamo.")
RuntimeError: static_argnums has been deprecated - manually wrap your function or use torchdynamo.

Seems as though this is the relevant usage location:

File "/workspace/thelastben-diffusers/examples/dreambooth/xformers/xformers/components/nvfuser/bias_act_dropout.py", line 69, in forward
    aot_fn = memory_efficient_fusion(_fn, static_argnums=(2, 3))

@fmassa
Contributor

fmassa commented Nov 10, 2022

Looks like this is related to #468

Let's just remove memory_efficient_fusion from xformers. cc @dianaml0 @Chillee

fmassa added the bug label Nov 10, 2022
@danthe3rd
Contributor

Should be addressed in #521

danthe3rd pushed several commits that referenced this issue (Nov 12–15, 2022), all carrying the same message:

Closes #515
Closes #514

Note:
The `static_argnums` argument to `memory_efficient_fusion` has been removed, so some code had to be updated.

[ghstack-poisoned]
@fmassa
Contributor

fmassa commented Sep 5, 2023

We removed NVfuser from xformers in #843, so closing this

fmassa closed this as completed Sep 5, 2023