This repository has been archived by the owner on Dec 18, 2019. It is now read-only.

Cannot use complex scalar for binary op #3

Open
Roger-luo opened this issue Sep 13, 2018 · 2 comments
Labels: help wanted

Comments

@Roger-luo
Owner

from torch_complex import torch

a = torch.ones(2, 2, dtype=torch.complex128)
2j * a
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
  * frame #0: 0x00007fff52cba1f6 libc++abi.dylib`__cxa_throw
    frame #1: 0x000000010f0bb6a7 _C.cpython-36m-darwin.so`torch::FunctionSignature::parse(this=0x000000010183eb60, args=0x0000000101982048, kwargs=0x0000000000000000, dst=0x00007ffeefbfee60, raise_exception=true) at python_arg_parser.cpp:486
    frame #2: 0x000000010f0bdb25 _C.cpython-36m-darwin.so`torch::PythonArgParser::raw_parse(this=0x000000010f306c10, args=0x0000000101982048, kwargs=0x0000000000000000, parsed_args=0x00007ffeefbfee60) at python_arg_parser.cpp:532
    frame #3: 0x000000010e6a0877 _C.cpython-36m-darwin.so`torch::PythonArgs torch::PythonArgParser::parse<2>(this=0x000000010f306c10, args=0x0000000101982048, kwargs=0x0000000000000000, dst=0x00007ffeefbfee60) at python_arg_parser.h:193
    frame #4: 0x000000010eaaee09 _C.cpython-36m-darwin.so`torch::autograd::THPVariable_mul(self_=0x00000001019f4948, args=0x0000000101982048, kwargs=0x0000000000000000) at python_variable_methods.cpp:2936
    frame #5: 0x00000001000e4e61 .Python`PyCFunction_Call + 76
    frame #6: 0x00000001000ac599 .Python`PyObject_Call + 101
    frame #7: 0x00000001000fa842 .Python`call_maybe + 181
    frame #8: 0x00000001000aa7b0 .Python`binary_op1 + 187
    frame #9: 0x00000001000aa7fa .Python`PyNumber_Multiply + 27
    frame #10: 0x000000010014436b .Python`_PyEval_EvalFrameDefault + 5045
    frame #11: 0x000000010014b876 .Python`_PyEval_EvalCodeWithName + 1747
    frame #12: 0x0000000100142f3c .Python`PyEval_EvalCode + 42
    frame #13: 0x000000010016bacf .Python`run_mod + 54
    frame #14: 0x000000010016aade .Python`PyRun_FileExFlags + 164
    frame #15: 0x000000010016a1c9 .Python`PyRun_SimpleFileExFlags + 283
    frame #16: 0x000000010017efaa .Python`Py_Main + 3466
    frame #17: 0x0000000100001e1d python`___lldb_unnamed_symbol1$$python + 227
    frame #18: 0x00007fff54cf8015 libdyld.dylib`start + 1
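
The backtrace ends inside torch::FunctionSignature::parse, i.e. mul's argument parser rejects the Python complex scalar before any complex arithmetic runs. As a stopgap, here is a minimal sketch (not part of torch_complex; the helper name is made up) that emulates the scalar product with two ordinary float tensors holding the real and imaginary parts, so only real-valued ops are ever dispatched:

```python
import torch

# Illustrative stopgap (not part of torch_complex): keep the real and imaginary
# parts in two ordinary float tensors so the complex code path is never hit.
def complex_scalar_mul(scalar, real, imag):
    # (a + ib) * (x + iy) = (a*x - b*y) + i*(a*y + b*x)
    a, b = scalar.real, scalar.imag
    return a * real - b * imag, a * imag + b * real

real = torch.ones(2, 2)
imag = torch.zeros(2, 2)
out_real, out_imag = complex_scalar_mul(2j, real, imag)
print(out_real)  # all zeros
print(out_imag)  # all twos
```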
@Roger-luo added the help wanted label on Nov 22, 2018
@alidiak

alidiak commented Jan 28, 2019

I am having a similar issue while running python setup.py test; I get the two errors below. The first error seems to be the same as the one Roger-luo reported above. I know this module is still in progress, so I wonder whether the tests are expected to pass yet. Thank you for making this module and for any suggestions.

===================================================================
ERROR: test (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: test
Traceback (most recent call last):
  File "/usr/lib/python3.5/unittest/loader.py", line 428, in _find_test_path
    module = self._get_module_from_name(name)
  File "/usr/lib/python3.5/unittest/loader.py", line 369, in _get_module_from_name
    __import__(name)
  File "/mnt/c/Users/HP/Documents/Quantum_Machine_Learning_Research/QuantaLearn/pytorch-complex-master/test.py", line 7, in <module>
    2j * a
TypeError: mul(): argument 'other' (position 1) must be Tensor, not complex
===================================================================
ERROR: test_scalar_binary_op (tests.test_tensor.TestComplexTensor)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/mnt/c/Users/HP/Documents/Quantum_Machine_Learning_Research/QuantaLearn/pytorch-complex-master/tests/test_tensor.py", line 22, in test_scalar_binary_op
    2 * a
RuntimeError: _th_tensor is not implemented for type CPUComplexType<double>
----------------------------------------------------------------------

@Roger-luo
Owner Author

Yes, this is because the type promotion PR has not been resolved yet, and broadcasting is not working either...

pytorch/pytorch#11641

This package is a WIP and requires a lot of effort; it would be great if people could collaborate on it. As the README already says, nothing is guaranteed to work, and you shouldn't use it for any serious task unless you know what is happening.
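
For reference, here is a minimal sketch of what scalar type promotion would effectively do for this op: wrap the Python complex scalar as a 0-dim tensor of the matching dtype, so both operands of mul are tensors. This assumes a PyTorch build where complex element-wise mul and broadcasting already work (later releases with native complex support do); in the build discussed in this issue it would still hit the broadcast problem mentioned above.

```python
import torch

# Sketch of scalar type promotion by hand: turn the Python complex scalar into
# a 0-dim tensor of the same dtype, then let the tensor-tensor mul broadcast.
# Assumes a build where complex element-wise mul and broadcasting work.
a = torch.ones(2, 2, dtype=torch.complex128)
scalar = torch.tensor(2j, dtype=torch.complex128)  # 0-dim complex tensor
print(scalar * a)
```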

BTW, it would be great if you could use Markdown formatting next time.
