
Error while trying to run LeakyRelu layer with 1-d input #10588

Closed
anirudhacharya opened this issue Apr 17, 2018 · 6 comments

Comments

@anirudhacharya
Member

anirudhacharya commented Apr 17, 2018

Description

I am not able to run a gluon.nn.LeakyReLU layer when the input to the network is 1-dimensional. See the reproducible example at the bottom of this issue. I get the following error:

Check failed: this->shape_.Size() == shape.Size() (3 vs. 0) TBlob.get_with_shape: new and old shape do not match total elements

Environment info (Required)

----------Python Info----------
('Version      :', '2.7.14')
('Compiler     :', 'GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)')
('Build        :', ('default', 'Dec  7 2017 11:07:58'))
('Arch         :', ('64bit', ''))
------------Pip Info-----------
('Version      :', '9.0.3')
('Directory    :', '/Users/aanirud/anaconda2/envs/onnx/lib/python2.7/site-packages/pip')
----------MXNet Info-----------
/Users/aanirud/.local/lib/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
objc[99942]: Class CaptureDelegate is implemented in both /usr/local/opt/opencv/lib/libopencv_videoio.3.4.dylib (0x114a44618) and /Users/aanirud/anaconda2/envs/onnx/lib/libopencv_videoio.3.3.dylib (0x1208325c8). One of the two will be used. Which one is undefined.
objc[99942]: Class CVWindow is implemented in both /usr/local/opt/opencv/lib/libopencv_highgui.3.4.dylib (0x114a1e1e8) and /Users/aanirud/anaconda2/envs/onnx/lib/libopencv_highgui.3.3.dylib (0x1208071e8). One of the two will be used. Which one is undefined.
objc[99942]: Class CVView is implemented in both /usr/local/opt/opencv/lib/libopencv_highgui.3.4.dylib (0x114a1e210) and /Users/aanirud/anaconda2/envs/onnx/lib/libopencv_highgui.3.3.dylib (0x120807210). One of the two will be used. Which one is undefined.
objc[99942]: Class CVSlider is implemented in both /usr/local/opt/opencv/lib/libopencv_highgui.3.4.dylib (0x114a1e238) and /Users/aanirud/anaconda2/envs/onnx/lib/libopencv_highgui.3.3.dylib (0x120807238). One of the two will be used. Which one is undefined.
('Version      :', '1.2.0')
('Directory    :', '/Users/aanirud/anaconda2/envs/onnx/lib/python2.7/site-packages/mxnet-1.2.0-py2.7.egg/mxnet')
Hashtag not found. Not installed from pre-built package.
----------System Info----------
('Platform     :', 'Darwin-16.7.0-x86_64-i386-64bit')
('system       :', 'Darwin')
('node         :', '8c85904b0bf4.ant.amazon.com')
('release      :', '16.7.0')
('version      :', 'Darwin Kernel Version 16.7.0: Tue Jan 30 11:27:06 PST 2018; root:xnu-3789.73.11~1/RELEASE_X86_64')
----------Hardware Info----------
('machine      :', 'x86_64')
('processor    :', 'i386')
machdep.cpu.extfeatures: SYSCALL XD 1GBPAGE EM64T LAHF LZCNT PREFETCHW RDTSCP TSCI
machdep.cpu.leaf7_features: SMEP ERMS RDWRFSGS TSC_THREAD_OFFSET BMI1 HLE AVX2 BMI2 INVPCID RTM SMAP RDSEED ADX IPT SGX FPU_CSDS MPX CLFSOPT
machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE SSE3 PCLMULQDQ DTES64 MON DSCPL VMX SMX EST TM2 SSSE3 FMA CX16 TPR PDCM SSE4.1 SSE4.2 x2APIC MOVBE POPCNT AES PCID XSAVE OSXSAVE SEGLIM64 TSCTMR AVX1.0 RDRAND F16C
machdep.cpu.brand_string: Intel(R) Core(TM) i7-7660U CPU @ 2.50GHz
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0162 sec, LOAD: 0.5096 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0147 sec, LOAD: 0.3571 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0187 sec, LOAD: 0.1854 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0166 sec, LOAD: 0.0504 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0158 sec, LOAD: 0.0632 sec.
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0172 sec, LOAD: 0.1345 sec.

Package used (Python/R/Scala/Julia):
I'm using Python

Compiler (gcc/clang/mingw/visual studio):
clang

MXNet commit hash:
ceb810c

Error Message:

---------------------------------------------------------------------------
MXNetError                                Traceback (most recent call last)
/Users/aanirud/anaconda2/envs/onnx/lib/python2.7/site-packages/IPython/core/formatters.pyc in __call__(self, obj)
    697                 type_pprinters=self.type_printers,
    698                 deferred_pprinters=self.deferred_printers)
--> 699             printer.pretty(obj)
    700             printer.flush()
    701             return stream.getvalue()

/Users/aanirud/anaconda2/envs/onnx/lib/python2.7/site-packages/IPython/lib/pretty.pyc in pretty(self, obj)
    381                             if callable(meth):
    382                                 return meth(obj, self, cycle)
--> 383             return _default_pprint(obj, self, cycle)
    384         finally:
    385             self.end_group()

/Users/aanirud/anaconda2/envs/onnx/lib/python2.7/site-packages/IPython/lib/pretty.pyc in _default_pprint(obj, p, cycle)
    501     if _safe_getattr(klass, '__repr__', None) not in _baseclass_reprs:
    502         # A user-provided repr. Find newlines and replace them with p.break_()
--> 503         _repr_pprint(obj, p, cycle)
    504         return
    505     p.begin_group(1, '<')

/Users/aanirud/anaconda2/envs/onnx/lib/python2.7/site-packages/IPython/lib/pretty.pyc in _repr_pprint(obj, p, cycle)
    699     """A pprint that just redirects to the normal repr function."""
    700     # Find newlines and replace them with p.break_()
--> 701     output = repr(obj)
    702     for idx,output_line in enumerate(output.splitlines()):
    703         if idx:

/Users/aanirud/anaconda2/envs/onnx/lib/python2.7/site-packages/mxnet-1.2.0-py2.7.egg/mxnet/ndarray/ndarray.pyc in __repr__(self)
    187         """Returns a string representation of the array."""
    188         shape_info = 'x'.join(['%d' % x for x in self.shape])
--> 189         return '\n%s\n<%s %s @%s>' % (str(self.asnumpy()),
    190                                       self.__class__.__name__,
    191                                       shape_info, self.context)

/Users/aanirud/anaconda2/envs/onnx/lib/python2.7/site-packages/mxnet-1.2.0-py2.7.egg/mxnet/ndarray/ndarray.pyc in asnumpy(self)
   1874             self.handle,
   1875             data.ctypes.data_as(ctypes.c_void_p),
-> 1876             ctypes.c_size_t(data.size)))
   1877         return data
   1878 

/Users/aanirud/anaconda2/envs/onnx/lib/python2.7/site-packages/mxnet-1.2.0-py2.7.egg/mxnet/base.pyc in check_call(ret)
    147     """
    148     if ret != 0:
--> 149         raise MXNetError(py_str(_LIB.MXGetLastError()))
    150 
    151 

MXNetError: [16:01:07] include/mxnet/./tensor_blob.h:257: Check failed: this->shape_.Size() == shape.Size() (3 vs. 0) TBlob.get_with_shape: new and old shape do not match total elements

Stack trace returned 10 entries:
[bt] (0) 0   libmxnet.so                         0x00000001125c8194 dmlc::StackTrace() + 276
[bt] (1) 1   libmxnet.so                         0x00000001125c7f4f dmlc::LogMessageFatal::~LogMessageFatal() + 47
[bt] (2) 2   libmxnet.so                         0x00000001125f5327 mshadow::Tensor<mshadow::cpu, 3, float> mxnet::TBlob::get_with_shape<mshadow::cpu, 3, float>(mshadow::Shape<3> const&, mshadow::Stream<mshadow::cpu>*) const + 807
[bt] (3) 3   libmxnet.so                         0x0000000113848abe mxnet::op::LeakyReLUOp<mshadow::cpu, float>::Forward(mxnet::OpContext const&, std::__1::vector<mxnet::TBlob, std::__1::allocator<mxnet::TBlob> > const&, std::__1::vector<mxnet::OpReqType, std::__1::allocator<mxnet::OpReqType> > const&, std::__1::vector<mxnet::TBlob, std::__1::allocator<mxnet::TBlob> > const&, std::__1::vector<mxnet::TBlob, std::__1::allocator<mxnet::TBlob> > const&) + 446
[bt] (4) 4   libmxnet.so                         0x00000001137d9df3 mxnet::op::OperatorState::Forward(mxnet::OpContext const&, std::__1::vector<mxnet::TBlob, std::__1::allocator<mxnet::TBlob> > const&, std::__1::vector<mxnet::OpReqType, std::__1::allocator<mxnet::OpReqType> > const&, std::__1::vector<mxnet::TBlob, std::__1::allocator<mxnet::TBlob> > const&) + 1795
[bt] (5) 5   libmxnet.so                         0x00000001136d2a24 mxnet::imperative::PushOperator(mxnet::OpStatePtr const&, nnvm::Op const*, nnvm::NodeAttrs const&, mxnet::Context const&, std::__1::vector<mxnet::engine::Var*, std::__1::allocator<mxnet::engine::Var*> > const&, std::__1::vector<mxnet::engine::Var*, std::__1::allocator<mxnet::engine::Var*> > const&, std::__1::vector<mxnet::Resource, std::__1::allocator<mxnet::Resource> > const&, std::__1::vector<mxnet::NDArray*, std::__1::allocator<mxnet::NDArray*> > const&, std::__1::vector<mxnet::NDArray*, std::__1::allocator<mxnet::NDArray*> > const&, std::__1::vector<unsigned int, std::__1::allocator<unsigned int> > const&, std::__1::vector<mxnet::OpReqType, std::__1::allocator<mxnet::OpReqType> > const&, mxnet::DispatchMode)::'lambda0'(mxnet::RunContext, mxnet::engine::CallbackOnComplete)::operator()(mxnet::RunContext, mxnet::engine::CallbackOnComplete) const + 612
[bt] (6) 6   libmxnet.so                         0x00000001136d277f std::__1::__function::__func<mxnet::imperative::PushOperator(mxnet::OpStatePtr const&, nnvm::Op const*, nnvm::NodeAttrs const&, mxnet::Context const&, std::__1::vector<mxnet::engine::Var*, std::__1::allocator<mxnet::engine::Var*> > const&, std::__1::vector<mxnet::engine::Var*, std::__1::allocator<mxnet::engine::Var*> > const&, std::__1::vector<mxnet::Resource, std::__1::allocator<mxnet::Resource> > const&, std::__1::vector<mxnet::NDArray*, std::__1::allocator<mxnet::NDArray*> > const&, std::__1::vector<mxnet::NDArray*, std::__1::allocator<mxnet::NDArray*> > const&, std::__1::vector<unsigned int, std::__1::allocator<unsigned int> > const&, std::__1::vector<mxnet::OpReqType, std::__1::allocator<mxnet::OpReqType> > const&, mxnet::DispatchMode)::'lambda0'(mxnet::RunContext), std::__1::allocator<mxnet::imperative::PushOperator(mxnet::OpStatePtr const&, nnvm::Op const*, nnvm::NodeAttrs const&, mxnet::Context const&, std::__1::vector<mxnet::engine::Var*, std::__1::allocator<mxnet::engine::Var*> > const&, std::__1::vector<mxnet::engine::Var*, std::__1::allocator<mxnet::engine::Var*> > const&, std::__1::vector<mxnet::Resource, std::__1::allocator<mxnet::Resource> > const&, std::__1::vector<mxnet::NDArray*, std::__1::allocator<mxnet::NDArray*> > const&, std::__1::vector<mxnet::NDArray*, std::__1::allocator<mxnet::NDArray*> > const&, std::__1::vector<unsigned int, std::__1::allocator<unsigned int> > const&, std::__1::vector<mxnet::OpReqType, std::__1::allocator<mxnet::OpReqType> > const&, mxnet::DispatchMode)::'lambda0'(mxnet::RunContext)>, void (mxnet::RunContext)>::operator()(mxnet::RunContext&&) + 63
[bt] (7) 7   libmxnet.so                         0x00000001136689f4 std::__1::__function::__func<mxnet::engine::ThreadedEngine::PushSync(std::__1::function<void (mxnet::RunContext)>, mxnet::Context, std::__1::vector<mxnet::engine::Var*, std::__1::allocator<mxnet::engine::Var*> > const&, std::__1::vector<mxnet::engine::Var*, std::__1::allocator<mxnet::engine::Var*> > const&, mxnet::FnProperty, int, char const*)::$_1, std::__1::allocator<mxnet::engine::ThreadedEngine::PushSync(std::__1::function<void (mxnet::RunContext)>, mxnet::Context, std::__1::vector<mxnet::engine::Var*, std::__1::allocator<mxnet::engine::Var*> > const&, std::__1::vector<mxnet::engine::Var*, std::__1::allocator<mxnet::engine::Var*> > const&, mxnet::FnProperty, int, char const*)::$_1>, void (mxnet::RunContext, mxnet::engine::CallbackOnComplete)>::operator()(mxnet::RunContext&&, mxnet::engine::CallbackOnComplete&&) + 52
[bt] (8) 8   libmxnet.so                         0x000000011366b2cc mxnet::engine::ThreadedEngine::ExecuteOprBlock(mxnet::RunContext, mxnet::engine::OprBlock*) + 652
[bt] (9) 9   libmxnet.so                         0x000000011366e2c1 mxnet::engine::ThreadedEnginePerDevice::PushToExecute(mxnet::engine::OprBlock*, bool)::'lambda'()::operator()() const::'lambda'(std::__1::shared_ptr<dmlc::ManualEvent>)::operator()(std::__1::shared_ptr<dmlc::ManualEvent>) const + 129

Minimum reproducible example

import mxnet as mx
from mxnet.gluon import nn
import mxnet.ndarray as nd
import numpy as np

# Network with a single LeakyReLU layer (slope alpha=2).
net = nn.Sequential()
with net.name_scope():
    net.add(nn.LeakyReLU(2))

net.collect_params().initialize()
for layer in net:
    print(layer)
    print(layer.params)

# A 1-dimensional input triggers the TBlob.get_with_shape check failure.
net_inputs = nd.array(np.random.randn(3).astype(np.float32), ctx=mx.cpu())
net_outputs = net(net_inputs)
print(net_outputs)

Steps to reproduce

  1. Run the above code snippet.

What have you tried to solve it?

  1. It works when the input to the network is 2-dimensional or more. For example, with

net_inputs = nd.array(np.random.randn(3, 4).astype(np.float32), ctx=mx.cpu())

the snippet runs without error. A possible workaround based on this is sketched below.
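Until the operator supports 1-D inputs, one possible workaround (my own sketch, not something verified in the original report) is to expand the 1-D array to 2-D before the layer and flatten the result back afterwards:

import mxnet as mx
from mxnet.gluon import nn
import mxnet.ndarray as nd
import numpy as np

net = nn.Sequential()
with net.name_scope():
    net.add(nn.LeakyReLU(2))
net.collect_params().initialize()

# Hypothetical workaround: give LeakyReLU a 2-D view of the 1-D vector by
# adding a leading axis, then drop that axis from the output again.
x = nd.array(np.random.randn(3).astype(np.float32), ctx=mx.cpu())
y = net(x.expand_dims(axis=0))   # shape (1, 3) instead of (3,)
print(y.reshape((-1,)))          # back to shape (3,)

This only sidesteps the shape check; the actual fix to the operator came later in this thread.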

@rajanksin
Contributor

@nswamy Please label as: Gluon, Operator, Question

@anirudhacharya
Member Author

@spidydev I think it is more of a bug than a question.

@rajanksin
Contributor

@cjolivier01 Please tag this as: Operator, Bug

@haojin2
Contributor

haojin2 commented Jul 21, 2018

Fix in #11850

@haojin2
Contributor

haojin2 commented Jul 22, 2018

Fix merged; please check it out and close the issue when you think it's okay. @anirudhacharya

@anirudhacharya
Member Author

@haojin2 thanks for the fix.
