
getParameters can return empty tensor #3

Merged — 1 commit, merged Jan 7, 2014

Conversation

jameskirkpatrick (Contributor)

If parameters() is not defined or returns an empty table,
getParameters() will return an empty tensor.

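A minimal sketch of the patched behaviour described above (the module names here are standard nn modules; the expected result assumes this fix is applied):

```lua
require 'nn'

-- A container whose members have no learnable parameters:
-- Tanh defines no weight or bias, so parameters() yields empty tables.
local m = nn.Sequential()
m:add(nn.Tanh())

-- With this fix, getParameters() returns an empty tensor
-- instead of failing on the empty parameter list.
local p, gp = m:getParameters()
print(p:nElement())  -- should be 0
```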
clementfarabet added a commit that referenced this pull request Jan 7, 2014
getParameters can return empty tensor
@clementfarabet clementfarabet merged commit 71fa2f8 into torch:master Jan 7, 2014
@clementfarabet (Member)

Thank you!

soumith pushed a commit that referenced this pull request Dec 29, 2015
curious-attempt-bunny added a commit to curious-attempt-bunny/nn that referenced this pull request May 28, 2016
Prior to this fix I was getting:

```
/Users/home/torch/install/bin/luajit: /Users/home/torch/install/share/lua/5.1/nn/THNN.lua:109: bad argument #3 to 'v' (cannot convert 'struct THByteTensor *' to 'struct THDoubleTensor *')
stack traceback:
	[C]: in function 'v'
	/Users/home/torch/install/share/lua/5.1/nn/THNN.lua:109: in function 'MSECriterion_updateOutput'
	/Users/home/torch/install/share/lua/5.1/nn/MSECriterion.lua:14: in function 'forward'
	sum_using_bit_represenation.lua:42: in function 'opfunc'
	/Users/home/torch/install/share/lua/5.1/optim/sgd.lua:44: in function 'sgd'
	sum_using_bit_represenation.lua:48: in main chunk
	[C]: in function 'dofile'
	...home/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
	[C]: at 0x010088d1f0
```

The composed snippets that were causing this error for me were:

```lua
require 'nn'

local model = nn.Sequential();  -- make a multi-layer perceptron
local inputs = 2; local outputs = 1; local HUs = 20; -- parameters
model:add(nn.Linear(inputs, HUs))
model:add(nn.Tanh())
model:add(nn.Linear(HUs, outputs))

local criterion = nn.MSECriterion()

local batchSize = 128
local batchInputs = torch.Tensor(batchSize, inputs)
local batchLabels = torch.ByteTensor(batchSize)

for i=1,batchSize do
  local input = torch.randn(2)     -- normally distributed example in 2d
  local label = 1
  if input[1]*input[2]>0 then     -- calculate label for XOR function
    label = -1;
  end
  batchInputs[i]:copy(input)
  batchLabels[i] = label
end

local params, gradParams = model:getParameters()

local optimState = {learningRate=0.01}

require 'optim'

for epoch=1,50 do
  -- local function we give to optim
  -- it takes current weights as input, and outputs the loss
  -- and the gradient of the loss with respect to the weights
  -- gradParams is calculated implicitly by calling 'backward',
  -- because the model's weight and bias gradient tensors
  -- are simply views onto gradParams
  local function feval(params)
    gradParams:zero()

    local outputs = model:forward(batchInputs)
    local loss = criterion:forward(outputs, batchLabels)
    local dloss_doutput = criterion:backward(outputs, batchLabels)
    model:backward(batchInputs, dloss_doutput)

    return loss,gradParams
  end
  optim.sgd(feval, params, optimState)
end

x = torch.Tensor(2)
x[1] =  0.5; x[2] =  0.5; print(model:forward(x))
x[1] =  0.5; x[2] = -0.5; print(model:forward(x))
x[1] = -0.5; x[2] =  0.5; print(model:forward(x))
x[1] = -0.5; x[2] = -0.5; print(model:forward(x))
```
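The trace points at the target tensor's type: MSECriterion's C backend expects the target to match the input's default DoubleTensor type, so the ByteTensor labels cannot be converted. A likely one-line fix for the snippet above (inferred from the error message, not taken from the commit diff) is:

```lua
-- Use the default tensor type for the targets so MSECriterion
-- receives a DoubleTensor instead of a ByteTensor.
local batchLabels = torch.Tensor(batchSize)  -- was: torch.ByteTensor(batchSize)
```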
This was referenced Sep 27, 2016
@zaktab zaktab mentioned this pull request Dec 16, 2017