
For learning purposes, I'm trying to build a simple perceptron with PyTorch that should not be trained but simply produce the output for fixed, hand-set weights. Here's the code:

import torch.nn
from torch import tensor

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = torch.nn.Linear(3,1)
        self.relu = torch.nn.ReLU()
        # force weights to equal one
        with torch.no_grad():
            self.fc1.weight = torch.nn.Parameter(torch.ones_like(self.fc1.weight))

    def forward(self, x):
        x = self.fc1(x)
        output = self.relu(x)
        return output

net = Net()
test_tensor = tensor([1, 1, 1])
print(net(test_tensor.float()).item())

I expect this single-layer neural network to output 3. And that is roughly(!) the output on every execution, but it ranges from about 2.5 to 3.5. Where does randomness enter the model?

1 Answer


Q: Where does randomness enter the model?

It comes from the bias initialization. As the PyTorch source for nn.Linear shows, the bias is not initialized to zero as you expected: both the weight and the bias are drawn from U(-sqrt(k), sqrt(k)) with k = 1/in_features. For in_features = 3 the bound is sqrt(1/3) ≈ 0.577, which is why your output lands roughly in the range (2.42, 3.58).
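You can observe this directly by creating the layer a few times and printing the bias. A minimal sketch:

import torch

# Each fresh Linear layer draws its bias from U(-sqrt(1/3), sqrt(1/3)),
# i.e. roughly U(-0.577, 0.577) when in_features = 3.
for _ in range(3):
    fc = torch.nn.Linear(3, 1)
    print(fc.bias.item())  # a different value in (-0.577, 0.577) on each run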

You can fix it this way:

import torch

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = torch.nn.Linear(3, 1)
        self.relu = torch.nn.ReLU()
        # force the weights to one and the bias to zero
        with torch.no_grad():
            torch.nn.init.ones_(self.fc1.weight)
            torch.nn.init.zeros_(self.fc1.bias)

    def forward(self, x):
        x = self.fc1(x)
        output = self.relu(x)
        return output

x = torch.tensor([1., 1., 1.])
Net()(x)
# >>> tensor([3.], grad_fn=<ReluBackward0>)
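If the layer never needs a bias at all, another option is to construct it without one via bias=False; a small variation on the same idea (NetNoBias is just an illustrative name):

import torch

class NetNoBias(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # bias=False removes the bias parameter entirely, so once the
        # weights are fixed there is nothing random left in the layer
        self.fc1 = torch.nn.Linear(3, 1, bias=False)
        self.relu = torch.nn.ReLU()
        with torch.no_grad():
            torch.nn.init.ones_(self.fc1.weight)

    def forward(self, x):
        return self.relu(self.fc1(x))

print(NetNoBias()(torch.tensor([1., 1., 1.])))
# >>> tensor([3.], grad_fn=<ReluBackward0>)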