Is there a best practice or an efficient way to implement a random classifier in PyTorch? Mine basically looks like this:
def forward(self, inputs):
    # ignore the inputs and return uniformly random logits
    batch_size = inputs.shape[0]
    logits = torch.rand(batch_size, self.num_targets, self.num_classes)
    return logits
This works in principle, but the optimizer raises a ValueError ("optimizer got an empty parameter list") because this classifier, unlike every other classifier/model in the system, obviously has no parameters that could be optimized. Is there a built-in torch solution for this, or do I have to change the system's architecture so that it skips optimization entirely?
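One workaround that comes to mind (assuming I were allowed to touch the training loop, which is exactly what I'd like to avoid) is to skip optimizer construction for parameterless models, roughly like this:

import torch
import torch.nn as nn

def make_optimizer(model: nn.Module):
    # hypothetical helper: return None instead of letting torch.optim
    # raise when the model (e.g. the random baseline) has no parameters
    params = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.Adam(params) if params else None

The training step would then have to guard zero_grad() / step() behind an `if optimizer is not None`, which is precisely the kind of architectural change I was hoping to avoid.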
Edit: If I add some arbitrary parameters to the model, as shown below, calling backward() on the loss raises a RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
def __init__(self, transformer_models: Dict, opt: Namespace):
    super(RandomMulti, self).__init__()
    self.num_classes = opt.polarities_dim
    # add some parameters so that the optimizer doesn't raise an exception
    self.some_params = nn.Linear(2, 2)
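As far as I understand it, the RuntimeError occurs because torch.rand creates the logits outside the autograd graph, so the dummy parameters never appear in the loss and there is no grad_fn for backward() to traverse. The only hack I could come up with (just a sketch, assuming num_targets and num_classes are stored on self) is to tie the logits to the dummy parameters with a zero-weight term:

def forward(self, inputs):
    batch_size = inputs.shape[0]
    logits = torch.rand(batch_size, self.num_targets, self.num_classes)
    # the zero-weight term contributes nothing to the logits, but it
    # gives the loss a grad_fn, so backward() no longer fails; all
    # gradients are zero, so the optimizer step never changes anything
    return logits + 0.0 * self.some_params.weight.sum()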
My assumption would really be that there is a simpler solution, since a random baseline classifier is a rather common thing in machine learning.
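For reference, here is a minimal self-contained repro of the workaround above (all names and shapes are made up for the example):

import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomMulti(nn.Module):
    # stripped-down version of my model, for illustration only
    def __init__(self, num_targets: int, num_classes: int):
        super().__init__()
        self.num_targets = num_targets
        self.num_classes = num_classes
        self.some_params = nn.Linear(2, 2)  # dummy, keeps the optimizer happy

    def forward(self, inputs):
        batch_size = inputs.shape[0]
        logits = torch.rand(batch_size, self.num_targets, self.num_classes)
        return logits + 0.0 * self.some_params.weight.sum()

model = RandomMulti(num_targets=1, num_classes=3)
optimizer = torch.optim.Adam(model.parameters())
inputs = torch.zeros(4, 8)
targets = torch.randint(0, 3, (4, 1))

loss = F.cross_entropy(model(inputs).view(-1, 3), targets.view(-1))
loss.backward()   # works now: the loss has a grad_fn
optimizer.step()  # effectively a no-op, since all gradients are zero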