
PyTorch nn.Module Generalization

Let us take a look at this simple class:

    class Temp1(nn.Module):
        def __init__(self, stateSize, actionSize, layers=[10, 5],
                     activations=[F.tanh, F.tanh]):
            super(Temp1, self).__init__()
            # ... (the rest of the question's code is truncated in the source)

Solution 1:

The problem is that the nn.Linear layers in the "generalized" version are stored in a regular Python list (self.fcLayers). PyTorch does not know to look for nn.Parameters inside regular Python members of an nn.Module.
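To see this concretely, here is a minimal sketch (the class name and layer sizes are illustrative, not the asker's exact code): a module that keeps its layers in a plain Python list exposes no parameters at all.

    import torch
    import torch.nn as nn

    class Broken(nn.Module):
        def __init__(self):
            super(Broken, self).__init__()
            # Layers hidden inside a plain Python list are invisible to PyTorch.
            self.fcLayers = [nn.Linear(4, 10), nn.Linear(10, 2)]

        def forward(self, x):
            for fc in self.fcLayers:
                x = torch.tanh(fc(x))
            return x

    model = Broken()
    print(len(list(model.parameters())))  # prints 0: nothing was registered

An optimizer built from model.parameters() would therefore have nothing to train.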

Answer: If you wish to store nn.Modules in a way that PyTorch can manage, you need to use specialized PyTorch containers. For instance, if you use nn.ModuleList instead of a regular Python list:

self.fcLayers = nn.ModuleList([])

your example should work fine.
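Putting it together, here is a minimal sketch of the generalized class with the fix applied. The layer wiring, defaults, and forward pass are illustrative assumptions, since the question's code is truncated; torch.tanh is used in place of the deprecated F.tanh.

    import torch
    import torch.nn as nn

    class Temp1(nn.Module):
        def __init__(self, stateSize, actionSize, layers=[10, 5],
                     activations=[torch.tanh, torch.tanh]):
            super(Temp1, self).__init__()
            # nn.ModuleList registers every nn.Linear as a proper submodule.
            sizes = [stateSize] + layers + [actionSize]
            self.fcLayers = nn.ModuleList(
                [nn.Linear(sizes[i], sizes[i + 1]) for i in range(len(sizes) - 1)]
            )
            self.activations = activations

        def forward(self, x):
            # Apply an activation after every layer except the last.
            for fc, act in zip(self.fcLayers[:-1], self.activations):
                x = act(fc(x))
            return self.fcLayers[-1](x)

    model = Temp1(stateSize=4, actionSize=2)
    print(sum(p.numel() for p in model.parameters()))  # 117: parameters found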

BTW, you need PyTorch to know that members of your nn.Module are modules themselves not only to get at their parameters, but also for other functions, such as moving them to GPU/CPU, setting their mode to eval/training, etc.
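For instance, continuing with the Temp1 sketch above, a single call on the parent module reaches every layer registered in the nn.ModuleList:

    # Continuing from the Temp1 sketch above:
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = Temp1(stateSize=4, actionSize=2).to(device)  # moves every fc layer
    model.eval()   # switches every submodule to evaluation mode
    model.train()  # and back to training mode
    print(next(model.parameters()).device)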
