torchnet.engine

Engines are a utility that wraps a training loop. They provide several hooks which allow users to define their own functions to run at specified points during the train/val loop.

Some people like engines, others do not. TNT is built modularly, so you can use the other modules with or without an engine.
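
For concreteness, here is a minimal training sketch. It assumes the tnt-style contract in which the engine repeatedly calls network(sample) and expects a (loss, output) pair back; the toy model, data, and hook are purely illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchnet.engine import Engine

# Toy model and in-memory "iterator", only to make the sketch runnable.
model = nn.Linear(4, 2)
data = [(torch.randn(8, 4), torch.randint(0, 2, (8,))) for _ in range(5)]

def network(sample):
    # The engine drives training through this closure: sample in, (loss, output) out.
    inputs, targets = sample
    output = model(inputs)
    return F.cross_entropy(output, targets), output

def on_forward(state):
    # Runs after every forward/backward pass; state is the engine's state dictionary.
    print('epoch %d  iter %d  loss %.4f'
          % (state['epoch'], state['t'], state['loss'].item()))

engine = Engine()
engine.hooks['on_forward'] = on_forward
engine.train(network, data, maxepoch=2,
             optimizer=torch.optim.SGD(model.parameters(), lr=0.1))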

torchnet.engine.Engine

class torchnet.engine.Engine[source]

Bases: object

hook(name, state)[source]

Fires the hook registered under name, if any, by calling it with the engine's state dictionary.

Hooks are registered by assigning callables to the engine's hooks dictionary, e.g. engine.hooks['on_forward'] = my_hook. During train() and test() the engine calls this method at fixed points in the loop with names such as 'on_start', 'on_start_epoch', 'on_sample', 'on_forward', 'on_update', 'on_end_epoch' and 'on_end', so a registered hook can inspect or update the current state (sample, output, loss, iteration counter, epoch, and so on). If no hook is registered under name, the call is a no-op.
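
Example (a minimal sketch of the dispatch behavior; the hook name and state contents here are arbitrary):

>>> from torchnet.engine import Engine
>>> engine = Engine()
>>> engine.hooks['on_end_epoch'] = lambda state: print('finished epoch', state['epoch'])
>>> engine.hook('on_end_epoch', {'epoch': 3})
finished epoch 3
>>> engine.hook('on_sample', {})  # no hook registered under this name, so nothing happens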

test(network, iterator)[source]

Runs a single evaluation pass of network over the samples in iterator, calling network(sample) for each one and firing the 'on_start', 'on_sample', 'on_forward' and 'on_end' hooks; no parameters are updated.

train(network, iterator, maxepoch, optimizer)[source]

Trains network for maxepoch epochs over iterator. For each sample the engine zeroes the optimizer's gradients, calls network(sample) to obtain a (loss, output) pair, backpropagates the loss and takes an optimizer step, firing the 'on_start', 'on_start_epoch', 'on_sample', 'on_forward', 'on_update', 'on_end_epoch' and 'on_end' hooks along the way, and returns the final state dictionary.
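
A matching evaluation sketch (again assuming the (loss, output) contract described above; test() makes a single pass and performs no parameter updates, so the hook below only records the per-batch loss; the toy model and data are illustrative):

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchnet.engine import Engine

model = nn.Linear(4, 2)
val_data = [(torch.randn(8, 4), torch.randint(0, 2, (8,))) for _ in range(3)]

def network(sample):
    inputs, targets = sample
    output = model(inputs)
    return F.cross_entropy(output, targets), output

losses = []
engine = Engine()
# test() fires 'on_forward' once per batch; collect the loss from the state dict.
engine.hooks['on_forward'] = lambda state: losses.append(state['loss'].item())
engine.test(network, val_data)
print('mean validation loss: %.4f' % (sum(losses) / len(losses)))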