# torchnet.meter

Meters provide a way to keep track of important statistics in an online manner. TNT also provides convenient ways to visualize and manage meters via the torchnet.logger.MeterLogger class.

class torchnet.meter.meter.Meter[source]

Meters provide a way to keep track of important statistics in an online manner.

This class is abstract, but provides a standard interface for all meters to follow.

add(value)[source]

Log a new value to the meter.

Parameters: value – Next result to include.
reset()[source]

Resets the meter to default settings.

value()[source]

Get the value of the meter in the current state.
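As an illustration of this interface (not part of torchnet itself), a minimal meter that follows the add/value/reset contract might look like:

```python
class SumMeter:
    """Minimal example meter following the Meter interface:
    add() logs a value, value() reports the statistic, reset() clears state."""

    def __init__(self):
        self.reset()

    def add(self, value):
        # Log a new value to the meter.
        self.total += value
        self.n += 1

    def value(self):
        # Current state of the meter: (running sum, number of values logged).
        return self.total, self.n

    def reset(self):
        # Restore the meter to default settings.
        self.total = 0.0
        self.n = 0
```

A real meter would subclass torchnet.meter.meter.Meter, but the contract is the same.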

## Classification Meters

### APMeter

class torchnet.meter.APMeter[source]

The APMeter measures the average precision per class.

The APMeter is designed to operate on NxK Tensors output and target, and optionally an Nx1 Tensor weight, where (1) the output contains model output scores for N examples and K classes that ought to be higher when the model is more convinced that the example should be positively labeled, and smaller when the model believes the example should be negatively labeled (for instance, the output of a sigmoid function); (2) the target contains only values 0 (for negative examples) and 1 (for positive examples); and (3) the weight (> 0) represents the weight for each sample.

add(output, target, weight=None)[source]

Parameters:
- output (Tensor) – NxK tensor that, for each of the N examples, indicates the probability of the example belonging to each of the K classes, according to the model. The probabilities should sum to one over all classes.
- target (Tensor) – binary NxK tensor that encodes which of the K classes are associated with the N-th input (e.g. a row [0, 1, 0, 1] indicates that the example is associated with classes 2 and 4).
- weight (optional, Tensor) – Nx1 tensor representing the weight for each example (each weight > 0).
reset()[source]

Resets the meter with empty member variables

value()[source]

Returns the model’s average precision for each class

Returns: ap (FloatTensor) – 1xK tensor with the average precision for each class k.
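To make the computation concrete, here is a pure-Python sketch of average precision for a single class: sort examples by descending score, then average the precision at each rank where a positive example is retrieved. This is an unweighted illustration, and `average_precision` is an illustrative name, not torchnet API; the APMeter implements the equivalent logic on tensors.

```python
def average_precision(scores, targets):
    """AP for one class: scores are model outputs, targets are 0/1 labels."""
    # Sort example indices by descending score.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = 0
    precisions = []
    for rank, i in enumerate(order, start=1):
        if targets[i] == 1:
            # Precision at this rank: positives seen so far / rank.
            tp += 1
            precisions.append(tp / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0
```

For example, scores [0.9, 0.8, 0.7, 0.6] with targets [1, 0, 1, 0] give precisions 1.0 (rank 1) and 2/3 (rank 3), so AP = 5/6.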

### mAPMeter

class torchnet.meter.mAPMeter[source]

The mAPMeter measures the mean average precision over all classes.

The mAPMeter is designed to operate on NxK Tensors output and target, and optionally an Nx1 Tensor weight, where (1) the output contains model output scores for N examples and K classes that ought to be higher when the model is more convinced that the example should be positively labeled, and smaller when the model believes the example should be negatively labeled (for instance, the output of a sigmoid function); (2) the target contains only values 0 (for negative examples) and 1 (for positive examples); and (3) the weight (> 0) represents the weight for each sample.
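The mean average precision is simply the per-class average precisions averaged over the K classes. A pure-Python sketch under that reading (function names are illustrative, not torchnet API):

```python
def average_precision(scores, targets):
    # AP for one class: mean precision at each correctly retrieved positive.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp, precisions = 0, []
    for rank, i in enumerate(order, start=1):
        if targets[i] == 1:
            tp += 1
            precisions.append(tp / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(outputs, targets):
    """outputs, targets: N rows of K per-class entries (NxK), as above."""
    k = len(outputs[0])
    aps = []
    for c in range(k):
        # Slice out column c: scores and binary labels for class c.
        scores = [row[c] for row in outputs]
        labels = [row[c] for row in targets]
        aps.append(average_precision(scores, labels))
    return sum(aps) / k
```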

### ClassErrorMeter

class torchnet.meter.ClassErrorMeter(topk=[1], accuracy=False)[source]
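This class is undocumented here; in torchnet it tracks the top-k classification error in percent (or accuracy, when accuracy=True) for each k in topk. A pure-Python sketch of the top-k error computation (`topk_error` is an illustrative name, not torchnet API):

```python
def topk_error(outputs, targets, k=1):
    """Percentage of examples whose target class is not among the
    k highest-scoring classes. outputs: NxK scores; targets: N class indices."""
    errors = 0
    for scores, target in zip(outputs, targets):
        # Indices of the k highest-scoring classes for this example.
        topk = sorted(range(len(scores)), key=lambda j: -scores[j])[:k]
        if target not in topk:
            errors += 1
    return 100.0 * errors / len(targets)
```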

### ConfusionMeter

class torchnet.meter.ConfusionMeter(k, normalized=False)[source]

Maintains a confusion matrix for a given classification problem.

The ConfusionMeter constructs a confusion matrix for multi-class classification problems. It does not support multi-label, multi-class problems: for such problems, please use MultiLabelConfusionMeter.

Parameters:
- k (int) – number of classes in the classification problem.
- normalized (boolean) – determines whether or not the confusion matrix is normalized.

add(predicted, target)[source]

Computes the confusion matrix of K x K size, where K is the number of classes.

Parameters:
- predicted (tensor) – can be an N x K tensor of predicted scores obtained from the model for N examples and K classes, or an N-tensor of integer values between 0 and K-1.
- target (tensor) – can be an N-tensor of integer values between 0 and K-1, or an N x K tensor where targets are provided as one-hot vectors.
value()[source]

Returns: Confusion matrix of K rows and K columns, where rows correspond to ground-truth targets and columns correspond to predicted targets.
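A pure-Python sketch of the matrix the meter accumulates, for the class-index input form only (score or one-hot inputs would first be reduced with an argmax; `confusion_matrix` is an illustrative name, not torchnet API):

```python
def confusion_matrix(predicted, target, k, normalized=False):
    """conf[t][p]: rows are ground-truth classes, columns predicted classes."""
    conf = [[0] * k for _ in range(k)]
    for p, t in zip(predicted, target):
        conf[t][p] += 1
    if normalized:
        # Normalize each non-empty row so it sums to 1.
        for row in conf:
            total = sum(row)
            if total:
                row[:] = [c / total for c in row]
    return conf
```

For example, predictions [0, 1, 1] against targets [0, 1, 0] put one count on each diagonal entry and one in row 0, column 1 (a class-0 example predicted as class 1).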

## Regression/Loss Meters

### AverageValueMeter

class torchnet.meter.AverageValueMeter[source]
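This class is undocumented here; in torchnet it tracks the running mean (and standard deviation) of all values added since the last reset, e.g. for averaging a loss over batches. A sketch of the mean-tracking part, interpreting add(value, n) as logging value n times (class name is illustrative):

```python
class SimpleAverageValueMeter:
    """Sketch: running mean of all added values (torchnet's meter
    additionally reports the standard deviation)."""

    def __init__(self):
        self.reset()

    def reset(self):
        self.sum = 0.0
        self.n = 0

    def add(self, value, n=1):
        # n lets a value stand for a batch of n identical observations.
        self.sum += value * n
        self.n += n

    def value(self):
        return self.sum / self.n
```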

### AUCMeter

class torchnet.meter.AUCMeter[source]

The AUCMeter measures the area under the receiver-operating characteristic (ROC) curve for binary classification problems. The area under the curve (AUC) can be interpreted as the probability that, given a randomly selected positive example and a randomly selected negative example, the positive example is assigned a higher score by the classification model than the negative example.

The AUCMeter is designed to operate on one-dimensional Tensors output and target, where (1) the output contains model output scores that ought to be higher when the model is more convinced that the example should be positively labeled, and smaller when the model believes the example should be negatively labeled (for instance, the output of a sigmoid function); and (2) the target contains only values 0 (for negative examples) and 1 (for positive examples).
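The probabilistic interpretation above can be computed directly as the Mann-Whitney rank statistic: the fraction of (positive, negative) pairs in which the positive example scores higher, counting ties as half. A pure-Python sketch under that reading (`auc` is an illustrative name; torchnet's meter integrates the ROC curve, which yields the same quantity up to tie handling):

```python
def auc(scores, targets):
    """AUC as P(score of random positive > score of random negative)."""
    pos = [s for s, t in zip(scores, targets) if t == 1]
    neg = [s for s, t in zip(scores, targets) if t == 0]
    # Count pairwise wins; a tie counts as half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, scores [0.8, 0.6, 0.4, 0.2] with targets [1, 0, 1, 0] give 3 wins out of 4 pairs, so AUC = 0.75.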

### MovingAverageValueMeter

class torchnet.meter.MovingAverageValueMeter(windowsize)[source]
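This class is undocumented here; in torchnet it reports the average of the most recent windowsize values added, which smooths a noisy statistic such as a per-batch loss. A stdlib sketch using collections.deque (class name is illustrative):

```python
from collections import deque


class SimpleMovingAverageValueMeter:
    """Sketch: mean of the most recent `windowsize` values."""

    def __init__(self, windowsize):
        self.window = deque(maxlen=windowsize)

    def add(self, value):
        # Once the window is full, the oldest value is evicted automatically.
        self.window.append(value)

    def value(self):
        return sum(self.window) / len(self.window)

    def reset(self):
        self.window.clear()
```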

### MSEMeter

class torchnet.meter.MSEMeter(root=False)[source]
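This class is undocumented here; in torchnet it accumulates the mean squared error between outputs and targets, reporting the root mean squared error instead when root=True. A pure-Python sketch of that behavior (class name is illustrative):

```python
class SimpleMSEMeter:
    """Sketch: online mean squared error, optionally reported as RMSE."""

    def __init__(self, root=False):
        self.root = root
        self.reset()

    def reset(self):
        self.n = 0
        self.sse = 0.0  # sum of squared errors

    def add(self, output, target):
        # output, target: sequences of predicted and true values.
        for o, t in zip(output, target):
            self.sse += (o - t) ** 2
            self.n += 1

    def value(self):
        mse = self.sse / self.n
        return mse ** 0.5 if self.root else mse
```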

## Miscellaneous Meters

### TimeMeter

class torchnet.meter.TimeMeter(unit)[source]

The tnt.TimeMeter is designed to measure the time between events, and can be used to measure, for instance, the average processing time per batch of data. It differs from most other meters in the methods it provides:

- reset() resets the timer, setting the timer and unit counter to zero.
- value() returns the time passed since the last reset(), divided by the unit counter when unit=True.
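A minimal sketch of the reset()/value() behavior described above, using Python's time.monotonic. Since this section does not document how the unit counter advances, the increment is shown as a hypothetical tick() helper, and the class name is illustrative:

```python
import time


class SimpleTimeMeter:
    """Sketch of a timer meter: measures time since the last reset().
    When unit=True, value() divides the elapsed time by a unit counter."""

    def __init__(self, unit=False):
        self.unit = unit
        self.reset()

    def reset(self):
        # Reset the timer and the unit counter to zero.
        self.start = time.monotonic()
        self.n = 0

    def tick(self, units=1):
        # Hypothetical helper: advance the unit counter (e.g. once per batch).
        self.n += units

    def value(self):
        elapsed = time.monotonic() - self.start
        if self.unit and self.n > 0:
            return elapsed / self.n
        return elapsed
```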