CollaborativeCoding.metrics
===========================

.. py:module:: CollaborativeCoding.metrics


Submodules
----------

.. toctree::
   :maxdepth: 1

   /autoapi/CollaborativeCoding/metrics/EntropyPred/index
   /autoapi/CollaborativeCoding/metrics/F1/index
   /autoapi/CollaborativeCoding/metrics/accuracy/index
   /autoapi/CollaborativeCoding/metrics/precision/index
   /autoapi/CollaborativeCoding/metrics/recall/index


Classes
-------

.. autoapisummary::

   CollaborativeCoding.metrics.Accuracy
   CollaborativeCoding.metrics.EntropyPrediction
   CollaborativeCoding.metrics.F1Score
   CollaborativeCoding.metrics.Precision
   CollaborativeCoding.metrics.Recall


Package Contents
----------------

.. py:class:: Accuracy(num_classes, macro_averaging=False)

   Bases: :py:obj:`torch.nn.Module`

   Computes the accuracy of a model's predictions.

   Args
   ----
   num_classes : int
       The number of classes in the classification task.
   macro_averaging : bool, optional
       If True, computes macro-average accuracy. Otherwise, computes
       micro-average accuracy. Default is False.

   Methods
   -------
   forward(y_true, y_pred)
       Stores the true and predicted labels. Typically called for each batch
       during the forward pass of a model.
   _macro_acc()
       Computes the macro-average accuracy.
   _micro_acc()
       Computes the micro-average accuracy.
   __returnmetric__()
       Returns the computed accuracy based on the averaging method for all
       stored predictions.
   __reset__()
       Resets the stored true and predicted labels.

   Examples
   --------
   >>> y_true = torch.tensor([0, 1, 2, 3, 3])
   >>> y_pred = torch.tensor([0, 1, 2, 3, 0])
   >>> accuracy = Accuracy(num_classes=4)
   >>> accuracy(y_true, y_pred)
   >>> accuracy.__returnmetric__()
   0.8
   >>> accuracy.__reset__()
   >>> accuracy.macro_averaging = True
   >>> accuracy(y_true, y_pred)
   >>> accuracy.__returnmetric__()
   0.875


   .. py:attribute:: num_classes


   .. py:attribute:: macro_averaging
      :value: False


   .. py:attribute:: y_true
      :value: []


   .. py:attribute:: y_pred
      :value: []


   .. py:method:: forward(y_true, y_pred)

      Store the true and predicted labels.

      Parameters
      ----------
      y_true : torch.Tensor
          True labels.
      y_pred : torch.Tensor
          Predicted labels. Either a 1D tensor of shape (batch_size,) or a
          2D tensor of shape (batch_size, num_classes).


   .. py:method:: _macro_acc()

      Compute the macro-average accuracy on the stored predictions.

      Returns
      -------
      float
          Macro-average accuracy score.


   .. py:method:: _micro_acc()

      Compute the micro-average accuracy on the stored predictions.

      Returns
      -------
      float
          Micro-average accuracy score.


   .. py:method:: __returnmetric__()

      Return the computed accuracy based on the averaging method for all
      stored predictions.

      Returns
      -------
      float
          Computed accuracy score.


   .. py:method:: __reset__()

      Reset the stored true and predicted labels.

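   The sketch below shows one way the documented interface can be driven
   across several batches. It is a minimal, hedged example: the import path
   follows the module name above, the batch tensors are purely illustrative,
   and passing 2D logits relies on the shapes accepted by ``forward`` as
   documented.

   .. code-block:: python

      import torch

      from CollaborativeCoding.metrics import Accuracy

      accuracy = Accuracy(num_classes=4, macro_averaging=True)

      for _ in range(3):  # stand-in for a dataloader loop
          y_true = torch.randint(0, 4, (8,))  # true labels, shape (batch_size,)
          logits = torch.randn(8, 4)          # model outputs, shape (batch_size, num_classes)
          accuracy(y_true, logits)            # forward() stores labels and predictions

      epoch_acc = accuracy.__returnmetric__()  # accuracy over all stored batches
      accuracy.__reset__()                     # clear state before the next epoch
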
.. py:class:: EntropyPrediction(num_classes, macro_averaging=None)

   Bases: :py:obj:`torch.nn.Module`

   Base class for all neural network modules.

   Your models should also subclass this class.

   Modules can also contain other Modules, allowing them to be nested in
   a tree structure. You can assign the submodules as regular attributes::

       import torch.nn as nn
       import torch.nn.functional as F


       class Model(nn.Module):
           def __init__(self) -> None:
               super().__init__()
               self.conv1 = nn.Conv2d(1, 20, 5)
               self.conv2 = nn.Conv2d(20, 20, 5)

           def forward(self, x):
               x = F.relu(self.conv1(x))
               return F.relu(self.conv2(x))

   Submodules assigned in this way will be registered, and will also have
   their parameters converted when you call :meth:`to`, etc.

   .. note::
      As per the example above, an ``__init__()`` call to the parent class
      must be made before assignment on the child.

   :ivar training: Boolean represents whether this module is in training or
                   evaluation mode.
   :vartype training: bool


   .. py:attribute:: stored_entropy_values
      :value: []


   .. py:attribute:: num_classes


   .. py:method:: __call__(y_true: torch.Tensor, y_logits: torch.Tensor)

      Computes the Shannon entropy of the predicted logits and stores the
      results.

      Args:
          y_true: The true labels. This parameter is not used in the
              computation but is included for compatibility with certain
              interfaces.
          y_logits: The predicted logits from which entropy is calculated.

      Returns:
          torch.Tensor: The aggregated entropy value(s) based on the specified
          method ('mean', 'sum', or 'none').


   .. py:method:: __returnmetric__()


   .. py:method:: __reset__()


.. py:class:: F1Score(num_classes, macro_averaging=False)

   Bases: :py:obj:`torch.nn.Module`

   Computes the F1 score for classification tasks with support for both macro
   and micro averaging.

   This class allows you to compute the F1 score during training or
   evaluation. You can select between two methods of averaging:

   - **Micro Averaging**: Computes the F1 score globally, treating each
     individual prediction as equally important.
   - **Macro Averaging**: Computes the F1 score for each class individually
     and then averages the scores.

   Parameters
   ----------
   num_classes : int
       The number of classes in the classification task.
   macro_averaging : bool, optional, default=False
       If True, computes the macro-averaged F1 score. If False, computes the
       micro-averaged F1 score. Default is micro averaging.

   Attributes
   ----------
   num_classes : int
       The number of classes in the classification task.
   macro_averaging : bool
       A flag to determine whether to compute the macro-averaged or
       micro-averaged F1 score.
   y_true : list
       A list to store true labels for the current batch.
   y_pred : list
       A list to store predicted labels for the current batch.

   Methods
   -------
   forward(target, preds)
       Stores predictions and true labels for computing the F1 score during
       training or evaluation.
   compute_f1()
       Computes and returns the F1 score based on the stored predictions and
       true labels.
   _micro_F1(target, preds)
       Computes the micro-averaged F1 score based on the global true positive,
       false positive, and false negative counts.
   _macro_F1(target, preds)
       Computes the macro-averaged F1 score by calculating the F1 score per
       class and then averaging across all classes.
   __returnmetric__()
       Computes and returns the F1 score (micro or macro) as specified.
   __reset__()
       Resets the stored predictions and true labels, preparing for the next
       batch or epoch.


   .. py:attribute:: num_classes


   .. py:attribute:: macro_averaging
      :value: False


   .. py:attribute:: y_true
      :value: []


   .. py:attribute:: y_pred
      :value: []


   .. py:method:: forward(target, preds)

      Stores the true labels and predictions to compute the F1 score.

      Parameters
      ----------
      target : torch.Tensor
          True labels (shape: [batch_size]).
      preds : torch.Tensor
          Predicted logits (shape: [batch_size, num_classes]).


   .. py:method:: _micro_F1(target, preds)

      Computes the micro-averaged F1 score (global TP, FP, FN).


   .. py:method:: _macro_F1(target, preds)

      Computes the macro-averaged F1 score.


   .. py:method:: __returnmetric__()

      Computes and returns the F1 score (micro or macro) based on the stored
      predictions and targets.

      Returns
      -------
      torch.Tensor
          The computed F1 score. Returns NaN if no predictions or targets are
          available.


   .. py:method:: __reset__()

      Resets the stored predictions and targets for the next batch or epoch.

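   As a usage sketch (assuming the import path above and logits of shape
   ``[batch_size, num_classes]``, as stated in the ``forward`` docstring; the
   loop and tensor values are illustrative only):

   .. code-block:: python

      import torch

      from CollaborativeCoding.metrics import F1Score

      f1 = F1Score(num_classes=3, macro_averaging=True)

      for _ in range(2):  # stand-in for a validation loop
          target = torch.randint(0, 3, (16,))  # true labels, shape (batch_size,)
          preds = torch.randn(16, 3)           # logits, shape (batch_size, num_classes)
          f1(target, preds)                    # forward() accumulates labels and predictions

      macro_f1 = f1.__returnmetric__()  # macro-averaged F1 over the stored batches
      f1.__reset__()                    # clear stored labels for the next epoch
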
.. py:class:: Precision(num_classes: int, macro_averaging: bool = False)

   Bases: :py:obj:`torch.nn.Module`

   Metric module for precision. Can calculate both the micro- and
   macro-averaged precision.

   Parameters
   ----------
   num_classes : int
       Number of classes in the dataset.
   macro_averaging : bool
       Performs macro-averaging if True, otherwise micro-averaging.


   .. py:attribute:: num_classes


   .. py:attribute:: macro_averaging
      :value: False


   .. py:attribute:: y_true
      :value: []


   .. py:attribute:: y_pred
      :value: []


   .. py:method:: forward(y_true: torch.tensor, logits: torch.tensor) -> torch.tensor

      Add true and predicted values to the class-global lists.

      Parameters
      ----------
      y_true : torch.tensor
          True labels
      logits : torch.tensor
          Predicted logits


   .. py:method:: _micro_avg_precision(y_true: torch.tensor, y_pred: torch.tensor) -> torch.tensor

      Compute the micro-averaged precision by first counting true/false
      positives across all classes and then computing the precision.

      Parameters
      ----------
      y_true : torch.tensor
          True labels
      y_pred : torch.tensor
          Predicted labels

      Returns
      -------
      torch.tensor
          Micro-averaged precision


   .. py:method:: _macro_avg_precision(y_true: torch.tensor, y_pred: torch.tensor) -> torch.tensor

      Compute the macro-averaged precision by finding true/false positives for
      each class separately and then averaging across all classes.

      Parameters
      ----------
      y_true : torch.tensor
          True labels
      y_pred : torch.tensor
          Predicted labels

      Returns
      -------
      torch.tensor
          Macro-averaged precision


   .. py:method:: __returnmetric__()

      Return the micro- or macro-averaged precision.

      Returns
      -------
      torch.tensor
          Micro- or macro-averaged precision


   .. py:method:: __reset__()

      Reset the class-global lists of true and predicted values to empty lists.

      Returns
      -------
      None


.. py:class:: Recall(num_classes, macro_averaging=False)

   Bases: :py:obj:`torch.nn.Module`

   Recall metric.

   Args
   ----
   num_classes : int
       Number of classes in the dataset.
   macro_averaging : bool
       If True, calculate the recall for each class and return the average.
       If False, calculate the recall for the entire dataset.

   Methods
   -------
   forward(y_true, y_pred)
       Compute the recall metric.

   Examples
   --------
   >>> y_true = torch.tensor([0, 1, 2, 3, 4])
   >>> y_pred = torch.randn(5, 5).argmax(dim=-1)
   >>> recall = Recall(num_classes=5)
   >>> recall(y_true, y_pred)
   0.2
   >>> recall = Recall(num_classes=5, macro_averaging=True)
   >>> recall(y_true, y_pred)
   0.2


   .. py:attribute:: num_classes


   .. py:attribute:: macro_averaging
      :value: False


   .. py:attribute:: __y_true
      :value: []


   .. py:attribute:: __y_pred
      :value: []


   .. py:method:: forward(true, logits)


   .. py:method:: compute(y_true, y_pred)


   .. py:method:: __compute_macro_averaging(y_true, y_pred)


   .. py:method:: __compute_micro_averaging(y_true, y_pred)


   .. py:method:: __returnmetric__()


   .. py:method:: __reset__()

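``Precision`` and ``Recall`` expose the same store-then-read pattern as the
metrics above. The following is a minimal sketch, not a verbatim recipe: the
tensors are illustrative, and the read-out uses ``__returnmetric__`` to mirror
the other classes (the ``Recall`` doctest above also shows ``forward`` itself
returning a value).

.. code-block:: python

   import torch

   from CollaborativeCoding.metrics import Precision, Recall

   precision = Precision(num_classes=5, macro_averaging=True)
   recall = Recall(num_classes=5, macro_averaging=True)

   y_true = torch.tensor([0, 1, 2, 3, 4, 0, 1, 2])  # true labels for one batch
   logits = torch.randn(8, 5)                       # model outputs for the batch

   precision(y_true, logits)  # store the batch in the precision metric
   recall(y_true, logits)     # store the batch in the recall metric

   print(precision.__returnmetric__())  # macro-averaged precision
   print(recall.__returnmetric__())     # macro-averaged recall

   precision.__reset__()  # clear stored values before the next evaluation
   recall.__reset__()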