Computes overall accuracy, balanced accuracy, and per-class sensitivity for predicted vs. true class labels. Optionally excludes samples assigned to the "Other" class from accuracy calculations.

Usage

report_accuracy(
  predictions,
  truth = "lymphgen",
  pred = "DLBCLone_io",
  per_group = FALSE,
  metric = "accuracy",
  verbose = FALSE,
  drop_other = TRUE,
  skip_F1 = FALSE
)

Arguments

predictions

Data frame containing predicted and true class labels.

truth

Name of the column with true class labels (default: "lymphgen").

pred

Name of the column with predicted class labels (default: "DLBCLone_io").

per_group

Logical; if TRUE, computes per-group accuracy metrics.

metric

Character; type of accuracy to report (currently only "accuracy" is supported).

verbose

Logical; if TRUE, prints additional messages during computation.

drop_other

Logical; if TRUE, excludes samples assigned to the "Other" class from the no_other accuracy (default: TRUE).

skip_F1

Logical; if TRUE, skips computation of F1 scores.

Value

A list with:

no_other

Accuracy excluding samples assigned to "Other"

per_class

Mean of the per-class balanced accuracy values

per_class_sensitivity

Sensitivity per class

overall

Overall accuracy including all samples

Details

  • Uses confusion matrices to compute accuracy metrics.

  • Excludes "Other" class for no_other accuracy.

  • Returns per-class metrics for further analysis.
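The confusion-matrix arithmetic described above can be sketched as follows. This is a minimal illustration with made-up labels, not the function's actual internals; class names and the `truth`/`pred` vectors are purely hypothetical:

```r
# Toy predicted vs. true labels (illustrative only)
truth <- c("EZB", "EZB", "MCD", "MCD", "ST2", "Other")
pred  <- c("EZB", "MCD", "MCD", "MCD", "ST2", "Other")

# Confusion matrix: rows = true class, columns = predicted class
cm <- table(truth = truth, pred = pred)

# Overall accuracy: correct predictions over all samples
overall <- sum(diag(cm)) / sum(cm)

# "no_other"-style accuracy: drop samples assigned to "Other" first
keep <- pred != "Other"
no_other <- mean(truth[keep] == pred[keep])

# Per-class sensitivity: true positives over all samples truly in the class
sensitivity <- diag(cm) / rowSums(cm)

# Per-class balanced accuracy: mean of sensitivity and specificity
specificity <- sapply(seq_len(nrow(cm)), function(k) {
  tn <- sum(cm[-k, -k])   # true negatives for class k
  fp <- sum(cm[-k, k])    # other classes predicted as class k
  tn / (tn + fp)
})
balanced <- (sensitivity + specificity) / 2
```

With these toy labels, `overall` is 5/6, `no_other` is 0.8, and the first class ("EZB") has sensitivity 0.5 and balanced accuracy 0.75.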

Examples

if (FALSE) { # \dontrun{
result <- report_accuracy(predictions_df)
result$overall
result$per_class
} # }