stable_pretraining.callbacks

The callbacks module provides monitoring and evaluation tools for self-supervised training.

Online Monitoring

OnlineProbe(name, input, target, probe, loss_fn)

Online probe for evaluating learned representations during self-supervised training.
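The core idea behind an online probe can be sketched without the library: a small head (here a linear classifier) is trained on frozen, detached backbone features, so probe gradients never flow into the backbone. This is an illustrative numpy sketch, not the `OnlineProbe` implementation; the function name and update rule are assumptions for exposition.

```python
import numpy as np

def probe_step(features, labels, W, lr=0.1):
    """One online-probe update: fit a linear head on frozen (detached)
    backbone features via a softmax cross-entropy gradient step.
    Only the probe weights W are updated, never the backbone."""
    logits = features @ W                                   # (N, C)
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)            # softmax
    onehot = np.eye(W.shape[1])[labels]
    grad_W = features.T @ (probs - onehot) / len(labels)
    return W - lr * grad_W

# toy run: 32 frozen 8-d features, 3 classes
rng = np.random.default_rng(0)
feats = rng.normal(size=(32, 8))
labels = rng.integers(0, 3, size=32)
W = np.zeros((8, 3))
for _ in range(50):
    W = probe_step(feats, labels, W)
```

In the actual callback the probe trains alongside the SSL objective, so probe accuracy tracks representation quality over the course of pre-training.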

OnlineKNN(name, input, target, queue_length, ...)

Weighted K-Nearest Neighbors online evaluator using queue discovery.
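The weighted k-NN evaluation it performs can be illustrated standalone: neighbors drawn from a feature queue vote for a label, weighted by exponentiated cosine similarity. A minimal numpy sketch (function name and temperature value are assumptions, not the library's API):

```python
import numpy as np

def weighted_knn_predict(query, queue_feats, queue_labels, k=5, tau=0.07):
    """Weighted k-NN over a feature queue: the k most cosine-similar
    queue entries vote with weight exp(similarity / tau)."""
    q = query / np.linalg.norm(query)
    Q = queue_feats / np.linalg.norm(queue_feats, axis=1, keepdims=True)
    sims = Q @ q                          # cosine similarity to each entry
    top = np.argsort(sims)[-k:]           # indices of the k nearest
    weights = np.exp(sims[top] / tau)
    votes = np.zeros(queue_labels.max() + 1)
    for w, lbl in zip(weights, queue_labels[top]):
        votes[lbl] += w
    return int(np.argmax(votes))
```

In the callback, the queue is filled with embeddings seen during training, so the evaluation requires no extra forward passes.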

OnlineWriter(names, path, during[, ...])

Writes specified batch data to disk during training and validation.

RankMe(name, target, queue_length, target_shape)

RankMe (effective rank) monitor using queue discovery.
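The RankMe statistic itself is simple to state: the exponential of the Shannon entropy of the normalized singular values of the embedding matrix, so a full-rank isotropic embedding scores near its dimension and a collapsed one scores near 1. A minimal numpy sketch of that formula (not the callback's code):

```python
import numpy as np

def rankme(Z, eps=1e-7):
    """RankMe effective rank of an (N, D) embedding matrix Z:
    exp of the entropy of the normalized singular value distribution."""
    s = np.linalg.svd(Z, compute_uv=False)
    p = s / (s.sum() + eps) + eps          # normalized singular values
    return float(np.exp(-np.sum(p * np.log(p))))
```

For example, an identity-like embedding in 4 dimensions yields an effective rank of about 4, while a rank-1 (collapsed) embedding yields about 1.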

LiDAR(name, target, queue_length, target_shape)

LiDAR (Linear Discriminant Analysis Rank) monitor using queue discovery.
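The quantity LiDAR monitors can be sketched in simplified form: build between-group and (regularized) within-group scatter matrices from groups of augmented views of the same sample, then take the effective rank of the resulting discriminant spectrum. This is an illustrative simplification under assumed conventions, not the callback's implementation:

```python
import numpy as np

def lidar(groups, delta=1e-4):
    """Simplified LiDAR sketch. `groups` is a list of (n_i, D) arrays,
    each holding embeddings of augmented views of one sample. Returns
    the effective rank (exp-entropy of the eigenvalue distribution) of
    the generalized between/within-group discriminant problem."""
    D = groups[0].shape[1]
    mu = np.concatenate(groups).mean(axis=0)
    Sb = np.zeros((D, D))          # between-group scatter
    Sw = np.zeros((D, D))          # within-group scatter
    for g in groups:
        m = g.mean(axis=0)
        Sb += np.outer(m - mu, m - mu)
        c = g - m
        Sw += c.T @ c / len(g)
    Sb /= len(groups)
    Sw = Sw / len(groups) + delta * np.eye(D)   # regularize within-scatter
    # eigenvalues of Sw^{-1} Sb, clipped to be nonnegative
    lam = np.clip(np.linalg.eigvals(np.linalg.solve(Sw, Sb)).real, 0, None)
    p = lam / lam.sum() + 1e-7
    return float(np.exp(-np.sum(p * np.log(p))))
```

Unlike RankMe, this uses the augmentation grouping, so it measures how many directions actually discriminate between samples rather than raw spectral spread.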

Training Utilities

EarlyStopping([mode, milestones, ...])

Early stopping mechanism with support for metric milestones and patience.
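The combination of patience and milestones can be illustrated with a toy class (the class name, method, and milestone encoding here are assumptions for exposition, not the callback's API): training stops either when the monitored metric fails to improve for `patience` consecutive checks, or when it misses a hard target set for a specific epoch.

```python
class EarlyStoppingSketch:
    """Toy patience + milestone early stopping for a monitored metric."""

    def __init__(self, mode="max", patience=3, milestones=None):
        self.mode = mode
        self.patience = patience
        self.milestones = milestones or {}   # {epoch: required_value}
        self.best = None
        self.bad_checks = 0

    def should_stop(self, epoch, value):
        better = self.best is None or (
            value > self.best if self.mode == "max" else value < self.best)
        if better:
            self.best, self.bad_checks = value, 0
        else:
            self.bad_checks += 1
        # milestone check: metric must have reached the target by this epoch
        target = self.milestones.get(epoch)
        if target is not None:
            missed = value < target if self.mode == "max" else value > target
            if missed:
                return True
        # patience check: too many checks without improvement
        return self.bad_checks >= self.patience
```

Milestones catch runs that are clearly off-track early, while patience handles plateaus late in training.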

TrainerInfo()

Links the trainer to the DataModule.

LoggingCallback()

Displays validation metrics in a color-coded formatted table.

ModuleSummary()

Logs detailed module parameter statistics in a formatted table.

Model Persistence

SklearnCheckpoint()

Callback for saving and loading sklearn models in PyTorch Lightning checkpoints.
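The underlying problem is that sklearn estimators have no `state_dict()`, so they cannot ride along in a Lightning checkpoint untouched. A minimal sketch of the save/load idea using stdlib `pickle` and plain functions (the hook names mirror Lightning's callback hooks, but the signatures and checkpoint key here are assumptions):

```python
import pickle

def on_save_checkpoint(sklearn_attrs, checkpoint):
    """Serialize non-PyTorch estimators into the checkpoint dict,
    since they cannot go through state_dict()."""
    checkpoint["sklearn_models"] = {
        name: pickle.dumps(est) for name, est in sklearn_attrs.items()
    }

def on_load_checkpoint(checkpoint):
    """Restore the pickled estimators from the checkpoint dict."""
    return {
        name: pickle.loads(blob)
        for name, blob in checkpoint.get("sklearn_models", {}).items()
    }
```

Because the checkpoint is an ordinary dict, the pickled blobs survive `torch.save`/`torch.load` alongside the module weights.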

Evaluation

ImageRetrieval(pl_module, name, input, ...)

Image Retrieval evaluator for self-supervised learning.
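The retrieval evaluation it reports can be sketched independently of the callback: rank a gallery by cosine similarity to each query embedding and score recall@k, i.e. the fraction of queries whose top-k results contain a same-label item. An illustrative numpy version (function name and metric choice are assumptions):

```python
import numpy as np

def retrieval_recall_at_k(queries, gallery, q_labels, g_labels, k=1):
    """Rank the gallery by cosine similarity to each query and return
    the fraction of queries with a same-label item in the top k."""
    Qn = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    Gn = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = Qn @ Gn.T                         # (n_queries, n_gallery)
    topk = np.argsort(-sims, axis=1)[:, :k]  # best-ranked gallery indices
    hits = [(g_labels[idx] == lbl).any() for idx, lbl in zip(topk, q_labels)]
    return float(np.mean(hits))
```

Retrieval metrics like this evaluate the embedding space directly, with no trained head, which complements the probe- and k-NN-based monitors above.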