OnlineProbe
class stable_pretraining.callbacks.OnlineProbe(module: LightningModule, name: str, input: str, target: str, probe: Module, loss_fn: callable = None, optimizer: str | dict | partial | Optimizer | None = None, scheduler: str | dict | partial | LRScheduler | None = None, accumulate_grad_batches: int = 1, gradient_clip_val: float = None, gradient_clip_algorithm: str = 'norm', metrics: dict | tuple | list | Metric | None = None)
Bases: TrainableCallback

Online probe for evaluating learned representations during self-supervised training.
This callback implements the standard linear evaluation protocol by training a probe (typically a linear classifier) on top of frozen features from the main model. The probe is trained simultaneously with the main model but maintains its own optimizer, scheduler, and training loop. This allows monitoring representation quality throughout training without modifying the base model.
Key features:
- Automatic gradient detachment to prevent probe gradients from affecting the main model
- Independent optimizer and scheduler management
- Support for gradient accumulation
- Mixed-precision training compatibility through automatic dtype conversion
- Metric tracking and logging
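Conceptually, each training step of the main model also runs a probe update along the lines of the sketch below. The keys "embedding" and "label" stand in for the input and target arguments; this is an illustration of the mechanism, not the actual implementation::

    import torch
    from torch import nn

    def probe_step(batch: dict, outputs: dict, probe: nn.Module,
                   loss_fn, probe_optimizer: torch.optim.Optimizer):
        # Detach so probe gradients never flow back into the main model.
        features = outputs["embedding"].detach()
        preds = probe(features)
        loss = loss_fn(preds, batch["label"])
        loss.backward()  # only probe parameters receive gradients
        probe_optimizer.step()
        probe_optimizer.zero_grad()
        return preds, loss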
Parameters:
module – The spt.LightningModule to probe.
name – Unique identifier for this probe instance. Used for logging and storing metrics/modules.
input – Key in batch dict or outputs dict containing input features to probe.
target – Key in batch dict containing ground truth target labels.
probe – The probe module to train. Can be an nn.Module instance, a callable that returns a module, or a Hydra config to instantiate.
loss_fn – Loss function for probe training (e.g., nn.CrossEntropyLoss()).
optimizer – Optimizer configuration for the probe (see the sketch after this parameter list). Can be:
- str: optimizer name (e.g., "AdamW", "SGD", "LARS")
- dict: {"type": "AdamW", "lr": 1e-3, ...}
- partial: pre-configured optimizer factory
- optimizer instance or callable
- None: uses LARS(lr=0.1, clip_lr=True, eta=0.02, exclude_bias_n_norm=True, weight_decay=0), the standard for SSL linear probes (default)
scheduler – Learning rate scheduler configuration. Can be:
- str: scheduler name (e.g., "CosineAnnealingLR", "StepLR")
- dict: {"type": "CosineAnnealingLR", "T_max": 1000, ...}
- partial: pre-configured scheduler factory
- scheduler instance or callable
- None: uses ConstantLR(factor=1.0), maintaining a constant learning rate (default)
accumulate_grad_batches – Number of batches to accumulate gradients over before each optimizer step. Default is 1 (no accumulation).
gradient_clip_val – Maximum magnitude for clipping the probe's gradients. Default is None (no clipping).
gradient_clip_algorithm – Gradient clipping algorithm, either 'norm' or 'value'. Default is 'norm'.
metrics – Metrics to track during training/validation. Can be dict, list, tuple, or single metric instance.
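As referenced above, the optimizer and scheduler arguments accept several equivalent forms. A short sketch of the documented options (the specific values are arbitrary examples)::

    from functools import partial
    import torch

    # By name (string)
    optimizer = "AdamW"
    scheduler = "CosineAnnealingLR"

    # As dict configs
    optimizer = {"type": "AdamW", "lr": 1e-3, "weight_decay": 0.0}
    scheduler = {"type": "CosineAnnealingLR", "T_max": 1000}

    # As pre-configured factories
    optimizer = partial(torch.optim.AdamW, lr=1e-3)
    scheduler = partial(torch.optim.lr_scheduler.CosineAnnealingLR, T_max=1000)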
Note

- The probe module is stored in pl_module.callbacks_modules[name]
- Metrics are stored in pl_module.callbacks_metrics[name]
- Predictions are stored in the batch dict under the key '{name}_preds'
- Loss is logged as 'train/{name}_loss'
- Metrics are logged with the prefixes 'train/{name}_' and 'eval/{name}_'
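Example

A minimal usage sketch. It assumes the main model exposes 512-dimensional features under the key "embedding" and 10-class targets under "label"; those names and sizes, as well as my_module and my_datamodule, are hypothetical placeholders::

    import torch
    import torchmetrics
    import lightning as pl
    import stable_pretraining as spt

    # my_module: your spt.LightningModule whose outputs include "embedding"
    probe_cb = spt.callbacks.OnlineProbe(
        module=my_module,
        name="linear_probe",
        input="embedding",
        target="label",
        probe=torch.nn.Linear(512, 10),
        loss_fn=torch.nn.CrossEntropyLoss(),
        metrics=torchmetrics.classification.MulticlassAccuracy(num_classes=10),
    )

    trainer = pl.Trainer(max_epochs=100, callbacks=[probe_cb])
    trainer.fit(my_module, datamodule=my_datamodule)

Per the note above, the probe's metrics then appear in the logs under the 'train/linear_probe_' and 'eval/linear_probe_' prefixes.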