Module#
- class stable_pretraining.module.Module(*args, forward: callable = None, hparams: dict = None, **kwargs)[source]#
Bases: LightningModule

PyTorch Lightning module using manual optimization with multi-optimizer support.
Core usage
- Provide a custom forward(self, batch, stage) via the forward argument at init.
- During training, forward must return a dict with state["loss"] (a single joint loss). When multiple optimizers are configured, this joint loss is used for all optimizers.
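A minimal sketch of such a forward, assuming a classification-style batch with "image" and "label" keys and assuming that extra keyword arguments passed at init are attached as submodules (both assumptions for illustration); only the (batch, stage) signature, the state["loss"] contract, and self.optim come from this page:

import torch
from functools import partial
from stable_pretraining.module import Module

def forward(self, batch, stage):
    # The (batch, stage) signature and the requirement to return a dict with
    # "loss" during training come from the docstring above; the batch keys and
    # the self.backbone / self.classifier attributes are illustrative assumptions.
    state = dict(batch)
    state["embedding"] = self.backbone(batch["image"])
    state["logits"] = self.classifier(state["embedding"])
    if stage == "fit":
        # Single joint loss, shared by all configured optimizers.
        state["loss"] = torch.nn.functional.cross_entropy(state["logits"], batch["label"])
    return state

# Assumed construction: extra kwargs treated as submodules (illustrative only).
module = Module(
    forward=forward,
    backbone=torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 128)),
    classifier=torch.nn.Linear(128, 10),
)
module.optim = {"optimizer": partial(torch.optim.AdamW, lr=1e-3)}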
Optimizer configuration (self.optim)
- Single optimizer:
  {"optimizer": str|dict|partial|Class, "scheduler": <see below>, "interval": "step"|"epoch", "frequency": int}
- Optimizer accepted forms (see the example after this block):
  - string name (e.g., "AdamW", "SGD") from torch.optim
  - dict: {"type": "AdamW", "lr": 1e-3, ...}
  - functools.partial: partial(torch.optim.AdamW, lr=1e-3)
  - optimizer class: torch.optim.AdamW
- Multiple optimizers:
  {
      name: {
          "modules": "regex",                   # assign params by module-name pattern (children inherit)
          "optimizer": str|dict|partial|Class,  # optimizer factory (same accepted forms as above)
          "scheduler": str|dict|partial|Class,  # flexible scheduler config (see below)
          "interval": "step"|"epoch",           # scheduler interval
          "frequency": int,                     # optimizer step frequency
          "monitor": str,                       # (optional) for ReduceLROnPlateau; alternatively set inside the scheduler dict
      },
      ...
  }
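Each accepted optimizer form can be written, inside the module, roughly as follows (lr values are illustrative; self.optim follows the convention used in the Example below):

from functools import partial
import torch

# Four accepted ways of requesting AdamW in the single-optimizer case:
self.optim = {"optimizer": "AdamW"}                               # string name from torch.optim
self.optim = {"optimizer": {"type": "AdamW", "lr": 1e-3}}         # dict: {"type": ..., **kwargs}
self.optim = {"optimizer": partial(torch.optim.AdamW, lr=1e-3)}   # functools.partial
self.optim = {"optimizer": torch.optim.AdamW}                     # optimizer class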
Parameter assignment (multi-optimizer)
- Modules are matched by regex on their qualified name. Children inherit the parent's assignment unless they match a more specific pattern.
- Only direct parameters of each module are collected to avoid duplication.
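A minimal sketch of this assignment rule, assuming regex matching against named_modules() and recurse=False parameter collection; it is illustrative only, and the precedence handling for overlapping patterns in the actual library may differ:

import re
import torch

def assign_parameters(model: torch.nn.Module, patterns: dict):
    # patterns maps optimizer name -> regex over qualified module names.
    groups = {name: [] for name in patterns}
    assignment = {}  # qualified module name -> optimizer name (or None)
    for mod_name, mod in model.named_modules():
        owner = None
        for opt_name, pattern in patterns.items():
            if re.search(pattern, mod_name):
                owner = opt_name  # a direct match claims this module
        if owner is None and "." in mod_name:
            owner = assignment.get(mod_name.rsplit(".", 1)[0])  # inherit from parent
        assignment[mod_name] = owner
        if owner is not None:
            # Only direct parameters, so nothing is collected twice.
            groups[owner].extend(mod.parameters(recurse=False))
    return groups

Under this sketch's matching rule, assign_parameters(model, {"encoder_opt": "encoder", "head_opt": ".*head$"}) reproduces the assignment table shown in the Example below.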
Schedulers (flexible)
- Accepted forms: string name (e.g., "CosineAnnealingLR", "StepLR"), dict with {"type": "...", ...}, functools.partial, or a scheduler class.
- Smart defaults are applied when params are omitted for common schedulers (CosineAnnealingLR, OneCycleLR, StepLR, ExponentialLR, ReduceLROnPlateau, LinearLR, ConstantLR).
- For ReduceLROnPlateau, a monitor key is added (default: "val_loss"). You may specify monitor either alongside the optimizer config (top level) or inside the scheduler dict itself.
- The resulting Lightning scheduler dict includes interval and frequency (or scheduler_frequency).
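For instance, any of the following scheduler configurations would be accepted under the forms listed above (values are illustrative):

from functools import partial
import torch

# String form, relying on smart defaults:
self.optim = {"optimizer": "AdamW", "scheduler": "CosineAnnealingLR"}

# functools.partial form:
self.optim = {
    "optimizer": "AdamW",
    "scheduler": partial(torch.optim.lr_scheduler.StepLR, step_size=30),
}

# Dict form with monitor placed inside the scheduler dict:
self.optim = {
    "optimizer": "AdamW",
    "scheduler": {"type": "ReduceLROnPlateau", "patience": 5, "monitor": "val_loss"},
}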
Training loop behavior (see the sketch after this list)
- Manual optimization (automatic_optimization = False).
- Gradient accumulation: scales the loss by 1/N, where N = Trainer.accumulate_grad_batches, and steps on the boundary.
- Per-optimizer step frequency: each optimizer steps only when its frequency boundary is met (in addition to the accumulation boundary).
- Gradient clipping: uses the Trainer's gradient_clip_val and gradient_clip_algorithm before each step.
- Returns the state dict from forward unchanged for logging/inspection.
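Roughly, the loop described above behaves like the following sketch (illustrative only; self.optim_frequencies and the exact bookkeeping are hypothetical, while manual_backward, optimizers, and clip_gradients are standard LightningModule calls):

def training_step_sketch(self, batch, batch_idx):
    # Illustrative sketch of the listed behavior; not the library's actual code.
    state = self.forward(batch, stage="fit")
    accumulate = self.trainer.accumulate_grad_batches
    self.manual_backward(state["loss"] / accumulate)              # scale loss by 1/N
    if (batch_idx + 1) % accumulate == 0:                         # accumulation boundary
        optimizers = self.optimizers()
        if not isinstance(optimizers, (list, tuple)):
            optimizers = [optimizers]
        for i, opt in enumerate(optimizers):
            if (batch_idx + 1) % self.optim_frequencies[i] == 0:  # hypothetical per-optimizer frequency
                self.clip_gradients(
                    opt,
                    gradient_clip_val=self.trainer.gradient_clip_val,
                    gradient_clip_algorithm=self.trainer.gradient_clip_algorithm,
                )
                opt.step()
                opt.zero_grad()
    return state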
- configure_optimizers()[source]#
Configure optimizers and schedulers for manual optimization.
- Returns:
Optimizer configuration with optional learning rate scheduler. For a single optimizer: a dict with optimizer and lr_scheduler. For multiple optimizers: a tuple of (optimizers, schedulers).
- Return type:
dict or tuple
Example
Multi-optimizer configuration with module pattern matching and schedulers:
>>> # Simple single optimizer with scheduler
>>> self.optim = {
...     "optimizer": partial(torch.optim.AdamW, lr=1e-3),
...     "scheduler": "CosineAnnealingLR",  # Uses smart defaults
...     "interval": "step",
...     "frequency": 1,
... }

>>> # Multi-optimizer with custom scheduler configs
>>> self.optim = {
...     "encoder_opt": {
...         "modules": "encoder",  # Matches 'encoder' and all children
...         "optimizer": {"type": "AdamW", "lr": 1e-3},
...         "scheduler": {
...             "type": "OneCycleLR",
...             "max_lr": 1e-3,
...             "total_steps": 10000,
...         },
...         "interval": "step",
...         "frequency": 1,
...     },
...     "head_opt": {
...         "modules": ".*head$",  # Matches modules ending with 'head'
...         "optimizer": "SGD",
...         "scheduler": {
...             "type": "ReduceLROnPlateau",
...             "mode": "max",
...             "patience": 5,
...             "factor": 0.5,
...         },
...         "monitor": "val_accuracy",  # Required for ReduceLROnPlateau
...         "interval": "epoch",
...         "frequency": 2,
...     },
... }
With model structure:
- encoder -> encoder_opt (matches "encoder")
- encoder.layer1 -> encoder_opt (inherits from parent)
- encoder.layer1.conv -> encoder_opt (inherits from encoder.layer1)
- classifier_head -> head_opt (matches ".*head$")
- classifier_head.linear -> head_opt (inherits from parent)
- decoder -> None (no match, no parameters collected)
- forward(*args, **kwargs)[source]#
Same as torch.nn.Module.forward().
- Parameters:
*args – Whatever you decide to pass into the forward method.
**kwargs – Keyword arguments are also possible.
- Returns:
Your model’s output
- named_parameters(with_callbacks=True, prefix: str = '', recurse: bool = True)[source]#
Override to globally exclude callback-related parameters.
Excludes parameters that belong to self.callbacks_modules or self.callbacks_metrics. This prevents accidental optimization of callback/metric internals, even if external code calls self.parameters() or self.named_parameters() directly.
- Parameters:
with_callbacks (bool, optional) – If False, excludes callback parameters. Defaults to True.
prefix (str, optional) – Prefix to prepend to parameter names. Defaults to “”.
recurse (bool, optional) – If True, yields parameters of this module and all submodules. If False, yields only direct parameters. Defaults to True.
- Yields:
tuple[str, torch.nn.Parameter] – Name and parameter pairs.
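A small usage sketch based on the parameter description above, with module assumed to be an instance of this class:

import torch

# Per the with_callbacks description, passing False excludes callback/metric
# parameters, so an optimizer built from this iterator never touches
# callback internals.
params = [p for _, p in module.named_parameters(with_callbacks=False)]
optimizer = torch.optim.SGD(params, lr=0.1)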
- on_save_checkpoint(checkpoint)[source]#
Offload checkpoint tensors to CPU to reduce GPU memory usage during save.
This method intercepts the checkpoint saving process and recursively moves all PyTorch tensors (model weights, optimizer states, scheduler states) from GPU to CPU before writing to disk. This prevents GPU OOM issues when checkpointing large models (e.g., 2B+ parameters with optimizer states).
- Parameters:
checkpoint (dict) – Lightning checkpoint dictionary containing:
- state_dict: Model parameters (moved to CPU)
- optimizer_states: Optimizer state dicts (moved to CPU)
- lr_schedulers: LR scheduler states (moved to CPU)
- Other keys: Custom objects, metadata (left unchanged)
- Behavior:
Processes standard Lightning checkpoint keys (state_dict, optimizer_states, lr_schedulers)
Recursively traverses dicts, lists, and tuples to find tensors
Moves all torch.Tensor objects to CPU
Skips custom objects (returns unchanged)
Logs GPU memory freed and processing time
Non-destructive: Checkpoint loading/resuming works normally
- Side Effects:
Modifies checkpoint dict in-place (tensors moved to CPU)
Temporarily increases CPU memory during offload
Adds ~2-5 seconds to checkpoint save time for 2B models
Frees ~8-12GB GPU memory for 2B model + optimizer states
- Custom Objects:
Custom objects in the checkpoint are NOT modified and will be logged as warnings. These include: custom classes, numpy arrays, primitives, etc. They are safely skipped and preserved in the checkpoint.
- Raises:
Exception – If tensor offload fails for any checkpoint key, logs error but allows checkpoint save to proceed (non-fatal).
Example
For a 2B parameter model with AdamW optimizer:
- Before: ~12GB GPU memory spike on rank 0 during checkpoint save
- After: ~0.2GB GPU memory spike, ~10-12GB freed
- Checkpoint save time: +2-3 seconds
- Resume from checkpoint: works normally, tensors auto-loaded to GPU
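A minimal sketch of the recursive offload described above, assuming (without claiming) that it mirrors the library's implementation:

import torch

def offload_to_cpu(obj):
    # Move tensors found in dicts, lists, and tuples to CPU; leave custom
    # objects, numpy arrays, and primitives untouched.
    if isinstance(obj, torch.Tensor):
        return obj.cpu()
    if isinstance(obj, dict):
        return {k: offload_to_cpu(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(offload_to_cpu(v) for v in obj)
    return obj

def on_save_checkpoint_sketch(self, checkpoint):
    # Only the standard Lightning checkpoint keys are processed.
    for key in ("state_dict", "optimizer_states", "lr_schedulers"):
        if key in checkpoint:
            checkpoint[key] = offload_to_cpu(checkpoint[key])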
Notes
Only rank 0 saves checkpoints in DDP, so only rank 0 sees memory benefit
Does not affect checkpoint contents or ability to resume training
Safe for standard PyTorch/Lightning use cases
If using FSDP/DeepSpeed, consider strategy-specific checkpointing instead
See also
PyTorch Lightning ModelCheckpoint callback
torch.Tensor.cpu() for device transfer behavior
- parameters(with_callbacks=True, recurse: bool = True)[source]#
Override to route through the filtered named_parameters() implementation.
- predict_step(batch, batch_idx)[source]#
Step function called during predict(). By default, it calls forward(). Override to add any processing logic.

The predict_step() is used to scale inference on multi-devices.

To prevent an OOM error, it is possible to use the BasePredictionWriter callback to write the predictions to disk or database after each batch or on epoch end.

The BasePredictionWriter should be used while using a spawn based accelerator. This happens for Trainer(strategy="ddp_spawn") or training on 8 TPU cores with Trainer(accelerator="tpu", devices=8), as predictions won't be returned.
- Parameters:
batch – The output of your data iterable, normally a DataLoader.
batch_idx – The index of this batch.
dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)
- Returns:
Predicted output (optional).
Example
class MyModel(LightningModule):

    def predict_step(self, batch, batch_idx, dataloader_idx=0):
        return self(batch)

dm = ...
model = MyModel()
trainer = Trainer(accelerator="gpu", devices=2)
predictions = trainer.predict(model, dm)
- test_step(batch, batch_idx)[source]#
Operates on a single batch of data from the test set. In this step you’d normally generate examples or calculate anything of interest such as accuracy.
- Parameters:
batch – The output of your data iterable, normally a DataLoader.
batch_idx – The index of this batch.
dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)
- Returns:
Tensor - The loss tensor
dict - A dictionary. Can include any keys, but must include the key 'loss'.
None - Skip to the next batch.
# if you have one test dataloader:
def test_step(self, batch, batch_idx): ...

# if you have multiple test dataloaders:
def test_step(self, batch, batch_idx, dataloader_idx=0): ...
Examples:
# CASE 1: A single test dataset
def test_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    test_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'test_loss': loss, 'test_acc': test_acc})
If you pass in multiple test dataloaders, test_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple test dataloaders
def test_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    x, y = batch

    # implement your own
    out = self(x)
    if dataloader_idx == 0:
        loss = self.loss0(out, y)
    else:
        loss = self.loss1(out, y)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs separately for each dataloader
    self.log_dict({f"test_loss_{dataloader_idx}": loss, f"test_acc_{dataloader_idx}": acc})
Note
If you don’t need to test you don’t need to implement this method.
Note
When the test_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of the test epoch, the model goes back to training mode and gradients are enabled.
- training_step(batch, batch_idx)[source]#
Manual optimization training step with support for multiple optimizers.
Expected output from forward during training (stage="fit"):
- state["loss"]: torch.Tensor - Single joint loss for all optimizers
When multiple optimizers are configured, the same loss is used for all of them. Each optimizer updates its assigned parameters based on gradients from this joint loss.
- validation_step(batch, batch_idx)[source]#
Operates on a single batch of data from the validation set. In this step you might generate examples or calculate anything of interest like accuracy.
- Parameters:
batch – The output of your data iterable, normally a DataLoader.
batch_idx – The index of this batch.
dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)
- Returns:
Tensor - The loss tensor
dict - A dictionary. Can include any keys, but must include the key 'loss'.
None - Skip to the next batch.
# if you have one val dataloader:
def validation_step(self, batch, batch_idx): ...

# if you have multiple val dataloaders:
def validation_step(self, batch, batch_idx, dataloader_idx=0): ...
Examples:
# CASE 1: A single validation dataset
def validation_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'val_loss': loss, 'val_acc': val_acc})
If you pass in multiple val dataloaders, validation_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple validation dataloaders
def validation_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    x, y = batch

    # implement your own
    out = self(x)
    if dataloader_idx == 0:
        loss = self.loss0(out, y)
    else:
        loss = self.loss1(out, y)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs separately for each dataloader
    self.log_dict({f"val_loss_{dataloader_idx}": loss, f"val_acc_{dataloader_idx}": acc})
Note
If you don’t need to validate you don’t need to implement this method.
Note
When the validation_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of validation, the model goes back to training mode and gradients are enabled.