Utilities & Logging¶
Utility functions for model inspection, discovery, reproducibility, and library-wide logging.
logger¶
Shared loguru logger instance used throughout AutoTimm. All internal logging (trainer messages, export progress, interpretation output) goes through this logger.
Usage Examples¶
Basic Usage¶
from autotimm import logger
logger.info("Training started")
logger.warning("Low GPU memory")
logger.success("Model exported successfully")
logger.error("Checkpoint not found")
Adjusting Log Level¶
import sys
from autotimm.core.logging import logger
# Suppress info messages (show only warnings and above)
logger.remove()
logger.add(sys.stderr, level="WARNING")
# Enable debug messages
logger.remove()
logger.add(sys.stderr, level="DEBUG")
Default Format¶
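The default sink and format string are configured in autotimm.core.logging. The snippet below is only a sketch of how to override them using loguru's standard placeholder syntax; the format string shown is illustrative, not AutoTimm's actual default.
import sys
from autotimm import logger
logger.remove()
logger.add(
    sys.stderr,
    format="{time:HH:mm:ss} | {level: <8} | {message}",  # loguru placeholders
    level="INFO",
)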
log_table¶
Log a formatted ASCII table using loguru.
Usage Examples¶
from autotimm.core.logging import log_table
log_table(
title="Model Comparison",
headers=["Model", "Params", "Accuracy"],
rows=[
["ResNet-18", "11.7M", "94.2%"],
["ResNet-50", "25.6M", "96.1%"],
["EfficientNet-B0", "5.3M", "95.8%"],
],
)
Parameters¶
| Parameter | Type | Description |
|---|---|---|
| `title` | `str` | Table title displayed above the table |
| `headers` | `list[str]` | Column header names |
| `rows` | `list[list[str]]` | List of rows; each row is a list of string values |
seed_everything¶
Set random seeds for reproducibility across all libraries.
API Reference¶
autotimm.seed_everything ¶
Set random seeds for reproducibility across all libraries.
Seeds Python's random, NumPy, PyTorch (CPU and CUDA), and sets environment variables for deterministic behavior.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `seed` | `int` | Random seed value. Default is 42. | `42` |
| `deterministic` | `bool` | If `True`, enable deterministic algorithms for full reproducibility (may reduce performance). | `False` |
Returns:
| Type | Description |
|---|---|
| `int` | The seed value that was set. |
Example
>>> seed_everything(42)
42
>>> # For fully deterministic training (slower but reproducible)
>>> seed_everything(42, deterministic=True)
42
Note
Setting deterministic=True may reduce performance. Use it only when
full reproducibility is required (e.g., for research or debugging).
Source code in src/autotimm/core/utils.py
Usage Examples¶
Default Seed¶
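Assuming only the documented defaults (seed 42, non-deterministic mode), a minimal call is:
import autotimm as at  # recommended alias
at.seed_everything()  # returns the seed that was set, 42 by default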
Deterministic Mode¶
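For fully reproducible (and slower) runs, as described in the note above:
import autotimm as at  # recommended alias
at.seed_everything(42, deterministic=True)  # enables deterministic cuDNN/torch algorithms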
Custom Seed¶
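Any integer seed can be passed explicitly:
import autotimm as at  # recommended alias
at.seed_everything(seed=123)  # returns 123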
Parameters¶
| Parameter | Type | Default | Description |
|---|---|---|---|
| `seed` | `int` | `42` | Random seed value |
| `deterministic` | `bool` | `False` | Enable deterministic algorithms (may reduce performance) |
Returns¶
| Type | Description |
|---|---|
| `int` | The seed value that was set |
What Gets Seeded¶
- Python's `random` module
- NumPy's random number generator
- PyTorch (CPU & all CUDA devices)
- `PYTHONHASHSEED` environment variable
- cuDNN backend settings (when `deterministic=True`)
- `torch.use_deterministic_algorithms()` (when `deterministic=True`)
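For orientation, here is a minimal sketch of the steps listed above. It is an illustrative reimplementation, not the library's actual code (which lives in src/autotimm/core/utils.py):
import os
import random
import numpy as np
import torch

def seed_everything_sketch(seed: int = 42, deterministic: bool = False) -> int:
    random.seed(seed)                         # Python's random module
    np.random.seed(seed)                      # NumPy RNG
    torch.manual_seed(seed)                   # PyTorch CPU
    torch.cuda.manual_seed_all(seed)          # all CUDA devices
    os.environ["PYTHONHASHSEED"] = str(seed)  # hash seed for Python
    if deterministic:
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
        torch.use_deterministic_algorithms(True)
    else:
        torch.backends.cudnn.benchmark = True  # faster, non-deterministic kernels
    return seed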
Notes¶
- Setting `deterministic=True` may reduce performance due to slower deterministic algorithm implementations.
- When `deterministic=False` (default), cuDNN benchmark mode is enabled for faster training.
- All task classes support `seed` and `deterministic` parameters. By default `seed=None` (no seeding). Set `seed=42` explicitly for reproducibility (see the illustration below).
- If `seed=None` is passed to a task class, `seed_everything()` is not called, and a warning is emitted if `deterministic=True`.
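As a hedged illustration of the last two notes (the exact task-class signature may differ from this sketch), passing the reproducibility options through ImageClassifier would look like:
import autotimm as at  # recommended alias
model = at.ImageClassifier(
    backbone="resnet18",
    num_classes=10,
    metrics=metrics,    # metric configuration defined elsewhere
    seed=42,            # explicit seed; default is None (no seeding)
    deterministic=False,
)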
count_parameters¶
Count the number of parameters in a model.
API Reference¶
autotimm.count_parameters ¶
Return the number of parameters in a model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `Module` | A PyTorch `nn.Module` whose parameters are counted. | required |
| `trainable_only` | `bool` | If `True`, count only parameters with `requires_grad=True`; otherwise count all parameters. | `True` |
Source code in src/autotimm/core/utils.py
Usage Examples¶
Trainable Parameters¶
import autotimm as at # recommended alias
model = at.ImageClassifier(
backbone="resnet50",
num_classes=10,
metrics=metrics,  # metric configuration defined elsewhere
)
trainable = at.count_parameters(model)
print(f"Trainable parameters: {trainable:,}")
All Parameters¶
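Reusing the model from the previous snippet, include frozen parameters as well by passing trainable_only=False:
import autotimm as at  # recommended alias
total = at.count_parameters(model, trainable_only=False)
print(f"Total parameters: {total:,}")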
Backbone Only¶
backbone = at.create_backbone("resnet50")
print(f"Backbone parameters: {at.count_parameters(backbone):,}")
Parameters¶
| Parameter | Type | Default | Description |
|---|---|---|---|
| `model` | `nn.Module` | Required | PyTorch model |
| `trainable_only` | `bool` | `True` | Count only trainable params |
Returns¶
| Type | Description |
|---|---|
| `int` | Number of parameters |
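The count follows the standard PyTorch idiom; a minimal sketch (illustrative, not the library's exact code):
import torch.nn as nn

def count_parameters_sketch(model: nn.Module, trainable_only: bool = True) -> int:
    return sum(
        p.numel()
        for p in model.parameters()
        if p.requires_grad or not trainable_only
    )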
list_optimizers¶
List available optimizers from torch and timm.
API Reference¶
autotimm.list_optimizers ¶
List available optimizers from torch and optionally timm.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `include_timm` | `bool` | If `True`, also include optimizers provided by timm. | `True` |
Returns:
| Type | Description |
|---|---|
| `dict[str, list[str]]` | Dictionary with keys `"torch"` and optionally `"timm"`, each mapping to a list of optimizer names. |
Example
>>> optimizers = list_optimizers()
>>> print(optimizers["torch"])
['adadelta', 'adagrad', 'adam', 'adamax', 'adamw', 'asgd', ...]
>>> print(optimizers.get("timm", []))
['adabelief', 'adafactor', 'adahessian', 'adamp', ...]
Source code in src/autotimm/core/utils.py
Usage Examples¶
All Optimizers¶
import autotimm as at # recommended alias
optimizers = at.list_optimizers()
print("Torch optimizers:", optimizers["torch"])
print("Timm optimizers:", optimizers.get("timm", []))
Torch Only¶
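To restrict discovery to PyTorch's built-in optimizers, pass include_timm=False:
import autotimm as at  # recommended alias
optimizers = at.list_optimizers(include_timm=False)
print("Torch optimizers:", optimizers["torch"])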
Parameters¶
| Parameter | Type | Default | Description |
|---|---|---|---|
| `include_timm` | `bool` | `True` | Include timm optimizers |
Returns¶
| Type | Description |
|---|---|
| `dict[str, list[str]]` | Dict with `"torch"` and `"timm"` keys |
Available Optimizers¶
Torch:
- `adamw` - AdamW
- `adam` - Adam
- `sgd` - SGD
- `rmsprop` - RMSprop
- `adagrad` - Adagrad
Timm:
- `adamp` - AdamP
- `sgdp` - SGDP
- `adabelief` - AdaBelief
- `radam` - RAdam
- `adahessian` - Adahessian
- `lamb` - LAMB
- `lars` - LARS
- `madgrad` - MADGRAD
- `novograd` - NovoGrad
list_schedulers¶
List available learning rate schedulers from torch and timm.
API Reference¶
autotimm.list_schedulers ¶
List available learning rate schedulers from torch and optionally timm.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `include_timm` | `bool` | If `True`, also include schedulers provided by timm. | `True` |
Returns:
| Type | Description |
|---|---|
| `dict[str, list[str]]` | Dictionary with keys `"torch"` and optionally `"timm"`, each mapping to a list of scheduler names. |
Example
>>> schedulers = list_schedulers()
>>> print(schedulers["torch"])
['chainedscheduler', 'constantlr', 'cosineannealinglr', ...]
>>> print(schedulers.get("timm", []))
['cosinelrscheduler', 'multisteplrscheduler', ...]
Source code in src/autotimm/core/utils.py
Usage Examples¶
All Schedulers¶
import autotimm as at # recommended alias
schedulers = at.list_schedulers()
print("Torch schedulers:", schedulers["torch"])
print("Timm schedulers:", schedulers.get("timm", []))
Torch Only¶
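Likewise, to list only PyTorch's built-in schedulers, pass include_timm=False:
import autotimm as at  # recommended alias
schedulers = at.list_schedulers(include_timm=False)
print("Torch schedulers:", schedulers["torch"])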
Parameters¶
| Parameter | Type | Default | Description |
|---|---|---|---|
| `include_timm` | `bool` | `True` | Include timm schedulers |
Returns¶
| Type | Description |
|---|---|
| `dict[str, list[str]]` | Dict with `"torch"` and `"timm"` keys |
Available Schedulers¶
Schedulers are dynamically discovered from PyTorch and timm. Common schedulers include:
PyTorch (15 total):
- `chainedscheduler` - ChainedScheduler
- `constantlr` - ConstantLR
- `cosineannealinglr` - CosineAnnealingLR
- `cosineannealingwarmrestarts` - CosineAnnealingWarmRestarts
- `cycliclr` - CyclicLR
- `exponentiallr` - ExponentialLR
- `lambdalr` - LambdaLR
- `linearlr` - LinearLR
- `multiplicativelr` - MultiplicativeLR
- `multisteplr` - MultiStepLR
- `onecyclelr` - OneCycleLR
- `polynomiallr` - PolynomialLR
- `reducelronplateau` - ReduceLROnPlateau
- `sequentiallr` - SequentialLR
- `steplr` - StepLR
Timm (6 total):
- `cosinelrscheduler` - CosineLRScheduler
- `multisteplrscheduler` - MultiStepLRScheduler
- `plateaulrscheduler` - PlateauLRScheduler
- `polylrscheduler` - PolyLRScheduler
- `steplrscheduler` - StepLRScheduler
- `tanhlrscheduler` - TanhLRScheduler
Full Example¶
import autotimm as at # recommended alias
# List available options
print("=== Available Optimizers ===")
optimizers = at.list_optimizers()
for source, names in optimizers.items():
print(f"{source}: {', '.join(names)}")
print("\n=== Available Schedulers ===")
schedulers = at.list_schedulers()
for source, names in schedulers.items():
print(f"{source}: {', '.join(names)}")
print("\n=== Available Backbones ===")
# Search patterns
patterns = ["*resnet*", "*efficientnet*", "*vit*", "*convnext*"]
for pattern in patterns:
models = at.list_backbones(pattern, pretrained_only=True)
print(f"{pattern}: {len(models)} models")
print("\n=== Model Parameters ===")
for backbone_name in ["resnet18", "resnet50", "efficientnet_b0", "vit_base_patch16_224"]:
backbone = at.create_backbone(backbone_name)
params = at.count_parameters(backbone, trainable_only=False)
features = backbone.num_features
print(f"{backbone_name}: {params:,} params, {features} features")
Output:
=== Available Optimizers ===
torch: adamw, adam, sgd, rmsprop, adagrad
timm: adamp, sgdp, adabelief, radam, adahessian, lamb, lars, madgrad, novograd
=== Available Schedulers ===
torch: chainedscheduler, constantlr, cosineannealinglr, cosineannealingwarmrestarts, cycliclr, exponentiallr, lambdalr, linearlr, multiplicativelr, multisteplr, onecyclelr, polynomiallr, reducelronplateau, sequentiallr, steplr
timm: cosinelrscheduler, multisteplrscheduler, plateaulrscheduler, polylrscheduler, steplrscheduler, tanhlrscheduler
=== Available Backbones ===
*resnet*: 48 models
*efficientnet*: 64 models
*vit*: 98 models
*convnext*: 36 models
=== Model Parameters ===
resnet18: 11,689,512 params, 512 features
resnet50: 23,508,032 params, 2048 features
efficientnet_b0: 4,007,548 params, 1280 features
vit_base_patch16_224: 85,798,656 params, 768 features