Loggers¶
Logger configuration and management for experiment tracking.
LoggerConfig¶
Configuration for a single logger backend.
API Reference¶
autotimm.LoggerConfig dataclass ¶
Configuration for a single logger backend.
All parameters are required; no defaults are provided, to ensure explicit configuration.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `backend` | `str` | Logger backend type. One of `"tensorboard"`, `"csv"`, `"wandb"`, or `"mlflow"`. | required |
| `params` | `dict[str, Any]` | Parameters passed to the logger constructor. Required keys depend on the backend. | `dict()` |
Example
```python
config = LoggerConfig(
    backend="tensorboard",
    params={"save_dir": "logs", "name": "experiment_1"},
)
```
Source code in src/autotimm/core/loggers.py
Usage Examples¶
TensorBoard¶
```python
from autotimm import LoggerConfig

tb = LoggerConfig(
    backend="tensorboard",
    params={"save_dir": "logs", "name": "experiment_1"},
)
```
Weights & Biases¶
```python
wandb = LoggerConfig(
    backend="wandb",
    params={
        "project": "my-project",
        "name": "run-1",
        "tags": ["resnet", "cifar10"],
    },
)
```
MLflow¶
```python
mlflow = LoggerConfig(
    backend="mlflow",
    params={
        "experiment_name": "cifar10-classification",
        "tracking_uri": "http://localhost:5000",
    },
)
```
CSV Logger¶
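The CSV backend follows the same pattern as the others; per the Supported Backends table, `save_dir` is its only required parameter:

```python
from autotimm import LoggerConfig

csv = LoggerConfig(
    backend="csv",
    params={"save_dir": "logs/csv"},
)
```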
Parameters¶
| Parameter | Type | Default | Description |
|---|---|---|---|
| `backend` | `str` | Required | Logger type |
| `params` | `dict` | `{}` | Backend-specific params |
Supported Backends¶
| Backend | Required Params | Install |
|---|---|---|
| `tensorboard` | `save_dir` | `pip install autotimm[tensorboard]` |
| `csv` | `save_dir` | Built-in |
| `wandb` | `project` | `pip install autotimm[wandb]` |
| `mlflow` | `experiment_name` | `pip install autotimm[mlflow]` |
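The backend-to-required-params mapping above can be sketched as a small validation helper. This is illustrative only: `REQUIRED_PARAMS` and `validate_config` are hypothetical names, not part of autotimm.

```python
# Illustrative sketch of per-backend validation; REQUIRED_PARAMS and
# validate_config are hypothetical helpers, not part of autotimm.
REQUIRED_PARAMS = {
    "tensorboard": {"save_dir"},
    "csv": {"save_dir"},
    "wandb": {"project"},
    "mlflow": {"experiment_name"},
}


def validate_config(backend: str, params: dict) -> None:
    """Raise ValueError if a required key for the backend is missing."""
    if backend not in REQUIRED_PARAMS:
        raise ValueError(f"Unknown backend: {backend!r}")
    missing = REQUIRED_PARAMS[backend] - params.keys()
    if missing:
        raise ValueError(f"{backend}: missing required params {sorted(missing)}")


validate_config("tensorboard", {"save_dir": "logs"})  # passes silently
```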
LoggerManager¶
Manages multiple PyTorch Lightning loggers.
API Reference¶
autotimm.LoggerManager ¶
Manages multiple PyTorch Lightning loggers.
This class creates and manages multiple logger instances from explicit configurations. No default values are provided; all configuration must be specified by the user.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `configs` | `list[LoggerConfig]` | List of `LoggerConfig` objects defining the loggers to create. | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| `loggers` | `list[Logger]` | List of instantiated PyTorch Lightning logger objects. |
Example
```python
manager = LoggerManager(
    configs=[
        LoggerConfig(
            backend="tensorboard",
            params={"save_dir": "logs/tb", "name": "run_1"},
        ),
        LoggerConfig(
            backend="wandb",
            params={"project": "my_project", "name": "run_1"},
        ),
    ]
)
trainer = pl.Trainer(logger=manager.loggers)
```
Source code in src/autotimm/core/loggers.py
loggers
property
¶
Return list of instantiated loggers for use with pl.Trainer.
configs
property
¶
Return the configurations used to create the loggers.
__init__ ¶
get_logger_by_backend ¶
Get the first logger matching the given backend type.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `backend` | `str` | Backend name to search for. | required |
Returns:
| Type | Description |
|---|---|
| `Logger \| None` | The first matching logger, or `None` if not found. |
Source code in src/autotimm/core/loggers.py
Usage Examples¶
Basic Usage¶
```python
from autotimm import LoggerConfig, LoggerManager

manager = LoggerManager(configs=[
    LoggerConfig(backend="tensorboard", params={"save_dir": "logs/tb"}),
    LoggerConfig(backend="csv", params={"save_dir": "logs/csv"}),
])
```
With AutoTrainer¶
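A `LoggerManager` can be passed directly to `AutoTrainer` via its `logger` argument. A minimal sketch, assuming a model and datamodule are already defined:

```python
from autotimm import AutoTrainer, LoggerConfig, LoggerManager

manager = LoggerManager(configs=[
    LoggerConfig(backend="tensorboard", params={"save_dir": "logs/tb"}),
    LoggerConfig(backend="csv", params={"save_dir": "logs/csv"}),
])

trainer = AutoTrainer(
    max_epochs=10,
    logger=manager,
    checkpoint_monitor="val/accuracy",
)
# trainer.fit(model, datamodule=data)  # model and data defined elsewhere
```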
Access Loggers¶
```python
# Get all loggers
all_loggers = manager.loggers

# Get by backend
tb_logger = manager.get_logger_by_backend("tensorboard")
csv_logger = manager.get_logger_by_backend("csv")

# Iterate
for logger in manager:
    print(type(logger))

# Length
print(f"Number of loggers: {len(manager)}")
```
Parameters¶
| Parameter | Type | Description |
|---|---|---|
| `configs` | `list[LoggerConfig]` | List of logger configs |
Methods¶
| Method | Returns | Description |
|---|---|---|
| `loggers` | `list[Logger]` | All instantiated loggers |
| `configs` | `list[LoggerConfig]` | Original configs |
| `get_logger_by_backend(name)` | `Logger \| None` | Find logger by backend |
| `len(manager)` | `int` | Number of loggers |
| `iter(manager)` | `Iterator` | Iterate over loggers |
| `manager[i]` | `Logger` | Get logger by index |
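The container behavior listed in the Methods table (`len(manager)`, iteration, indexing) is standard Python dunder dispatch. An illustrative stand-in class showing the same protocol; this is not autotimm's actual implementation:

```python
# Illustrative stand-in for the container protocol a LoggerManager-style
# class exposes; not autotimm's actual implementation.
class ManagerSketch:
    def __init__(self, loggers):
        self._loggers = list(loggers)

    def __len__(self):          # enables len(manager)
        return len(self._loggers)

    def __iter__(self):         # enables: for logger in manager
        return iter(self._loggers)

    def __getitem__(self, i):   # enables manager[i]
        return self._loggers[i]


m = ManagerSketch(["tb", "csv"])
print(len(m), m[0], list(m))  # -> 2 tb ['tb', 'csv']
```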
Backend Parameters¶
TensorBoard¶
```python
LoggerConfig(
    backend="tensorboard",
    params={
        "save_dir": "logs",          # Required
        "name": "experiment",        # Subdirectory
        "version": "v1",             # Version string
        "log_graph": True,           # Log model graph
        "default_hp_metric": False,  # HP metric logging
        "prefix": "",                # Metric prefix
        "sub_dir": None,             # Additional subdirectory
    },
)
```
Weights & Biases¶
```python
LoggerConfig(
    backend="wandb",
    params={
        "project": "my-project",       # Required
        "name": "run-1",               # Run name
        "id": None,                    # Run ID (for resuming)
        "tags": ["tag1", "tag2"],      # Tags
        "notes": "Experiment notes",   # Description
        "group": "experiment-group",   # Group runs
        "job_type": "training",        # Job type
        "entity": None,                # Team/user
        "save_dir": "wandb_logs",      # Local save directory
        "offline": False,              # Offline mode
        "log_model": False,            # Log model artifacts
        "prefix": "",                  # Metric prefix
    },
)
```
MLflow¶
```python
LoggerConfig(
    backend="mlflow",
    params={
        "experiment_name": "exp",     # Required
        "run_name": "run-1",          # Run name
        "tracking_uri": None,         # MLflow server URL
        "tags": {"env": "dev"},       # Tags
        "save_dir": "mlruns",         # Local artifacts
        "log_model": False,           # Log model
        "prefix": "",                 # Metric prefix
        "artifact_location": None,    # Artifact storage
        "run_id": None,               # For resuming
    },
)
```
CSV Logger¶
```python
LoggerConfig(
    backend="csv",
    params={
        "save_dir": "logs",   # Required
        "name": "metrics",    # Subdirectory
        "version": None,      # Auto-increment if None
        "prefix": "",         # Metric prefix
        "flush_logs_every_n_steps": 100,
    },
)
```
Full Example¶
```python
from autotimm import (
    AutoTrainer,
    ImageClassifier,
    ImageDataModule,
    LoggerConfig,
    LoggerManager,
    MetricConfig,
)

# Data
data = ImageDataModule(
    data_dir="./data",
    dataset_name="CIFAR10",
    image_size=224,
    batch_size=64,
)

# Metrics
metrics = [
    MetricConfig(
        name="accuracy",
        backend="torchmetrics",
        metric_class="Accuracy",
        params={"task": "multiclass"},
        stages=["train", "val", "test"],
        prog_bar=True,
    ),
]

# Model
model = ImageClassifier(
    backbone="resnet50",
    num_classes=10,
    metrics=metrics,
)

# Multiple loggers
logger_manager = LoggerManager(configs=[
    LoggerConfig(
        backend="tensorboard",
        params={"save_dir": "logs/tb", "name": "cifar10"},
    ),
    LoggerConfig(
        backend="csv",
        params={"save_dir": "logs/csv"},
    ),
    LoggerConfig(
        backend="wandb",
        params={"project": "cifar10-experiments", "name": "resnet50-run"},
    ),
])

# Trainer
trainer = AutoTrainer(
    max_epochs=10,
    logger=logger_manager,
    checkpoint_monitor="val/accuracy",
)

# Train
trainer.fit(model, datamodule=data)

# Access specific logger after training
tb = logger_manager.get_logger_by_backend("tensorboard")
print(f"TensorBoard log dir: {tb.log_dir}")
```