Utilities & Logging

Utility functions for model inspection, discovery, reproducibility, and library-wide logging.

logger

Shared loguru logger instance used throughout AutoTimm. All internal logging (trainer messages, export progress, interpretation output) goes through this logger.

Usage Examples

Basic Usage

from autotimm import logger

logger.info("Training started")
logger.warning("Low GPU memory")
logger.success("Model exported successfully")
logger.error("Checkpoint not found")

Adjusting Log Level

from autotimm.core.logging import logger

# Suppress info messages (show only warnings and above)
logger.remove()
import sys
logger.add(sys.stderr, level="WARNING")

# Enable debug messages
logger.remove()
logger.add(sys.stderr, level="DEBUG")

Default Format

HH:mm:ss | LEVEL    | module:function - message

log_table

Log a formatted ASCII table using loguru.

Usage Examples

from autotimm.core.logging import log_table

log_table(
    title="Model Comparison",
    headers=["Model", "Params", "Accuracy"],
    rows=[
        ["ResNet-18", "11.7M", "94.2%"],
        ["ResNet-50", "25.6M", "96.1%"],
        ["EfficientNet-B0", "5.3M", "95.8%"],
    ],
)
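The exact rendering is up to the library, but the general shape of such an ASCII table can be sketched in plain Python (an illustration only, not AutoTimm's actual implementation):

```python
def sketch_table(title: str, headers: list[str], rows: list[list[str]]) -> str:
    """Render a minimal ASCII table (illustrative only)."""
    cols = [headers] + rows
    # Width of each column = widest cell in that column
    widths = [max(len(str(r[i])) for r in cols) for i in range(len(headers))]
    rule = "-+-".join("-" * w for w in widths)
    fmt = " | ".join("{:<%d}" % w for w in widths)
    out = [title, fmt.format(*headers), rule]
    out += [fmt.format(*row) for row in rows]
    return "\n".join(out)

print(sketch_table(
    "Model Comparison",
    ["Model", "Params", "Accuracy"],
    [["ResNet-18", "11.7M", "94.2%"], ["ResNet-50", "25.6M", "96.1%"]],
))
```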

Parameters

Parameter  Type             Description
title      str              Table title displayed above the table
headers    list[str]        Column header names
rows       list[list[str]]  List of rows; each row is a list of string values

seed_everything

Set random seeds for reproducibility across all libraries.

API Reference

autotimm.seed_everything

seed_everything(seed: int = 42, deterministic: bool = False) -> int

Set random seeds for reproducibility across all libraries.

Seeds Python's random, NumPy, PyTorch (CPU and CUDA), and sets environment variables for deterministic behavior.

Parameters:

Name           Type  Default  Description
seed           int   42       Random seed value.
deterministic  bool  False    If True, enables deterministic algorithms in PyTorch. This may impact performance but ensures fully reproducible results.

Returns:

Type  Description
int   The seed value that was set.

Example

>>> seed_everything(42)
42
>>> # For fully deterministic training (slower but reproducible)
>>> seed_everything(42, deterministic=True)
42

Note

Setting deterministic=True may reduce performance. Use it only when full reproducibility is required (e.g., for research or debugging).

Source code in src/autotimm/core/utils.py
def seed_everything(seed: int = 42, deterministic: bool = False) -> int:
    """Set random seeds for reproducibility across all libraries.

    Seeds Python's random, NumPy, PyTorch (CPU and CUDA), and sets environment
    variables for deterministic behavior.

    Parameters:
        seed: Random seed value. Default is 42.
        deterministic: If ``True``, enables deterministic algorithms in PyTorch.
            This may impact performance but ensures fully reproducible results.
            Default is ``False``.

    Returns:
        The seed value that was set.

    Example:
        >>> seed_everything(42)
        42
        >>> # For fully deterministic training (slower but reproducible)
        >>> seed_everything(42, deterministic=True)
        42

    Note:
        Setting ``deterministic=True`` may reduce performance. Use it only when
        full reproducibility is required (e.g., for research or debugging).
    """
    # Python random
    random.seed(seed)

    # NumPy
    np.random.seed(seed)

    # PyTorch
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # For multi-GPU

    # Environment variables for additional reproducibility
    os.environ["PYTHONHASHSEED"] = str(seed)

    # PyTorch backends
    if deterministic:
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
        # Enable deterministic algorithms (PyTorch 1.8+)
        try:
            torch.use_deterministic_algorithms(True)
        except AttributeError:
            # Fallback for older PyTorch versions
            torch.set_deterministic(True)
    else:
        # Enable cuDNN benchmark for faster training (default)
        torch.backends.cudnn.benchmark = True
        torch.backends.cudnn.deterministic = False

    return seed

Usage Examples

Default Seed

from autotimm import seed_everything

# Seed all RNGs with default value (42)
seed_everything()

Deterministic Mode

# Full deterministic training (slower but reproducible)
seed_everything(42, deterministic=True)

Custom Seed

seed_everything(123)

Parameters

Parameter      Type  Default  Description
seed           int   42       Random seed value
deterministic  bool  False    Enable deterministic algorithms (may reduce performance)

Returns

Type  Description
int   The seed value that was set

What Gets Seeded

  • Python's random module
  • NumPy's random number generator
  • PyTorch (CPU & all CUDA devices)
  • PYTHONHASHSEED environment variable
  • cuDNN backend settings (when deterministic=True)
  • torch.use_deterministic_algorithms() (when deterministic=True)
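
What this buys can be demonstrated with Python's random module alone; the same replay guarantee applies to NumPy and PyTorch once they are seeded:

```python
import random

# Re-seeding with the same value replays the exact same sequence.
random.seed(42)
first = [random.random() for _ in range(3)]

random.seed(42)
second = [random.random() for _ in range(3)]

assert first == second  # identical draws after identical seeds
```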

Notes

  • Setting deterministic=True may reduce performance due to slower deterministic algorithm implementations.
  • When deterministic=False (default), cuDNN benchmark mode is enabled for faster training.
  • All task classes support seed and deterministic parameters. By default seed=None (no seeding). Set seed=42 explicitly for reproducibility.
  • If seed=None is passed to a task class, seed_everything() is not called and a warning is emitted if deterministic=True.

count_parameters

Count the number of parameters in a model.

API Reference

autotimm.count_parameters

count_parameters(model: Module, trainable_only: bool = True) -> int

Return the number of parameters in a model.

Parameters:

Name            Type    Default   Description
model           Module  required  A torch.nn.Module.
trainable_only  bool    True      If True, count only parameters with requires_grad=True.
Source code in src/autotimm/core/utils.py
def count_parameters(model: nn.Module, trainable_only: bool = True) -> int:
    """Return the number of parameters in a model.

    Parameters:
        model: A ``torch.nn.Module``.
        trainable_only: If ``True``, count only parameters with
            ``requires_grad=True``.
    """
    if trainable_only:
        return sum(p.numel() for p in model.parameters() if p.requires_grad)
    return sum(p.numel() for p in model.parameters())

Usage Examples

Trainable Parameters

import autotimm as at  # recommended alias

model = at.ImageClassifier(
    backbone="resnet50",
    num_classes=10,
    metrics=metrics,  # a metrics list configured beforehand
)

trainable = at.count_parameters(model)
print(f"Trainable parameters: {trainable:,}")

All Parameters

total = at.count_parameters(model, trainable_only=False)
print(f"Total parameters: {total:,}")

Backbone Only

backbone = at.create_backbone("resnet50")
print(f"Backbone parameters: {at.count_parameters(backbone):,}")

Parameters

Parameter       Type       Default   Description
model           nn.Module  required  PyTorch model
trainable_only  bool       True      Count only trainable params

Returns

Type  Description
int   Number of parameters
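
The counting logic itself is a one-line reduction over .numel(); here is a torch-free sketch with stand-in parameter objects (FakeParam and count_parameters_sketch are illustrative names, not AutoTimm API):

```python
import math
from dataclasses import dataclass

@dataclass
class FakeParam:
    """Stand-in for torch.nn.Parameter: exposes .numel() and .requires_grad."""
    shape: tuple
    requires_grad: bool = True

    def numel(self) -> int:
        return math.prod(self.shape)

def count_parameters_sketch(params, trainable_only: bool = True) -> int:
    # Same reduction as AutoTimm's count_parameters, applied to any
    # iterable of objects exposing .numel() and .requires_grad.
    if trainable_only:
        return sum(p.numel() for p in params if p.requires_grad)
    return sum(p.numel() for p in params)

params = [FakeParam((3, 3, 64)), FakeParam((64,), requires_grad=False)]
print(count_parameters_sketch(params))                        # 576 (trainable only)
print(count_parameters_sketch(params, trainable_only=False))  # 640 (all)
```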

list_optimizers

List available optimizers from torch and timm.

API Reference

autotimm.list_optimizers

list_optimizers(include_timm: bool = True) -> dict[str, list[str]]

List available optimizers from torch and optionally timm.

Parameters:

Name          Type  Default  Description
include_timm  bool  True     If True, include timm optimizers (requires timm).

Returns:

Type                  Description
dict[str, list[str]]  Dictionary with keys "torch" and optionally "timm", each containing a list of optimizer names.

Example

>>> optimizers = list_optimizers()
>>> print(optimizers["torch"])
['adadelta', 'adagrad', 'adam', 'adamax', 'adamw', 'asgd', ...]
>>> print(optimizers.get("timm", []))
['adabelief', 'adafactor', 'adahessian', 'adamp', ...]

Source code in src/autotimm/core/utils.py
def list_optimizers(include_timm: bool = True) -> dict[str, list[str]]:
    """List available optimizers from torch and optionally timm.

    Parameters:
        include_timm: If ``True``, include timm optimizers (requires timm).

    Returns:
        Dictionary with keys ``"torch"`` and optionally ``"timm"``, each containing
        a list of optimizer names.

    Example:
        >>> optimizers = list_optimizers()
        >>> print(optimizers["torch"])
        ['adadelta', 'adagrad', 'adam', 'adamax', 'adamw', 'asgd', ...]
        >>> print(optimizers.get("timm", []))
        ['adabelief', 'adafactor', 'adahessian', 'adamp', ...]
    """
    import inspect
    import torch.optim as torch_optim

    # Dynamically discover PyTorch optimizers
    torch_optimizers = []
    for name, obj in inspect.getmembers(torch_optim):
        if (
            inspect.isclass(obj)
            and issubclass(obj, torch_optim.Optimizer)
            and obj is not torch_optim.Optimizer
        ):
            torch_optimizers.append(name.lower())

    torch_optimizers.sort()
    result = {"torch": torch_optimizers}

    if include_timm:
        try:
            import timm.optim as timm_optim

            # Dynamically discover timm optimizers
            timm_optimizers = []
            for name, obj in inspect.getmembers(timm_optim):
                if (
                    inspect.isclass(obj)
                    and issubclass(obj, torch_optim.Optimizer)
                    and obj.__module__.startswith("timm.optim")
                ):
                    # Use lowercase name without 'Optimizer' suffix
                    clean_name = name.replace("Optimizer", "").lower()
                    if clean_name and clean_name not in timm_optimizers:
                        timm_optimizers.append(clean_name)

            timm_optimizers.sort()
            result["timm"] = timm_optimizers
        except ImportError:
            result["timm"] = []

    return result
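
The inspect.getmembers discovery pattern above is not torch-specific; the same idea applied to a stdlib class hierarchy (purely illustrative):

```python
import inspect
import numbers  # any module with a class hierarchy works as a demo

# Discover subclasses of numbers.Number, mirroring how list_optimizers
# discovers torch.optim.Optimizer subclasses: iterate members, keep
# classes that subclass the base without being the base itself.
found = sorted(
    name.lower()
    for name, obj in inspect.getmembers(numbers)
    if inspect.isclass(obj)
    and issubclass(obj, numbers.Number)
    and obj is not numbers.Number
)
print(found)  # ['complex', 'integral', 'rational', 'real']
```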

Usage Examples

All Optimizers

import autotimm as at  # recommended alias

optimizers = at.list_optimizers()
print("Torch optimizers:", optimizers["torch"])
print("Timm optimizers:", optimizers.get("timm", []))

Torch Only

optimizers = at.list_optimizers(include_timm=False)
print(optimizers["torch"])

Parameters

Parameter     Type  Default  Description
include_timm  bool  True     Include timm optimizers

Returns

Type                  Description
dict[str, list[str]]  Dict with "torch" and "timm" keys

Available Optimizers

Optimizers are dynamically discovered from PyTorch and timm. Common optimizers include:

Torch:

  • adamw - AdamW
  • adam - Adam
  • sgd - SGD
  • rmsprop - RMSprop
  • adagrad - Adagrad

Timm:

  • adamp - AdamP
  • sgdp - SGDP
  • adabelief - AdaBelief
  • radam - RAdam
  • adahessian - Adahessian
  • lamb - LAMB
  • lars - LARS
  • madgrad - MADGRAD
  • novograd - NovoGrad

list_schedulers

List available learning rate schedulers from torch and timm.

API Reference

autotimm.list_schedulers

list_schedulers(include_timm: bool = True) -> dict[str, list[str]]

List available learning rate schedulers from torch and optionally timm.

Parameters:

Name          Type  Default  Description
include_timm  bool  True     If True, include timm schedulers (requires timm).

Returns:

Type                  Description
dict[str, list[str]]  Dictionary with keys "torch" and optionally "timm", each containing a list of scheduler names.

Example

>>> schedulers = list_schedulers()
>>> print(schedulers["torch"])
['chainedscheduler', 'constantlr', 'cosineannealinglr', ...]
>>> print(schedulers.get("timm", []))
['cosinelrscheduler', 'multisteplrscheduler', ...]

Source code in src/autotimm/core/utils.py
def list_schedulers(include_timm: bool = True) -> dict[str, list[str]]:
    """List available learning rate schedulers from torch and optionally timm.

    Parameters:
        include_timm: If ``True``, include timm schedulers (requires timm).

    Returns:
        Dictionary with keys ``"torch"`` and optionally ``"timm"``, each containing
        a list of scheduler names.

    Example:
        >>> schedulers = list_schedulers()
        >>> print(schedulers["torch"])
        ['chainedscheduler', 'constantlr', 'cosineannealinglr', ...]
        >>> print(schedulers.get("timm", []))
        ['cosinelrscheduler', 'multisteplrscheduler', ...]
    """
    import inspect
    import torch.optim.lr_scheduler as torch_scheduler

    # Classes to exclude (not actual schedulers)
    exclude_classes = {
        "LRScheduler",
        "_LRScheduler",
        "Optimizer",
        "Counter",
        "Tensor",
        "Any",
        "SupportsFloat",
        "partial",
        "ref",
    }

    # Dynamically discover PyTorch schedulers
    torch_schedulers = []
    for name, obj in inspect.getmembers(torch_scheduler):
        if (
            inspect.isclass(obj)
            and not name.startswith("_")
            and name not in exclude_classes
            and obj.__module__ == "torch.optim.lr_scheduler"
            and (
                # Match common scheduler patterns
                "LR" in name
                or "Scheduler" in name
                or "Cyclic" in name
                or "Annealing" in name
                or "Warm" in name
            )
        ):
            # Convert to lowercase for consistency
            torch_schedulers.append(name.lower())

    torch_schedulers.sort()
    result = {"torch": torch_schedulers}

    if include_timm:
        try:
            import timm.scheduler as timm_scheduler

            # Dynamically discover timm schedulers
            timm_schedulers = []
            for name, obj in inspect.getmembers(timm_scheduler):
                if (
                    inspect.isclass(obj)
                    and hasattr(obj, "step")
                    and obj.__module__.startswith("timm.scheduler")
                    and not name.startswith("_")
                ):
                    # Use lowercase name
                    clean_name = name.lower()
                    if clean_name and clean_name not in timm_schedulers:
                        timm_schedulers.append(clean_name)

            timm_schedulers.sort()
            result["timm"] = timm_schedulers
        except ImportError:
            result["timm"] = []

    return result

Usage Examples

All Schedulers

import autotimm as at  # recommended alias

schedulers = at.list_schedulers()
print("Torch schedulers:", schedulers["torch"])
print("Timm schedulers:", schedulers.get("timm", []))

Torch Only

schedulers = at.list_schedulers(include_timm=False)
print(schedulers["torch"])

Parameters

Parameter     Type  Default  Description
include_timm  bool  True     Include timm schedulers

Returns

Type                  Description
dict[str, list[str]]  Dict with "torch" and "timm" keys

Available Schedulers

Schedulers are dynamically discovered from PyTorch and timm. Common schedulers include:

PyTorch (15 total):

  • chainedscheduler - ChainedScheduler
  • constantlr - ConstantLR
  • cosineannealinglr - CosineAnnealingLR
  • cosineannealingwarmrestarts - CosineAnnealingWarmRestarts
  • cycliclr - CyclicLR
  • exponentiallr - ExponentialLR
  • lambdalr - LambdaLR
  • linearlr - LinearLR
  • multiplicativelr - MultiplicativeLR
  • multisteplr - MultiStepLR
  • onecyclelr - OneCycleLR
  • polynomiallr - PolynomialLR
  • reducelronplateau - ReduceLROnPlateau
  • sequentiallr - SequentialLR
  • steplr - StepLR

Timm (6 total):

  • cosinelrscheduler - CosineLRScheduler
  • multisteplrscheduler - MultiStepLRScheduler
  • plateaulrscheduler - PlateauLRScheduler
  • polylrscheduler - PolyLRScheduler
  • steplrscheduler - StepLRScheduler
  • tanhlrscheduler - TanhLRScheduler

Full Example

import autotimm as at  # recommended alias

# List available options
print("=== Available Optimizers ===")
optimizers = at.list_optimizers()
for source, names in optimizers.items():
    print(f"{source}: {', '.join(names)}")

print("\n=== Available Schedulers ===")
schedulers = at.list_schedulers()
for source, names in schedulers.items():
    print(f"{source}: {', '.join(names)}")

print("\n=== Available Backbones ===")
# Search patterns
patterns = ["*resnet*", "*efficientnet*", "*vit*", "*convnext*"]
for pattern in patterns:
    models = at.list_backbones(pattern, pretrained_only=True)
    print(f"{pattern}: {len(models)} models")

print("\n=== Model Parameters ===")
for backbone_name in ["resnet18", "resnet50", "efficientnet_b0", "vit_base_patch16_224"]:
    backbone = at.create_backbone(backbone_name)
    params = at.count_parameters(backbone, trainable_only=False)
    features = backbone.num_features
    print(f"{backbone_name}: {params:,} params, {features} features")

Output:

=== Available Optimizers ===
torch: adadelta, adagrad, adam, adamax, adamw, asgd, ...
timm: adabelief, adafactor, adahessian, adamp, ...

=== Available Schedulers ===
torch: chainedscheduler, constantlr, cosineannealinglr, cosineannealingwarmrestarts, ...
timm: cosinelrscheduler, multisteplrscheduler, plateaulrscheduler, ...

=== Available Backbones ===
*resnet*: 48 models
*efficientnet*: 64 models
*vit*: 98 models
*convnext*: 36 models

=== Model Parameters ===
resnet18: 11,689,512 params, 512 features
resnet50: 23,508,032 params, 2048 features
efficientnet_b0: 4,007,548 params, 1280 features
vit_base_patch16_224: 85,798,656 params, 768 features