Overview

NightFlow is a native desktop application purpose-built for automated deep learning on images. It wraps a complete image model training pipeline — from dataset ingestion and augmentation through training, metric visualization, and model export — inside a fast, privacy-first interface that runs entirely on your own hardware.

💡 Privacy First

No cloud accounts, no telemetry, no tracking. All your data — datasets, models, metrics — stays on your machine. The entire application works offline.

Whether you're training an image classifier, an object detector, or a segmentation model, NightFlow handles the full lifecycle:

  • Automated model training with auto-tuning of learning rate and batch size
  • Experiment lifecycle management — project organization, run history, metric tracking, and comparison
  • Model interpretation and explainability — GradCAM, Integrated Gradients, Attention Rollout, and more
  • Remote GPU training via SSH — run jobs on powerful servers without leaving the app
  • Production export — TorchScript and ONNX output for deployment anywhere

Core Concepts

Understanding these concepts will help you navigate NightFlow effectively.

📁 Projects

A project is the top-level container. It holds your dataset configuration, model settings, training parameters, and all associated runs.

🏃 Runs

Each time you train a model, NightFlow creates a run. Runs capture every hyperparameter, metric, and checkpoint so experiments are reproducible.

📊 Metrics

Training and validation metrics (loss, accuracy, custom metrics) are streamed in real time and persisted for later comparison across runs.

🧠 Models

NightFlow supports 1,000+ backbones from the timm library — ResNet, EfficientNet, ViT, ConvNeXt, Swin, and many more.

Dashboard

The Dashboard is your command center. It provides an overview of your current project and quick access to training and monitoring.

Summary Cards

The dashboard displays key statistics for the active project:

  • Total runs, running count, and best accuracy achieved
  • Test metrics from evaluated runs

SSH Connection Management

Connect to remote GPU servers directly from the dashboard. The connection banner shows status and uptime, and you can sync project data between local and remote machines.

Training Panel

Launch training runs, monitor live progress with epoch and batch indicators, and view estimated time remaining. A training queue lets you line up multiple runs for sequential execution.

System Metrics

Real-time monitoring of CPU, memory, disk, and GPU utilization with color-coded usage bars. Metrics refresh every 3 seconds during training.

Sync & Logs

Sync project configurations and training data with remote servers. A sync logs panel shows the progress and history of all sync operations.

Project Wizard

The guided project wizard walks you through every step of setting up a new experiment — no code required.

Step 1: Dataset Configuration

Point NightFlow to your image directory. The wizard will auto-detect your folder structure and class labels. Supported structures include ImageFolder (one subfolder per class) and common annotation formats for detection and segmentation.
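The ImageFolder convention (one subfolder per class) can be sketched in a few lines. This is an illustrative helper, not NightFlow's actual detection code — `detect_classes` is a hypothetical name:

```python
from pathlib import Path
import tempfile

def detect_classes(root: Path) -> list[str]:
    """Treat each immediate subdirectory of the dataset root as a class
    label, mirroring the torchvision ImageFolder convention."""
    return sorted(p.name for p in root.iterdir() if p.is_dir())

# Build a tiny example layout: root/cats, root/dogs
root = Path(tempfile.mkdtemp())
for cls in ("cats", "dogs"):
    (root / cls).mkdir()

print(detect_classes(root))  # ['cats', 'dogs']
```

A wizard built on this idea would then count images per class folder to produce the class distribution it shows you.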

Step 2: Model Selection

Choose from 1,000+ backbone architectures. The wizard provides recommendations based on your dataset size and task. You can filter by model family, parameter count, and accuracy benchmarks.

Step 3: Augmentation

Select from built-in augmentation presets or configure individual transforms. The wizard includes a live preview showing augmented samples before training begins.

Step 4: Training Configuration

Configure hyperparameters including:

  • Learning rate — set manually or use auto-discovery (LR Finder)
  • Batch size — set manually or let NightFlow find the optimal size for your GPU
  • Epochs — with optional early stopping
  • Optimizer — SGD, Adam, AdamW, and more
  • Scheduler — Cosine, Step, OneCycle, etc.
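The batch-size auto-discovery mentioned above is commonly implemented as a doubling search: try progressively larger batches until one no longer fits in GPU memory, then keep the last size that did. A minimal sketch of that heuristic, with memory simulated by a predicate (this is an assumption about the approach, not NightFlow's code):

```python
def find_max_batch_size(fits_in_memory, start=2, limit=4096):
    """Double the batch size until a trial step no longer fits, then
    return the last size that did (a common auto-batch-size heuristic)."""
    size = start
    best = None
    while size <= limit and fits_in_memory(size):
        best = size
        size *= 2
    return best

# Simulated GPU: anything above 96 samples per batch "runs out of memory".
print(find_max_batch_size(lambda b: b <= 96))  # 64
```

A real implementation would run a forward/backward pass inside a try/except for out-of-memory errors instead of a predicate.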

🔧 Auto-Tuning

Enable auto-tuning to let NightFlow automatically discover the optimal learning rate and batch size before training begins. This typically improves convergence and final accuracy.
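The LR Finder idea can be sketched as an exponential sweep that picks the learning rate where the loss falls fastest. This is a simplified illustration of the general technique with a synthetic loss curve, not NightFlow's implementation:

```python
import math

def lr_finder(loss_at, lr_min=1e-6, lr_max=1.0, steps=50):
    """Sweep learning rates on a log scale and return the one where the
    loss curve drops fastest -- the usual LR-finder heuristic."""
    lrs = [lr_min * (lr_max / lr_min) ** (i / (steps - 1)) for i in range(steps)]
    losses = [loss_at(lr) for lr in lrs]
    drops = [losses[i] - losses[i + 1] for i in range(steps - 1)]
    return lrs[drops.index(max(drops))]

def toy_loss(lr):
    # Synthetic curve: plateau at tiny LRs, steepest improvement near
    # lr = 1e-2, divergence past lr ~ 0.3.
    x = math.log10(lr)
    return -math.tanh(x + 2) + max(0.0, x + 0.5) ** 2

best = lr_finder(toy_loss)  # lands near 1e-2 for this toy curve
```

In practice the sweep runs real mini-batch steps and smooths the loss before picking the steepest-descent point.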

Training

Once configured, launch training from the dashboard or the project wizard. NightFlow orchestrates the full pipeline.

Local Training

Training runs directly on your machine using your local GPU (NVIDIA CUDA or Apple Metal) or CPU. NightFlow shows real-time system metrics — CPU, memory, and GPU utilization — throughout the process.

Remote Training via SSH

Connect to any remote server with SSH and train on powerful GPUs without leaving the app. NightFlow handles:

  • SSH connection and authentication (password or key-based)
  • Syncing your project configuration to the remote server
  • Launching and monitoring the training process
  • Streaming metrics back to your desktop in real time
  • Downloading checkpoints when training completes

Real-Time Metrics

During training, live metrics are streamed to the charts view — loss, accuracy, and any custom metrics. You can monitor training progress without waiting for it to complete.

Experiment Tracking

NightFlow provides built-in experiment management — no external tools like Weights & Biases or MLflow needed.

Runs Table

Every experiment is recorded in a searchable, sortable table showing:

  • Run ID and status (completed, running, failed)
  • Model backbone and task type
  • All hyperparameters (learning rate, batch size, optimizer, etc.)
  • Final metrics (loss, accuracy, custom metrics)
  • Training duration and timestamps

Run Detail View

Click any run to open a detailed view with:

  • Full metric charts organized by stage (train, validation, test)
  • Confusion matrices and per-class metrics for all three splits
  • Hyperparameter display with augmentation pipeline summary
  • Tagging, notes, and run metadata
  • Deploy panel with model export, Hugging Face Hub push, and inference script generation

The deploy button appears only when training is complete and a checkpoint file exists.

Run Comparison

Select multiple runs and compare them side-by-side with overlaid metric charts. Easily identify which hyperparameter changes led to improvements.

Charts & Visualization

NightFlow renders high-fidelity interactive charts for all your training metrics.

Available Charts

  • Loss curves — training and validation loss over epochs
  • Accuracy plots — top-1 and top-5 accuracy trends
  • Learning rate schedule — visualize LR changes across training
  • Custom metrics — any metric logged during training

Confusion Matrix

NightFlow displays confusion matrices for all three dataset splits — Train, Validation, and Test — in a dedicated Classification tab. Each matrix shows:

  • Per-cell counts and percentages with a heatmap color scale
  • Diagonal highlighting for correct predictions
  • Full class names displayed via rotated column headers (no truncation)
  • Automatic class name detection from the dataset folder structure
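Building a confusion matrix from predictions is straightforward to sketch; the code below is a generic illustration (rows are true labels, columns are predictions), not NightFlow's internals:

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, classes):
    """Count (true, predicted) pairs; rows are true classes,
    columns are predicted classes."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in classes] for t in classes]

classes = ["cat", "dog"]
m = confusion_matrix(["cat", "cat", "dog", "dog"],
                     ["cat", "dog", "dog", "dog"], classes)
print(m)  # [[1, 1], [0, 2]]
```

The per-cell percentages shown in the heatmap are just each count divided by its row total.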

Per-Class Metrics

Alongside each confusion matrix, a full-width per-class metrics panel shows Precision, Recall, and F1 for every class, with sortable columns and visual bar charts. Available for train, validation, and test splits.
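Precision, Recall, and F1 all derive directly from the confusion matrix. A self-contained sketch of the standard formulas (illustrative, not NightFlow's code):

```python
def per_class_metrics(matrix):
    """Precision, recall, and F1 for each class of a confusion matrix
    (rows = true labels, columns = predictions)."""
    n = len(matrix)
    out = []
    for c in range(n):
        tp = matrix[c][c]
        fp = sum(matrix[r][c] for r in range(n)) - tp   # predicted c, wrongly
        fn = sum(matrix[c]) - tp                        # true c, missed
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        out.append({"precision": prec, "recall": rec, "f1": f1})
    return out

stats = per_class_metrics([[1, 1], [0, 2]])
# class 0: precision 1.0, recall 0.5; class 1: precision 0.667, recall 1.0
```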

Interaction

Charts support hover tooltips, zoom, and pan. You can also overlay multiple runs on the same chart for direct comparison. Charts gracefully handle NaN and Infinity values by breaking lines at gaps rather than rendering artifacts.
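Breaking lines at gaps amounts to splitting a metric series into contiguous finite segments. A minimal sketch of that preprocessing step (the approach is an assumption; NightFlow's chart code may differ):

```python
import math

def split_at_gaps(values):
    """Split a metric series into contiguous finite segments so a chart
    can break the line at NaN/Infinity instead of drawing artifacts."""
    segments, current = [], []
    for v in values:
        if math.isfinite(v):
            current.append(v)
        elif current:
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments

segs = split_at_gaps([0.9, 0.7, float("nan"), 0.5, float("inf"), 0.4])
print(segs)  # [[0.9, 0.7], [0.5], [0.4]]
```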

Model Interpretation

Understand what your model learned and why it makes specific predictions. NightFlow includes six built-in interpretation methods.

  • GradCAM (gradient) — class-discriminative localization using gradients flowing into the final convolutional layer
  • GradCAM++ (gradient) — weighted variant that better localizes multiple instances of the same class
  • Integrated Gradients (attribution) — axiomatic pixel-level attribution by integrating gradients along the path from a baseline
  • SmoothGrad (attribution) — noise-averaged gradient saliency for sharper, less noisy attribution maps
  • Attention Rollout (attention) — aggregated attention across all layers of a Vision Transformer
  • Attention Flow (attention) — graph-based maximum-flow computation through the attention graph
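Attention Rollout aggregates attention by mixing each layer's attention map with the identity (to account for residual connections), renormalizing, and multiplying layers together. A tiny pure-Python sketch of the core recurrence, using nested lists instead of real tensors:

```python
def attention_rollout(layers):
    """Aggregate attention across transformer layers: mix each layer's
    attention with the identity (residual connection), renormalize rows,
    and multiply the layers together."""
    n = len(layers[0])
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for attn in layers:
        mixed = [[0.5 * attn[i][j] + (0.5 if i == j else 0.0)
                  for j in range(n)] for i in range(n)]
        mixed = [[v / sum(row) for v in row] for row in mixed]
        result = [[sum(mixed[i][k] * result[k][j] for k in range(n))
                   for j in range(n)] for i in range(n)]
    return result

# Two layers of uniform attention over 2 tokens.
r = attention_rollout([[[0.5, 0.5], [0.5, 0.5]]] * 2)
```

A real implementation works on the head-averaged attention tensors saved during the forward pass.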

How It Works

Upload an image by dragging and dropping or clicking to browse (supports PNG, JPG, WEBP). Select a completed run and an interpretation method, and NightFlow generates a side-by-side comparison of the original image and the interpretation heatmap. You can also preview augmentation effects on your uploaded image before running interpretation.

💡 When to Use What

Use GradCAM/GradCAM++ for convolutional models (ResNet, EfficientNet). Use Attention Rollout/Flow for Vision Transformers (ViT, Swin). Integrated Gradients and SmoothGrad work with any architecture.

Dataset Browser

Visually explore your dataset before and after training. The browser provides:

  • Grid view — browse all images in your dataset with paginated thumbnails (50 per page)
  • Split detection — automatically detects train/val/test folder structure and shows tabs to switch between splits
  • Class filtering — interactive class distribution sidebar with clickable bar charts to filter by label
  • Full-text search — search images by filename or label
  • Image preview — click any thumbnail for a full-resolution lightbox view with metadata
  • Dataset statistics — total images, number of classes, and average samples per class

Supported Formats

The browser supports three dataset formats:

  • Folder (ImageFolder) — class-per-subfolder structure, with automatic train/val/test split detection
  • CSV — image path and label columns, with separate train/val/test file paths
  • JSONL — one JSON object per line with image path and label fields
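Reading a JSONL manifest is a one-liner per record. The field names `image` and `label` below are an assumption for illustration; NightFlow's expected keys may differ:

```python
import json

def load_jsonl(lines):
    """Parse a JSONL dataset manifest: one JSON object per line,
    here assumed to carry 'image' and 'label' fields."""
    records = [json.loads(line) for line in lines if line.strip()]
    return [(r["image"], r["label"]) for r in records]

sample = [
    '{"image": "imgs/001.jpg", "label": "cat"}',
    '{"image": "imgs/002.jpg", "label": "dog"}',
]
pairs = load_jsonl(sample)
print(pairs)  # [('imgs/001.jpg', 'cat'), ('imgs/002.jpg', 'dog')]
```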

Netron Integration

NightFlow includes a built-in Netron viewer for visualizing neural network architectures directly inside the app.

  • Load any exported model (TorchScript .pt or ONNX .onnx)
  • Explore layer-by-layer architecture with detailed operator information
  • Inspect tensor shapes, parameter counts, and layer properties
  • No browser or external tool needed

Terminal

A full-featured pseudo-terminal is built into NightFlow, powered by xterm.js with WebGL-accelerated rendering.

  • Local shell — run any command without leaving the app
  • SSH sessions — connect to remote servers for training or file management
  • Hardware-accelerated — WebGL rendering for smooth, high-performance output
  • Full PTY support — interactive programs, command completion, scroll history

Training Queue

Queue multiple training configurations for sequential execution. Instead of manually starting each run, add them to the queue and NightFlow will train them one after another.

  • Sequential execution — runs are launched automatically when the previous one finishes
  • Per-project queues — each project maintains its own queue
  • Status tracking — see which runs are queued, running, completed, or failed
  • Crash recovery — orphaned training sessions are detected on app restart so you can resume or clean up
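The queue semantics above can be sketched as a simple sequential loop where a failed run is recorded but does not block the rest. This is an illustrative model of the behavior, not NightFlow's scheduler:

```python
def run_queue(configs, train):
    """Sequentially train queued configs, recording per-run status;
    a failed run does not stop the rest of the queue."""
    statuses = []
    for cfg in configs:
        try:
            train(cfg)
            statuses.append((cfg, "completed"))
        except Exception:
            statuses.append((cfg, "failed"))
    return statuses

def fake_train(cfg):
    if cfg == "bad-config":
        raise RuntimeError("simulated crash")

print(run_queue(["run-a", "bad-config", "run-b"], fake_train))
```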

Model Export & Deployment

Export your trained models for production deployment in three industry-standard formats:

  • TorchScript (.pt) — PyTorch-native deployment, mobile (iOS/Android via PyTorch Mobile)
  • ONNX (.onnx) — universal format: ONNX Runtime, TensorRT, CoreML conversion, edge devices
  • TensorRT (.engine) — NVIDIA-optimized inference for maximum GPU throughput

Export is available from the run detail view once training completes and a checkpoint file exists. Select the best checkpoint and choose your output format.

Push to Hugging Face Hub

Deploy your trained model directly to the Hugging Face Hub from within NightFlow. The deploy panel lets you:

  • Specify a repository ID (in username/model-name format)
  • Set your Hugging Face API token
  • Choose between public or private repositories
  • Add model card metadata — model name, description, license, and tags

NightFlow automatically strips optimizer state and sensitive paths from the checkpoint before uploading, and generates a model card with your metadata, task type, backbone, and accuracy metrics.
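Stripping optimizer state and local paths amounts to dropping a few keys from the checkpoint dictionary before upload. The key names below are illustrative guesses, not NightFlow's actual checkpoint schema:

```python
def sanitize_checkpoint(ckpt):
    """Drop optimizer/scheduler state and local filesystem paths before
    uploading a checkpoint (key names here are illustrative)."""
    drop = {"optimizer_state", "scheduler_state"}
    clean = {k: v for k, v in ckpt.items() if k not in drop}
    clean.pop("dataset_path", None)  # never leak local paths
    return clean

ckpt = {"model_state": {"w": [0.1]}, "optimizer_state": {"momentum": []},
        "dataset_path": "/home/alice/data", "epoch": 12}
print(sorted(sanitize_checkpoint(ckpt)))  # ['epoch', 'model_state']
```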

Inference Script Generation

NightFlow generates ready-to-use Python inference scripts for your exported model, complete with preprocessing, model loading, prediction code, and syntax highlighting. Class names from the project are auto-populated into the script parameters.
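Auto-populating class names into a generated script is essentially template substitution. A minimal sketch of the idea; the template text and parameter names are hypothetical, not NightFlow's actual generator:

```python
from string import Template

SCRIPT = Template('''\
# Auto-generated inference script (illustrative template).
CLASS_NAMES = $classes
MODEL_PATH = "$model_path"

def predict(image_path):
    ...  # load model, preprocess image, run inference
''')

def generate_script(classes, model_path):
    """Fill class names and checkpoint path into a script template."""
    return SCRIPT.substitute(classes=repr(classes), model_path=model_path)

script = generate_script(["cat", "dog"], "runs/42/model.onnx")
```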

Settings

Configure NightFlow to match your workflow:

  • Theme — switch between dark and light mode
  • Project configuration — edit dataset format, paths, task type, model selection, and all training hyperparameters per project
  • Training hyperparameters — max epochs, learning rate, batch size, optimizer, scheduler, weight decay, precision, gradient clipping, image size, augmentation preset, early stopping, and backbone freezing
  • Task types — classification, multi-label classification, object detection (FCOS/YOLOX), semantic and instance segmentation
  • SSH key management — manage SSH keys for remote server connections
  • GPU device configuration — select which GPU device to use for training
  • Import / Export — back up and restore your entire NightFlow configuration, projects, and run history
  • Database reset — clear all local data and start fresh

Sidebar

The sidebar provides quick access to all views: Dashboard, Experiments, Dataset Browser, Interpretation, Model Viewer, Terminal, and Settings. It also includes a project switcher and a Buy Me a Coffee link at the bottom to support the project.

Keyboard Shortcuts

Navigate NightFlow faster with built-in keyboard shortcuts:

  • Ctrl/⌘ + 1 — Dashboard
  • Ctrl/⌘ + 2 — Experiments
  • Ctrl/⌘ + 3 — Dataset Browser
  • Ctrl/⌘ + 4 — Interpretation
  • Ctrl/⌘ + 5 — Model Viewer (Netron)
  • Ctrl/⌘ + 6 — Terminal
  • Ctrl/⌘ + 7 — Settings
  • Ctrl/⌘ + K — Show keyboard shortcuts