Root cause: torch.load() reads the 6.9GB .ckpt into the Python heap while
the model params (~7GB) also sit in CPU RAM, a ~14GB peak that exceeds the
16GB of system RAM → OOM Killer.
Fix 1 - mmap=True on all torch.load() calls (torch 2.7 supports this):
With mmap, checkpoint storage is file-backed (not heap). Only the model
parameters (also ~7GB) exist in physical RAM during loading. Peak RAM
drops from ~14GB to ~7GB — within safe limits on 16GB machines.
Files changed: pipelines.py, hunyuan3ddit.py, model.py (×2), flow_matching_sit.py
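A minimal sketch of the changed call pattern, assuming a small wrapper
(the real call sites are in the files listed above):

    import torch

    def load_ckpt_mmap(ckpt_path: str):
        # mmap=True keeps checkpoint storages file-backed instead of copying
        # them into the Python heap; map_location='cpu' stages tensors in RAM
        # until the explicit .to('cuda') transfer.
        return torch.load(ckpt_path, map_location="cpu", mmap=True)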
Fix 2 - malloc_trim(0) after every gc.collect():
Forces glibc to return freed heap pages to OS immediately, so Python's
memory pool doesn't hoard freed model memory before the next load.
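Sketch of the trim helper (function name is illustrative; glibc-only, so the
lookup is guarded):

    import ctypes
    import gc

    def collect_and_trim():
        gc.collect()
        try:
            # malloc_trim(0) asks glibc to return free heap pages to the OS
            # immediately instead of keeping them in the allocator's pool.
            ctypes.CDLL("libc.so.6").malloc_trim(0)
        except OSError:
            pass  # non-glibc platform: nothing to trim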
Fix 3 - PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True:
Prevents CUDA allocator fragmentation between model switches.
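The setting only takes effect if it is present before the CUDA allocator
initializes; one way to guarantee that (a sketch, equivalent to exporting
the variable in the launch script):

    import os

    # must be set before the first CUDA allocation (safest: before torch import)
    os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")

    import torch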
Fix 4 - Adaptive threshold recalculated:
With mmap, loading a model now needs ~7.5GB (model params) instead of ~14GB.
CPU offload threshold lowered from 16GB → 10.5GB, enabling the fast path on
machines that previously had to fall back to full delete + reload.
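Illustrative check with the new number (helper names follow the
adaptive-offload commit below; 10.5GB is ~7.5GB load requirement plus
headroom):

    OFFLOAD_THRESHOLD_GB = 10.5  # was 16.0 before mmap loading

    def _can_offload_to_cpu() -> bool:
        free_gb = _get_available_ram_gb()
        # logged so the chosen path is visible in the server output
        print(f"[low_vram] {free_gb:.1f}GB RAM free, "
              f"threshold {OFFLOAD_THRESHOLD_GB}GB")
        return free_gb >= OFFLOAD_THRESHOLD_GB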
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Two root causes of CUDA OOM fixed:
1. onnxruntime-gpu CUDAExecutionProvider pre-allocated a ~12GB VRAM arena
for bria-rmbg background removal, starving the PyTorch models.
Fix: force CPUExecutionProvider in BackgroundRemover (rembg is
lightweight, runs fine on CPU, and frees all VRAM for shape/tex); see the
sketch after this list.
2. Previous 'always delete' strategy was wasteful on high-RAM machines.
New adaptive strategy checks available system RAM at runtime:
- RAM >= 16GB free: offload i23d to CPU (.to('cpu')) — fast, ~1s
- RAM < 16GB free: full del + reload from disk — safe, ~20-30s
This gives instant model switching on 32GB+ machines while keeping
16GB machines safe from OOM Killer.
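Sketch of pinning the removal session to CPU, assuming a rembg version whose
new_session() accepts a providers list (the BackgroundRemover shape here is
illustrative):

    from rembg import new_session, remove

    class BackgroundRemover:
        def __init__(self):
            # CPUExecutionProvider keeps onnxruntime-gpu from reserving a
            # multi-GB CUDA arena at session creation.
            self.session = new_session(providers=["CPUExecutionProvider"])

        def __call__(self, image):
            return remove(image, session=self.session)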
Helper functions:
- _prepare_for_tex(): adaptive offload/delete based on RAM check
- _ensure_i23d_worker(): restore from CPU (fast) or disk (slow)
- _get_available_ram_gb(): reads /proc/meminfo
- _can_offload_to_cpu(): threshold check with logging
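Rough shape of the RAM probe (Linux-specific, matching the /proc/meminfo
note above):

    def _get_available_ram_gb() -> float:
        # MemAvailable already accounts for reclaimable page cache, which is
        # what matters when deciding if another ~7GB model can be staged.
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemAvailable:"):
                    return int(line.split()[1]) / (1024 * 1024)  # kB -> GB
        return 0.0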
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Previous hybrid strategy (i23d in CPU RAM, tex del'd) still caused OOM:
- i23d in CPU RAM: ~7GB
- tex loading from disk: ~7GB peak in RAM before GPU transfer
- Total: ~14GB > 16GB system RAM → OOM Killer
New strategy: fully delete both models between uses.
Neither model persists in CPU RAM between requests.
Peak RAM during any load: ~7GB (one model staging to GPU).
Changes:
- Replace _offload_i23d_to_cpu/_restore_i23d_to_gpu with
_unload_i23d_worker/_ensure_i23d_worker (full del + reload)
- Add double gc.collect() + empty_cache before each load
- Skip i23d startup load in low_vram_mode (load on first request)
- Both models reload from local HF cache (~20-30s each)
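The unload/reload pair, roughly (the pipeline constructor and model path are
placeholders for whatever the server actually builds):

    import gc
    import torch

    def _unload_i23d_worker(self):
        if self.i23d_worker is not None:
            del self.i23d_worker
            self.i23d_worker = None
        gc.collect()
        torch.cuda.empty_cache()

    def _ensure_i23d_worker(self):
        if self.i23d_worker is None:
            gc.collect()
            gc.collect()  # second pass catches objects freed by the first
            torch.cuda.empty_cache()
            # reload from the local HF cache (~20-30s)
            self.i23d_worker = build_i23d_pipeline(self.model_path)  # placeholder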
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Instead of .to('cpu') / .to('cuda'), models are now fully del'd from
GPU (no CPU intermediate) and reloaded on demand:
- _unload_i23d_worker(): del + gc.collect() + empty_cache()
- _ensure_i23d_worker(): lazy reload from pretrained if None
- _unload_tex_pipeline(): del + gc.collect() + empty_cache()
- _ensure_tex_pipeline(): lazy load from tex_conf if None
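The texture-side pair follows the same shape; a sketch (the constructor name
and tex_conf handling are assumptions):

    def _unload_tex_pipeline(self):
        self.tex_pipeline = None  # drop the only strong reference
        gc.collect()
        torch.cuda.empty_cache()

    def _ensure_tex_pipeline(self):
        if self.tex_pipeline is None:
            # lazy-load from the stored config on the first texture request
            self.tex_pipeline = build_tex_pipeline(self.tex_conf)  # placeholder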
generation_all() flow in low_vram_mode:
shape gen → _unload_i23d_worker → _ensure_tex_pipeline →
texture gen → _unload_tex_pipeline
(shape model reloads on next _gen_shape call via _ensure_i23d_worker)
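Condensed sketch of that low_vram_mode flow (pipeline call signatures are
assumptions; error handling omitted):

    def generation_all(self, image, **params):
        mesh = self._gen_shape(image, **params)  # _ensure_i23d_worker() inside
        self._unload_i23d_worker()               # free VRAM before texturing
        self._ensure_tex_pipeline()
        textured = self.tex_pipeline(mesh, image=image)
        self._unload_tex_pipeline()
        return textured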
Startup: tex_pipeline NOT loaded in low_vram_mode (only tex_conf stored),
reducing startup VRAM from ~13.5GB to ~7.25GB.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- generation_all(): offload i23d_worker to CPU before texture gen and
restore it afterwards, mirroring batch_generate.py's sequential strategy
(sketched after this list). Prevents OOM when both models peak
simultaneously on the RTX 3080.
- Change texture config: max_num_view 8→9, resolution 768→512.
768 resolution OOMs (14.6GB activation); 512 is the practical max for
an RTX 3080 20GB. max_num_view 9 gives better texture coverage.
- Only active when --low_vram_mode flag is passed.
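Rough shape of the offload-around-texture step this adds (attribute names
and the texture call are assumptions about the serving code):

    mesh = self._gen_shape(image)
    self.i23d_worker.to("cpu")          # free VRAM for the texture pipeline
    torch.cuda.empty_cache()
    textured = self.tex_pipeline(mesh, image=image)
    self.i23d_worker.to("cuda")         # restore for the next shape request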
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>