[GH-ISSUE #23] Feature request: Batch size standard: 1, Bulk Generation 1 #24

Closed
opened 2026-02-26 21:30:50 +03:00 by kerem · 1 comment

Originally created by @Basthet on GitHub (Feb 5, 2026).
Original GitHub issue: https://github.com/fspecii/ace-step-ui/issues/23

Please have a heart for us poor 8GB VRAM owners. A batch size greater than 1 makes ACE-Step crash, but generation is fast enough that I don't mind bulk-generating around 5 instead.

It would be best to make it user-adjustable (e.g. a storable preference), a start parameter, or a config file option.

Nevertheless: Thanks for this great UI!

kerem closed this issue 2026-02-26 21:30:50 +03:00

@fspecii commented on GitHub (Feb 5, 2026):

Hi @Basthet, thanks for the suggestion and the kind words!

We've just pushed changes that address this:

  1. Batch size now defaults to 1 — safe for 8GB VRAM GPUs out of the box
  2. Both batch size and bulk count are now persisted in your browser — change them once and they'll stick for all future sessions
  3. New LM Backend selector — you can switch from VLLM (~9.2 GB) to PT (~1.6 GB) in Advanced Settings, which saves a lot of VRAM
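
For anyone curious how per-user persistence like point 2 typically works, here is a minimal sketch. The names (`SETTINGS_KEY`, `loadSettings`, `saveSettings`, the key string, and the clamp ranges) are illustrative assumptions, not the actual ace-step-ui code; in the browser the `KVStore` would simply be `window.localStorage`.

```typescript
// Hypothetical sketch of persisting generation settings with safe defaults.
interface GenSettings {
  batchSize: number; // defaults to 1, safe for 8 GB VRAM
  bulkCount: number; // sequential jobs, 1-10
}

// Minimal storage interface so the sketch also runs outside a browser;
// in a real UI this would be window.localStorage.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const SETTINGS_KEY = "ace-step-ui/generation-settings"; // assumed key name
const DEFAULTS: GenSettings = { batchSize: 1, bulkCount: 1 };

function clamp(n: number, lo: number, hi: number): number {
  return Math.min(hi, Math.max(lo, n));
}

function loadSettings(store: KVStore): GenSettings {
  const raw = store.getItem(SETTINGS_KEY);
  if (raw === null) return { ...DEFAULTS };
  try {
    const parsed = JSON.parse(raw) as Partial<GenSettings>;
    return {
      batchSize: clamp(parsed.batchSize ?? DEFAULTS.batchSize, 1, 8),
      bulkCount: clamp(parsed.bulkCount ?? DEFAULTS.bulkCount, 1, 10),
    };
  } catch {
    return { ...DEFAULTS }; // corrupted entry: fall back to safe defaults
  }
}

function saveSettings(store: KVStore, s: GenSettings): void {
  store.setItem(SETTINGS_KEY, JSON.stringify(s));
}
```

Falling back to the defaults on a missing or corrupted entry is what keeps a fresh session safe on 8 GB cards.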

So the workflow for 8GB users is: keep batch size at 1, use the PT backend, and use bulk generation (1-10 jobs queued sequentially) if you want multiple variations — each job runs one at a time, so VRAM stays low.
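
The queuing behavior described above can be sketched as a simple sequential loop. This is an illustrative assumption about the approach, not the actual ace-step-ui implementation; `generateOne` stands in for whatever call kicks off a single generation job.

```typescript
// Hypothetical sketch: bulk generation as a sequential queue.
// Each job is awaited before the next starts, so peak VRAM stays
// at single-job levels instead of multiplying with the bulk count.
async function runBulk<T>(
  count: number,
  generateOne: (jobIndex: number) => Promise<T>,
): Promise<T[]> {
  const results: T[] = [];
  for (let i = 0; i < count; i++) {
    results.push(await generateOne(i)); // no concurrent jobs in flight
  }
  return results;
}
```

Compare this with `Promise.all`, which would launch all jobs at once and stack their VRAM usage — exactly what an 8 GB card cannot afford.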

Closing this as resolved — feel free to reopen if you need anything else!
