[GH-ISSUE #5653] Issue during openwebui.sh installation: /dev/dri/renderD128 added by build.func causes “not a device” error #1210

Closed
opened 2026-02-26 12:48:07 +03:00 by kerem · 9 comments
Owner

Originally created by @Zorrochi on GitHub (Jul 2, 2025).
Original GitHub issue: https://github.com/community-scripts/ProxmoxVE/issues/5653

Have you read and understood the above guidelines?

yes

📜 What is the name of the script you are using?

Open WebUI

📂 What was the exact command used to execute the script?

bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/openwebui.sh)"

⚙️ What settings are you using?

  • [x] Default Settings
  • [ ] Advanced Settings

🖥️ Which Linux distribution are you using?

No response

📝 Provide a clear and concise description of the issue.

First of all, thank you very much for the great work on these scripts! I really appreciate the effort and help they provide.

When running the openwebui.sh script on Proxmox, the devices /dev/dri/renderD128 and /dev/dri/card0 are automatically added to the CT by the build.func function. This causes errors when starting the container, such as:

TASK ERROR: /dev/dri/renderD128 is not a device

And also in the SSH Window

LXC Container 112 was successfully created.
  ⠋   Starting LXC Container/dev/dri/renderD128 is not a device
  ⠹   Starting LXC Container
[ERROR] in line 1155: exit code 0: while executing command pct start "$CTID"

and maybe similarly for /dev/dri/card0.

These errors occur because the script mounts these devices without checking whether they actually exist or are usable on the host system.

Suggested Improvement:
It would be helpful if the script asked the user whether to add /dev/dri/renderD128 and /dev/dri/card0 to the CT before mounting them. This user prompt would allow skipping the device mounts when they are not available or needed, avoiding these errors.

For example, adding a function like this:

function mount_render_devices() {
    read -r -p "Do you want to mount /dev/dri/renderD128 into the CT? (y/n): " choice128
    if [[ "$choice128" =~ ^[Yy]$ ]]; then
        pct set "$CTID" -lxc.mount.entry /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
    fi

    read -r -p "Do you want to mount /dev/dri/card0 into the CT? (y/n): " choicecard
    if [[ "$choicecard" =~ ^[Yy]$ ]]; then
        pct set "$CTID" -lxc.mount.entry /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
    fi
}

and calling it before starting the container would solve the problem.

Relevant part of the script:
The devices are added with these commands inside the build.func function:

pct set $CTID -lxc.mount.entry /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
pct set $CTID -lxc.mount.entry /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
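The prompt could even be avoided entirely by skipping any device that does not actually exist on the host. A minimal sketch of that idea (my own wording, not the project's actual build.func code; `run_mount` only prints the pct command here so the logic can be exercised without a Proxmox host — in the real script it would execute pct instead):

```shell
#!/usr/bin/env bash
CTID="${CTID:-112}"  # example CT ID from this thread

# Print (or, in the real script, run) the pct mount command for a device.
run_mount() {
  echo "pct set $CTID -lxc.mount.entry ${1} ${1#/} none bind,optional,create=file"
}

# Mount only those DRI devices that exist as character devices on the host;
# skip the rest with a note, which avoids the "is not a device" start error.
mount_dri_devices() {
  local dev
  for dev in "$@"; do
    if [[ -c "$dev" ]]; then
      run_mount "$dev"
    else
      echo "skipping $dev: not a character device on this host" >&2
    fi
  done
}

mount_dri_devices /dev/dri/renderD128 /dev/dri/card0
```

The `-c` test is the key: a missing or non-device path is skipped instead of being passed to `pct start`, where it would fail.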

🔄 Steps to reproduce the issue.

My server does not have a graphics card, so this error is probably occurring for me.

It was a default installation.

❌ Paste the full error output (if available).

TASK ERROR: /dev/dri/renderD128 is not a device
[ERROR] in line 1155: exit code 0: while executing command pct start "$CTID"

🖼️ Additional context (optional).

No response

kerem closed this issue and added the bug label on 2026-02-26 12:48:07 +03:00.

@MickLesk commented on GitHub (Jul 3, 2025):

This app is useless without a GPU, why should we do this?

You can create it as privileged in the advanced settings; that worked.


@Zorrochi commented on GitHub (Jul 3, 2025):

Hi. I'm not so sure; does Open WebUI really not work without a GPU? I found this discussion: https://github.com/open-webui/open-webui/discussions/2167

I mean, shouldn't it be up to the user whether they want to use a graphics card in their server? Something like ollama or LM Studio also works fine with a CPU. I don't want to have to buy an NVIDIA RTX 4090 just so my family can use a chatbot plus good data protection. (If the GDPR fails and I have to buy an NVIDIA RTX 4090 to protect my family's data! 😂)

Joking aside, as far as I know Open WebUI is not useless without a GPU, but its performance will be limited. While you can manage models and perform simple tasks using the CPU, running resource-intensive models will be significantly slower. For optimal performance, especially with demanding applications, a GPU is recommended but not mandatory.


@MickLesk commented on GitHub (Jul 3, 2025):

Phew, I don't know. I have an Intel Ultra 9 and use 20 cores for Open WebUI; it's slow as fuck. I'd rather pay 20 dollars a month for GPT or Claude 😅

I use a simple mistral or deepseek, but maybe it's also because I'm a classic IT guy and everything is too slow for me.

Example:
"tell me pi with 10 decimal places" -> that takes over 1 minute


@Zorrochi commented on GitHub (Jul 3, 2025):

I like gemma-3-12b and qwen3 8B. gemma3 needs about 0.8 s for your example prompt on my desktop PC with LM Studio and CPU only (16 real cores).

If I'm lazy, I use mistral (devstral 24B) for coding or to start a project fast, but devstral is slow af, even with an RTX 3080. I think it's because 24B needs more VRAM than my graphics card provides.

I forgot to reply to your advice that I can make the CT privileged and then it should work. I have read that privileged containers have more permissions, which could endanger the entire host. There is also less isolation between the container and the host, which can lead to problems if a container is compromised. And just for my "tin foil hat": A.I. + privileged CT = Terminator 🤖😆


@MickLesk commented on GitHub (Jul 3, 2025):

Can you try the dev branch?

bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVED/main/ct/openwebui.sh)"

@Zorrochi commented on GitHub (Jul 3, 2025):

Thank you! That was fast! O_O

I tested it with a default installation and a default (verbose) installation.

Let me tell you what I noticed during the installation. In general, the installation took a very long time, about 17 minutes.

The following things took a long time:

  • “Updating Container OS”
  • “Installing Open WebUI (Patience)”

This "renderD128", "card0" and "fb0" I think most people don't know what that is, like me.

Is "renderD128" now the graphics card or is it "card0". Is the framebuffer only possible with the graphics card or is it something like a swap that is created separately? I think it should be written in a more user-friendly way .

I had a confusion that took me 3 minutes. (But I hadn't counted it as part of the 17 minutes)
I wondered why shellscript ask me if I want to install ollama, in addition (y/N) No as default.

The default settings are set a little too low.

I downloaded gemma-3-12b, qwen3 8B and about 1% of gemma3n:e4b, and the boot disk / virtual disk was full.
I resized it to 50 GiB; I think 35 GiB is the optimal minimum. Someone could argue "if you use the OpenAI API you don't need so much wasted space", but it's only 10 GiB more, and I assume that most people who have to resize a 25 GiB disk have more annoying work than simply using 10 GiB more.
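For reference, growing a container's root disk is a one-liner on the Proxmox host; a hedged example using the CT ID and target size mentioned in this thread:

```shell
# Grow CT 112's rootfs to 50G from the Proxmox host shell;
# pct resize also grows the filesystem inside the container.
pct resize 112 rootfs 50G
```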

I also changed the CPU core count to 18 and the RAM to 37 GB.
Strangely, only 33.3% of the CPU was used when I asked “tell me pi with 10 decimal places”. The answer took about 28 seconds, but that was also with gemma3:12b. It used 11 GiB of RAM.

So I would recommend 6 cores as a minimum and 12 GiB RAM.

In the verbose installation process I saw a couple of errors; I think some are normal. I took a few screenshots.

![Image](https://github.com/user-attachments/assets/fd54e9a2-5427-469c-b2fe-bdfdf9a580e5)

Visually, there is no space between “...Container OSSetting up...”

There was a "rehash: warning"

![Image](https://github.com/user-attachments/assets/a64fae01-3d46-4108-8644-6bf25e8c3d1c)

Here, a "Requirement already satisfied" message appears multiple times.

Then something like this

![Image](https://github.com/user-attachments/assets/b348584b-c70f-4c58-a998-bec3b4d67ec4)

and something like this

![Image](https://github.com/user-attachments/assets/a6306e2b-da32-4cb2-aadc-793b0ec393dd)

and that one

![Image](https://github.com/user-attachments/assets/5a2a0abb-989e-4823-b3db-af05e8fb8c05)

I think those are errors from the normal Open WebUI installation process, not from your script?


Thanks a lot! I read about tteck in the German Proxmox forum. I didn't even know that he had passed away :(. 1-2 years ago I donated something because the scripts gave me so much pleasure through the time saved.

If I can still help, please let me know.


@tremor021 commented on GitHub (Jul 3, 2025):

  • Installation takes a long time because it's a multi-stage build process. It will take even longer if you choose to install Ollama alongside it. The install part clones the open-webui repository from GitHub, then uses pip to install PyTorch, which takes some time on its own. Then it installs all dependencies for the Python part of the application, then all the NodeJS crap needed for the web app itself, which takes another eternity. So yeah, it's a crap app that takes enormous resources to run. If you add Ollama to the mix, it also needs to download those binaries, which is another 1 GB or so. You can maybe speed up the install by giving it more CPU cores than the default 4, but I'm not sure it will drastically shorten the installation time.

  • Anyway, knowing what renderD128, card0 and such terms mean is part of your Linux knowledge; it's not up to us to educate users on how these things work.

  • Verbose mode is being worked on, so glitches are to be expected until @MickLesk finishes the core script. All those console output messages you see are from the NodeJS build part of the script. You can ignore them, since the build breaks if any real errors are met.


@MickLesk commented on GitHub (Jul 3, 2025):

The spinner in verbose mode is being removed as far as possible; the whole thing is currently being redesigned. I'm already glad that it works reliably again at all. The apps are generally sized for the normal use case, including their resources. If you want to install deepseek or similar, you have to increase the resources yourself. A small Mistral 2.3b should definitely fit. All other messages come from NodeJS (i.e. OpenWebUI). We added the Ollama question because some people separate the two, i.e. run Ollama in a separate LXC. Unfortunately, OpenWebUI itself also needs quite a lot of resources due to the number of JS and TypeScript components; we ultimately have no influence over that. The script as it stands also runs on my "crap PC" (Intel N95 with 16 GB RAM); that's always the benchmark: it should run cleanly on that.


@Zorrochi commented on GitHub (Jul 3, 2025):

Hi tremor/Slaviša,

good to know. I don't think that more CPU cores would speed up the installation process. Just like you said. As an analogy: "Thousands of crumbs are baked individually to make a cake at the end." And for each small crumb or packet, a connection has to be established and a download started, followed by an installation, among other things. All this in a waiting loop, one after the other, which takes time.

Thank you both for your work and the clarifications!


@MickLesk
Interesting. I definitely find the whole script magic behind it fascinating. At first I thought: "Why size the container so low? No LLM will ever run properly on that, except the crap models that invent fantasy languages." But it hadn't occurred to me that there is also the use case of an external LLM via API; that probably needs very few resources, so your approach makes sense. Fortunately, it's no drama to adjust the virtual components and even enlarge the disk.
About two years ago I took the leap from TrueNAS apps, which felt like they broke every three months, to virtualizing TrueNAS in Proxmox and, instead of TrueNAS apps, simply running containers directly in Proxmox. That was so much better! Since then I've had no problems, e.g. with Home Assistant.


As my computer science teacher said, you have to ask the right questions and formulate them well and thoroughly. The same goes for LLMs. Then you'll get a good result:

In the Linux world, "renderD128", "card0", and "fb0" are graphics device nodes:

  • renderD128: The GPU's DRM render node. It lets unprivileged processes use the GPU for off-screen rendering and compute work (e.g. video acceleration, OpenCL) without any control over the display.

  • card0: The primary DRM device node for the first graphics card. It provides full access to the GPU, including modesetting and display control.

  • fb0: "Framebuffer 0", the first legacy framebuffer device: a simple memory-mapped interface to the display that predates the DRM subsystem.
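A quick way to see which of these nodes a given host actually exposes (a simple sketch; each of them, when present, is a character device, so `test -c` detects it):

```shell
# Probe a list of device paths and report whether each one exists
# as a character device (DRM card/render nodes and fb0 all are).
probe() {
  local dev
  for dev in "$@"; do
    if [[ -c "$dev" ]]; then
      echo "$dev: present"
    else
      echo "$dev: absent"
    fi
  done
}

probe /dev/dri/card0 /dev/dri/renderD128 /dev/fb0
```

On a host without a GPU (like the one in this report), all three would come back "absent", which is exactly the case the installer trips over.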
