[GH-ISSUE #8231] Sonarr script - unable to parse config error #1760

Closed
opened 2026-02-26 12:50:08 +03:00 by kerem · 9 comments

Originally created by @devdecrux on GitHub (Oct 9, 2025).
Original GitHub issue: https://github.com/community-scripts/ProxmoxVE/issues/8231

Have you read and understood the above guidelines?

yes

📜 What is the name of the script you are using?

Sonarr

📂 What was the exact command used to execute the script?

bash -c "$(curl -fsSL https://git.community-scripts.org/community-scripts/ProxmoxVE/raw/branch/main/ct/sonarr.sh)"

⚙️ What settings are you using?

  • Default Settings
  • Advanced Settings

🖥️ Which Linux distribution are you using?

Debian 13

📈 Which Proxmox version are you on?

PVE 9

📝 Provide a clear and concise description of the issue.

When I execute the Sonarr script there is an error (or warning) about being unable to parse the config of VM 103. I tried executing it with advanced and then with default settings. The script does not fail and Sonarr seems to be working, but I haven't tested it extensively.

I think there is a syntax error somewhere in the script.

Edit: I've tried the script on a second homelab server running Proxmox 9 and the error is the same there, with the only difference that it refers to VM 101 instead of VM 103. I have no idea what that means. Does "VM" refer to a virtual machine?

[Screenshot of the error output: https://github.com/user-attachments/assets/ff417519-c0e7-4735-ac2d-0b954d12415c]

🔄 Steps to reproduce the issue.

  1. Copy the script
  2. Execute it in the PVE shell (it doesn't matter whether you use advanced or default settings)
  3. You will see the text from the image

❌ Paste the full error output (if available).

vm 103 - unable to parse config: lxc.idmap.uid = 0 1001 1
vm 103 - unable to parse config: lxc.idmap.uid = 1 100001 1000
vm 103 - unable to parse config: lxc.idmap.uid = 1001 101001 64535
vm 103 - unable to parse config: lxc.idmap.gid = 0 1000 1
vm 103 - unable to parse config: lxc.idmap.gid = 1 100001 1000
vm 103 - unable to parse config: lxc.idmap.gid = 1001 101001 64535
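For anyone hitting the same message: the warnings name every container whose config still contains entries the newer LXC cannot parse. A minimal, self-contained sketch of how to find the offending configs; on a real PVE host they live in /etc/pve/lxc/, and the demo file below is only an illustration:

```shell
# Demo config standing in for /etc/pve/lxc/103.conf (hypothetical path).
mkdir -p /tmp/lxc-demo
cat > /tmp/lxc-demo/103.conf <<'EOF'
lxc.idmap.uid = 0 1001 1
lxc.idmap.gid = 0 1000 1
EOF

# List every config that still uses the old dotted idmap keys.
grep -El '^lxc\.idmap\.(uid|gid)' /tmp/lxc-demo/*.conf
```

On a real host you would point the grep at /etc/pve/lxc/*.conf instead of the demo directory.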

🖼️ Additional context (optional).

No response

kerem closed this issue and added the bug label (2026-02-26 12:50:08 +03:00).

@MickLesk commented on GitHub (Oct 9, 2025):

That's from your host; it appears before the LXC is even created. Maybe an orphaned LVM volume or broken UID/GID rights.


@devdecrux commented on GitHub (Oct 9, 2025):

After a little further investigation I found out the following. I have the LXC containers jellyfin, prowlarr and radarr, each with this configuration in its lxc XXX.conf:

lxc.idmap.uid = 0 1001 1
lxc.idmap.uid = 1 100001 1000
lxc.idmap.uid = 1001 101001 64535
lxc.idmap.gid = 0 1000 1
lxc.idmap.gid = 1 100001 1000
lxc.idmap.gid = 1001 101001 64535
mp0:/mnt/custom-media-nfs,mp=/mnt/custom-media-nfs

I did this because I created an NFS share in my OMV VM with group ID 1000 and user ID 1001. After I stop the containers, there seems to be no issue. Is this a problem, and how can I fix it? More importantly, why is it happening? Is it generally not allowed to map GIDs and UIDs like that?


@MickLesk commented on GitHub (Oct 9, 2025):

PVE 9 uses LXC 6.x, which has major changes in its config syntax.

You can try this replacement:

lxc.idmap = u 0 1001 1
lxc.idmap = u 1 100001 1000
lxc.idmap = u 1001 101001 64535
lxc.idmap = g 0 1000 1
lxc.idmap = g 1 100001 1000
lxc.idmap = g 1001 101001 64535
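If several configs carry the old dotted keys, the rewrite above can be automated. A hedged sketch working on a temporary copy; point the sed at the real /etc/pve/lxc/<CTID>.conf only after backing it up (the path and filename here are illustrative):

```shell
# Work on a copy; the real file would be /etc/pve/lxc/<CTID>.conf.
mkdir -p /tmp/idmap-fix
cat > /tmp/idmap-fix/103.conf <<'EOF'
lxc.idmap.uid = 0 1001 1
lxc.idmap.gid = 0 1000 1
EOF

# Rewrite the old dotted keys into the LXC 6 "lxc.idmap = u|g ..." form.
sed -i -e 's/^lxc\.idmap\.uid = /lxc.idmap = u /' \
       -e 's/^lxc\.idmap\.gid = /lxc.idmap = g /' /tmp/idmap-fix/103.conf

cat /tmp/idmap-fix/103.conf
```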

@devdecrux commented on GitHub (Oct 9, 2025):

This works now, but it seems I am no longer able to log in as root in those LXC containers. After I remove the lines, I can log in again.

Also, none of the services (qbittorrent, sonarr, radarr, prowlarr) are running at the moment. I guess it's connected with losing root privileges in the LXC container.


@MickLesk commented on GitHub (Oct 9, 2025):

When you changed the mapping (per your template) to:

lxc.idmap = u 0 1001 1
lxc.idmap = g 0 1000 1

you effectively remapped the container’s root user (UID 0 / GID 0) to a non-root account on the host (UID 1001 / GID 1000).

This means inside the container, root no longer corresponds to root on the host.

LXC restricts file access because UID 1001 on the host does not have permission to modify system files.

As a result, you cannot log in as root, and system services that require root privileges (like Sonarr, Radarr, qBittorrent, Prowlarr) fail to start.

So technically, “root” still exists inside the container, but it’s now a non-privileged user in the host namespace — it has no write access to /etc, /root, or /var/lib.

Removing the mapping restores normal behavior, because root is again mapped to host UID 0.

The correct solution is to keep root mapped to 0 and only add specific mappings for your NFS user/group.

So you should check your configs; I don't know why your mappings are so unusual.


@devdecrux commented on GitHub (Oct 9, 2025):

I apologize in advance, because I'm already aware that this is not an issue with the repo itself. However, I would be really thankful if you could help me.

Based on your last comment, you mean that I need to create a new user and group in the LXC container and map them to the same UID and GID on the Proxmox host, so as not to touch root.
Unfortunately, that means I would need to switch every service (qbittorrent, sonarr, etc.) to run as the new user in order to keep the read/write/execute permissions configured by the NFS share? Am I correct?
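If the services do need to run as the mapped user, one common pattern is a systemd drop-in that overrides User/Group. The unit name (sonarr.service) and the "media" user/group below are assumptions, not confirmed by the thread; the demo writes to a temporary directory, whereas on a real container the drop-in would go under /etc/systemd/system/sonarr.service.d/ followed by `systemctl daemon-reload`:

```shell
# Hypothetical drop-in for sonarr.service; "media" user/group are assumptions.
# Real location: /etc/systemd/system/sonarr.service.d/override.conf
mkdir -p /tmp/demo/sonarr.service.d
cat > /tmp/demo/sonarr.service.d/override.conf <<'EOF'
[Service]
User=media
Group=media
EOF
cat /tmp/demo/sonarr.service.d/override.conf
```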


@MickLesk commented on GitHub (Oct 9, 2025):

When using LXC containers with NFS shares, there are multiple valid ways to handle ID mapping.


Option 1: Keep root mapped to 0

Keep full root access and only map the NFS user/group.

lxc.idmap = u 0 0 1000
lxc.idmap = u 1000 1000 1
lxc.idmap = u 1001 101001 64534
lxc.idmap = g 0 0 1000
lxc.idmap = g 1000 1000 1
lxc.idmap = g 1001 101001 64534

Pros: root login works, services start normally
Cons: files on NFS appear as root


Option 2: Map root to a host UID/GID

Map container root to host UID/GID for stronger isolation.

lxc.idmap = u 0 1001 1
lxc.idmap = g 0 1000 1

Pros: better isolation
Cons: no root access, services may fail


Option 3: Hybrid mapping

Keep root mapped to 0 and map an additional host user/group.

lxc.idmap = u 0 0 1001
lxc.idmap = u 1001 1001 1
lxc.idmap = u 1002 101002 64534
lxc.idmap = g 0 0 1000
lxc.idmap = g 1000 1000 1
lxc.idmap = g 1001 101001 64534

Pros: root works, NFS permissions correct
Cons: config slightly more complex


@devdecrux commented on GitHub (Oct 9, 2025):

I tried options 1 and 3; the result is that I still can't log in as root and the services are not starting.
Should I edit /etc/subuid and /etc/subgid?
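Possibly yes: for an unprivileged container, every host ID range used in lxc.idmap must also be delegated to root in /etc/subuid and /etc/subgid, and the default root:100000:65536 entry does not cover low host IDs such as 0-999 or 1000/1001. That this is the failure here is an assumption, not something confirmed in the thread. A self-contained sketch of the kind of entries Option 1 would need, using demo files that stand in for the real /etc/subuid and /etc/subgid:

```shell
# Demo stand-ins for /etc/subuid and /etc/subgid. Do NOT overwrite the real
# files blindly; these entries are examples for the Option 1 mapping.
cat > /tmp/demo-subuid <<'EOF'
root:100000:65536
root:0:1000
root:1000:2
EOF
cp /tmp/demo-subuid /tmp/demo-subgid

# Each lxc.idmap line's host range must fall inside one of root's entries;
# e.g. "lxc.idmap = u 0 0 1000" needs root:0:1000, and
# "lxc.idmap = u 1001 101001 64534" is covered by root:100000:65536.
grep -q '^root:0:1000$' /tmp/demo-subuid && echo "host IDs 0-999 delegated"
```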


@MickLesk commented on GitHub (Oct 9, 2025):

Ultimately, I just don't know why you're doing it this way. It would be better if you posted your complete trace in the Proxmox forum and asked there.

This is too much for me to deal with as an issue, since I don't want to (and can't) change anything in your unusual configuration.
