mirror of
https://github.com/sirtoobii/vaultwarden_ldap_sync.git
synced 2026-04-26 20:45:54 +03:00
[GH-ISSUE #10] The LDAP request is only run on start #9
Originally created by @GeorgelT on GitHub (Jan 31, 2025).
Original GitHub issue: https://github.com/sirtoobii/vaultwarden_ldap_sync/issues/10
Thank you for creating this sync script, we've been trying to get this functionality also.
I've managed to get it to work, it recognizes the desired users and filters, but we've noticed that the LDAP request that looks for changes only runs on startup. The guide on the main page runs the docker container detached, meaning it's running like a service.
The ldap_emails variable only gets initialized and populated once, in the main body before the loop:

ldap_emails = ldc.get_email_list()

After that you start the long while loop, which doesn't call the LDAP function again; it just reads the existing email list.
Either I'm misunderstanding this, or there is some timeout/cache in the LDAP request, but changes we make in our LDAP backend don't get reflected in the sync job.
If I restart the container a new sync takes place, the new state gets recognized and changes are applied to vaultwarden.
I've tried adding the above line just after the heartbeat and timer area, but it did not help.
I would really appreciate it if the script would pick up LDAP changes live, or at least document the timeout so we know when to expect changes to be applied after a new user matches the LDAP filter.
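The fix described above can be sketched as follows; only get_email_list() comes from the script itself, while run_sync, the reconcile callback, and the interval handling are illustrative assumptions, not the script's actual API:

```python
import time

def run_sync(ldc, reconcile, interval_s=900, max_iterations=None):
    """Sketch of a corrected main loop (hypothetical names except get_email_list).

    ldc        -- LDAP client exposing get_email_list() (name from the script)
    reconcile  -- callable applying the email list to vaultwarden (assumed)
    """
    n = 0
    while max_iterations is None or n < max_iterations:
        # The fix: call get_email_list() *inside* the loop so changes in
        # the LDAP backend are seen without restarting the container.
        ldap_emails = ldc.get_email_list()
        reconcile(ldap_emails)
        n += 1
        if max_iterations is None or n < max_iterations:
            time.sleep(interval_s)
```

With the call outside the loop, the list is frozen at startup, which matches the restart-only behaviour reported here.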
@GeorgelT commented on GitHub (Jan 31, 2025):
It also seems to disable the users, but on the next scan/run it does not detect the user as disabled. Maybe something has changed in the vaultwarden API: when I re-enable that user, subsequent syncs don't enable the user again in vaultwarden either.
@sirtoobii commented on GitHub (Jan 31, 2025):
Hi @GeorgelT, thank you for the report. There is indeed an error in the program logic: get_email_list() should be called inside the loop. I suspect this went unnoticed due to the long (default) sleep time between sync attempts. I've pushed a new branch test/GeorgeIT with a lowered default value and a "Random" email source. Would you mind giving it a short test?

@GeorgelT commented on GitHub (Feb 3, 2025):
Here is the output from the initial startup, it has tried to invite the example emails from the source list.
I will update you again later, but Monday is a day full of meetings.
Later edit: it disabled the users from the mail list; they are not listed as disabled users in the log, but they are indeed disabled in the vaultwarden instance.
@GeorgelT commented on GitHub (Feb 3, 2025):
Ok, we were able to check the LDAP filtering. We have 2 test users that we add to or remove from the filtered group.
At the moment I can't see any change in the debug log output when we make changes in LDAP. Running the LDAP requests manually we get the updated data, so we can rule out LDAP caching behaviour.
Regarding the behaviour mentioned previously: if a user has been disabled, he should be enabled again in the instance once he's back in the LDAP group. But when the user is added back to the group, nothing happens, even after container rebuilds/restarts.
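The expected behaviour described above amounts to a set reconciliation between LDAP and vaultwarden. A minimal sketch, where plan_actions and its inputs are hypothetical helpers rather than the script's actual code:

```python
def plan_actions(ldap_emails, vw_users):
    """Hypothetical reconciliation step (not the script's actual code).

    ldap_emails -- emails currently matching the LDAP filter
    vw_users    -- mapping email -> enabled flag for users vaultwarden knows
    Returns (invite, disable, reenable) lists of emails.
    """
    ldap = set(ldap_emails)
    # Unknown to vaultwarden and present in LDAP: invite.
    invite = sorted(ldap - vw_users.keys())
    # Known, enabled, but gone from LDAP: disable.
    disable = sorted(e for e, enabled in vw_users.items() if enabled and e not in ldap)
    # The case reported here: previously disabled but back in the
    # LDAP group, so it must be re-enabled rather than ignored.
    reenable = sorted(e for e, enabled in vw_users.items() if not enabled and e in ldap)
    return invite, disable, reenable
```

The re-enable branch is exactly the transition that was not happening in the reported behaviour.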
@sirtoobii commented on GitHub (Feb 8, 2025):
Can confirm this behavior, working on it
@sirtoobii commented on GitHub (Feb 9, 2025):
@GeorgelT I ended up doing a major refactoring of the entire codebase. Can you test this branch?
Short summary of changes:
@GeorgelT commented on GitHub (Feb 11, 2025):
I was able to do some testing; not sure if something is buggy, but it doesn't list what it found in vaultwarden anymore. We tried disabling a user and then re-enabling it, and nothing changed in the debugging output. On the vaultwarden side nothing has changed either. I don't think it works anymore; we let it run for some time and even restarted it a couple of times.
Previously, even if I were to ignore the vaultwarden listing, I would see the requests to the admin API. I don't think it does anything right now, or maybe I tested something wrong.
@sirtoobii commented on GitHub (Feb 11, 2025):
Pushed a fix for the missing print statements; can you try again? (Don't forget to rebuild the container.)
@GeorgelT commented on GitHub (Feb 11, 2025):
First off, thank you very much for being so responsive. The new debug listing looks very good.
Things I can confirm at the moment:
Shortened output from one run to the next:
I manually removed the one user that is no longer in LDAP above. When I add him back he gets invited to vaultwarden as normal; he also gets disabled when I remove him from LDAP, and now he also gets re-enabled when he's back in the group.
I would say the desired basic functionality now works, as long as the user is added/managed by the plugin. Existing users are not adopted or detected from the vaultwarden instance itself; I guess the local state is not being read correctly from vaultwarden.
For completeness my vaultwarden instance has the following version numbers if you wish to debug the API response:
Server Installed: 1.32.7
Server Latest: 1.33.2
Web Installed: 2024.6.2c
Database: SQLite 3.46.0
Thank you for the fixes so far!
@sirtoobii commented on GitHub (Feb 11, 2025):
You need to let it run once with VUS_ADOPT=1 (in .env), as adoption does not happen automatically.

@GeorgelT commented on GitHub (Feb 18, 2025):
I added the above VUS_ADOPT=1 flag but it seems it leads to the container crashing on startup.
@sirtoobii commented on GitHub (Feb 18, 2025):
@GeorgelT This is expected behavior as it is most likely not intended to keep the adoption mode on. I'm going to update the PR to make this more clear.
@GeorgelT commented on GitHub (Feb 19, 2025):
If I understand you correctly, it should adopt that user after that one exited run, right? It does not seem to do that, though; the user that is not adopted is still there even after running it that way.
@ravinder0402 commented on GitHub (Feb 19, 2025):
@GeorgelT @sirtoobii I am having an issue with the regular sync: the vaultwarden_ldap_sync database does not store the current LDAP data until the container/pod restarts.
Then I found that in sync.py the line ldap_emails = ldc.get_email_list() is not placed inside the while loop that runs on every sync after a specific interval, which is why the current state of users in LDAP is not updated in the vaultwarden_ldap_sync database. I moved ldap_emails = ldc.get_email_list() inside the while loop, so whenever a sync runs it detects any change in LDAP and stores it in the database, and we no longer need to restart the vaultwarden_ldap_sync container/pod.
@sirtoobii commented on GitHub (Feb 19, 2025):
@ravinder0402 This bug is the exact reason why @GeorgelT opened this issue in the first place. Please try the "big-refactoring" branch
@sirtoobii commented on GitHub (Feb 19, 2025):
@GeorgelT Yes, this is how it's supposed to work. I need to verify this locally. Just to be sure, you're not running with DRY_RUN=1?

Edit: Verified that I was using the wrong variable to check if we're in adoption mode. I've pushed a fix to #11.
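Boolean flags such as VUS_ADOPT=1 and DRY_RUN=1 are easy to mix up when each call site reads the environment directly. A minimal sketch of centralizing the parsing; env_flag is a hypothetical helper, not code from the script:

```python
import os

def env_flag(name, default=False):
    """Hypothetical helper: parse a boolean flag (e.g. VUS_ADOPT=1)
    from the environment / .env, accepting common truthy spellings."""
    val = os.environ.get(name)
    if val is None:
        return default
    return val.strip().lower() in ("1", "true", "yes", "on")

# One named flag per behaviour makes "checked the wrong variable"
# bugs easier to spot in review.
adopt_mode = env_flag("VUS_ADOPT")
dry_run = env_flag("DRY_RUN")
```

Funneling every flag through one helper keeps the adoption-mode check from silently reading the dry-run variable, the kind of mix-up fixed here.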
@ravinder0402 commented on GitHub (Feb 19, 2025):
@sirtoobii I am saying that I have already fixed that issue by adding ldap_emails = ldc.get_email_list() inside the while loop in sync.py.
@sirtoobii commented on GitHub (Feb 25, 2025):
@GeorgelT were you able to verify that adoption now works as expected? If yes, I would merge #11 to main. And thank you as well for your continuous help with testing!
@GeorgelT commented on GitHub (Feb 28, 2025):
Sorry for the delay; it was quite a full week at work with other projects. I re-downloaded the branch and re-ran the container with the adopt flag both on and off. So far it seems not to adopt users, just like previously.
With the flag removed and a rerun, same output:
Subsequent sync runs do not seem to change that.
@sirtoobii commented on GitHub (Mar 4, 2025):
Hey @GeorgelT, as I cannot reproduce the reported behavior and your first log snippet just ends with "exited with code 0", I suspect that your codebase is not updated to the latest commit of #11. Can you verify that? With the latest commit the message should be: