Mirror of https://github.com/lldap/lldap.git, synced 2026-04-25 08:15:52 +03:00
[GH-ISSUE #126] Feature Request for Replication/High Availability #55
Originally created by @ekrekeler on GitHub (Mar 20, 2022).
Original GitHub issue: https://github.com/lldap/lldap/issues/126
I think this is a great alternative to OpenLDAP and FreeIPA for self-hosted environments, with just one caveat.
Correct me if I am wrong, but it appears that lldap doesn't support any replication or high availability features if I want to run this on more than one server. I'm just opening an issue for it in case anyone else would be interested in such a thing.
@nitnelave commented on GitHub (Mar 20, 2022):
Hmm, this is not something that I really had considered... Although it should be straightforward once we support other databases. I think once we support Postgresql, you should be able to put as many frontends as you want in front of the DB. High availability for the DB is left as an exercise to the reader :)
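To make the "many frontends, one DB" idea concrete, here is a rough docker-compose sketch. The image tag, environment variable names, and ports follow lldap's documentation at the time of writing, but verify them against your version; the shared `LLDAP_JWT_SECRET` and `LLDAP_KEY_SEED` values are an assumption based on lldap's design, since all instances sharing a DB need consistent keys.

```yaml
# Sketch only: two stateless lldap frontends sharing one Postgres.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: lldap
      POSTGRES_PASSWORD: changeme
      POSTGRES_DB: lldap
  lldap-1:
    image: lldap/lldap:stable
    environment:
      LLDAP_DATABASE_URL: postgres://lldap:changeme@db/lldap
      LLDAP_JWT_SECRET: REPLACE_WITH_RANDOM
      LLDAP_KEY_SEED: REPLACE_WITH_RANDOM
    ports: ["3890:3890"]
  lldap-2:
    image: lldap/lldap:stable
    environment:
      LLDAP_DATABASE_URL: postgres://lldap:changeme@db/lldap
      LLDAP_JWT_SECRET: REPLACE_WITH_RANDOM   # must match lldap-1
      LLDAP_KEY_SEED: REPLACE_WITH_RANDOM     # must match lldap-1
    ports: ["3891:3890"]
```

A load balancer (or DNS round-robin) in front of ports 3890/3891 then gives the frontend redundancy; DB high availability remains, as noted, an exercise for the reader.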
@ekrekeler commented on GitHub (Mar 20, 2022):
I agree, it should be possible to leverage replication abilities of external databases once they are supported. Let me know if you would like help testing any of that.
@nitnelave commented on GitHub (Jul 15, 2022):
I'll mark it as duplicate of https://github.com/nitnelave/lldap/issues/87
@xxxserxxx commented on GitHub (Jan 28, 2025):
Because this is a long comment, BLUF: does lldap gracefully handle the DB being changed by an external program while running, or does it need to be shut down before changes are made (and restarted after)? In other words, is there any reason why I shouldn't use a tool such as Litestream or LiteFS to replicate the database between a master lldap instance and a slave lldap instance?
If I did this, what considerations do I need to account for? Will disaster strike if the DB is changed by something other than lldap -- like, is lldap caching a significant piece of the DB (e.g., an index) in memory that could get out of sync and cause problems?
I assert that lldap can be replicated by replicating the SQLite DB (via, e.g., Litestream) if:

- lldap doesn't cache any part of the DB in memory (except during the active processing of a request);
- lldap always calls out to the DB, and the DB interface library doesn't perform any caching; and
- lldap doesn't hard-crash or persist errors when DB queries fail.

The expectation is that there will sometimes be race conditions where the DB is being changed exactly when a query is happening, and that this could cause a request failure; these failures can be considered transient and retried. Assuming reasonable network bandwidth and latency, and given the relatively small size of the lldap SQLite DB, these transient errors should be infrequent.
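The "treat failures as transient and retry" idea is a standard pattern. A minimal generic sketch (this is not lldap API; `TransientError` stands in for whatever retriable error your client surfaces when the DB file is swapped mid-query):

```python
import time

class TransientError(Exception):
    """Stand-in for whatever retriable error your DB/LDAP client raises."""

def with_retries(query, attempts=3, base_delay=0.05):
    """Run a zero-arg callable, retrying transient failures with backoff.

    `query` could wrap, e.g., an LDAP bind or search that may fail while
    Litestream is swapping the underlying DB file.
    """
    for attempt in range(attempts):
        try:
            return query()
        except TransientError:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
```

Most LDAP clients (PAM modules, apps) already retry or fail soft, so in practice this wrapper may only be needed for scripted callers.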
Background
When we think about replication, we are usually thinking about three situations: load balancing, failover, and localization. We localize access for speed; we fail over for availability; we load balance for performance.
I'm primarily concerned about localization. I have a non-trivial LAN of computers: a server running home automation and security; a server serving A/V media; two other micro computers piping media to speakers in areas of the house; and a NAS to which everything pushes its local backups. I'd like to have a LAN lldap master, so that none of these services need to reach out to the internet for authentication, and so everything still works if the WAN connection goes down.
I also have VPSes serving services I want to be able to access externally -- email, cal & carddav, mealie. I'd like to have the passwords synchronized between these two lldap instances. In my case, the external lldap would be strictly a slave: changes to that DB would either not be allowed, or would be abandoned.
I imagine I have three options:
I'm hoping for the third case, but this will only be safe if those constraints I laid out at the beginning hold: lldap depends entirely on the on-disk DB, and only caches data in memory during active requests; and lldap doesn't crash if there's a DB error (such as an indexed entry disappearing during a query).
If lldap is doing something like caching user IDs and skipping the DB call when there's an in-memory match, then I'd handle cases one and two the same way, and I'd probably just do the DB sync with rsync.
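One wrinkle with rsync-ing a live SQLite file is that it can produce a torn copy if a write lands mid-transfer. A hedged sketch of taking a consistent snapshot first via Python's `sqlite3` backup API, then shipping the snapshot instead (paths and schema here are hypothetical, not lldap's actual layout):

```python
import os
import sqlite3
import tempfile

# Hypothetical paths: point src_path at the live lldap SQLite file instead.
src_path = os.path.join(tempfile.mkdtemp(), "users.db")
dst_path = src_path + ".snapshot"

src = sqlite3.connect(src_path)
src.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
src.execute("INSERT INTO users (name) VALUES ('alice')")
src.commit()

# Connection.backup copies the DB page by page under SQLite's own locking,
# so the snapshot is consistent even if another writer is active.
dst = sqlite3.connect(dst_path)
src.backup(dst)
dst.close()
src.close()
# dst_path is now a torn-copy-safe file to hand to rsync.
```

The same effect is available from the shell via `sqlite3 <db> ".backup <snapshot>"`.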
Just for completeness: failover is probably the easiest to handle, and a clever set-up could even keep multiple instances acting as masters (although with exactly one active at a time), syncing the DBs based on timestamps. Load balancing, I think, is most like localization and would be handled the same way; if a stop/start were required, though, it would become a less attractive set-up.
TL;DR
Again, I'm mainly concerned about the localizing case: lldap instance in my LAN, replicating the DB to a VPS instance; and I believe the only question that I need answered is whether lldap will gracefully handle the DB being changed by an external program without having to be restarted.
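For that localizing case, a one-way LAN-to-VPS push with Litestream might be configured roughly like this. The paths, host, and replica type are hypothetical, and the exact key names should be checked against Litestream's configuration reference:

```yaml
# litestream.yml sketch -- one-way push from the LAN master to the VPS.
dbs:
  - path: /data/lldap/users.db          # live DB on the LAN "master"
    replicas:
      - type: sftp
        host: vps.example.com:22
        user: litestream
        key-path: /etc/litestream/id_ed25519
        path: /var/lib/lldap-replica
```

On the VPS side, the slave's copy would be materialized with `litestream restore`, which fits the "strictly a slave, changes abandoned" model above.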
@nitnelave commented on GitHub (Jan 28, 2025):
It might be better as a separate discussion, but LLDAP is made to be stateless, in order to have several instances running in parallel. As a result, an external program (e.g. another LLDAP) can modify the DB without warning during normal operation.
The only "race" I can think of is one instance modifying an entry while another deletes it, in which case an error is the only logical response (but also, you have 2 admins doing weird contradicting stuff at the same time!)
Anything else is guarded by transactions and should work seamlessly.
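That "an error is the only logical response" behavior for the modify-vs-delete race is also what SQLite's write locking gives you at the DB layer. A minimal sketch (throwaway DB, hypothetical schema) of two writers conflicting:

```python
import os
import sqlite3
import tempfile

# Throwaway DB standing in for lldap's SQLite file (hypothetical schema).
path = os.path.join(tempfile.mkdtemp(), "users.db")

# isolation_level=None: we manage transactions explicitly.
a = sqlite3.connect(path, timeout=0.1, isolation_level=None)
b = sqlite3.connect(path, timeout=0.1, isolation_level=None)
a.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# Writer A takes the write lock for its transaction...
a.execute("BEGIN IMMEDIATE")
a.execute("INSERT INTO users (name) VALUES ('alice')")

# ...so a conflicting writer B fails fast ("database is locked")
# instead of interleaving half-applied changes.
try:
    b.execute("BEGIN IMMEDIATE")
    conflict = False
except sqlite3.OperationalError:
    conflict = True

a.execute("COMMIT")
```

Once A commits, B's retry succeeds, which matches the "transient, retriable" failure model discussed earlier in the thread.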
@xxxserxxx commented on GitHub (Jan 29, 2025):
@nitnelave Excellent! Then, assuming that the implementation follows the intention and is indeed stateless, replication should be easy using external tools. The only caveat I can think of is that there should be only one master at a time between syncs, so we don't get internal DB ID conflicts... but, again, for simple cases such as fail-over and localization, this is perfect.
Thank you!