Mirror of https://github.com/lldap/lldap.git, synced 2026-04-26 00:36:01 +03:00
[GH-ISSUE #245] Query performance is 3x slower than OpenLDAP #85
Originally created by @ikaruswill on GitHub (Jul 13, 2022).
Original GitHub issue: https://github.com/lldap/lldap/issues/245
Context
I've been keeping an eye on LLDAP for a long time, as I'm quite done with LDAP tinkering. Really excited that it's now available on arm64.

My setup:
- `ldap-user-manager` as frontend
- `RFC2307BIS` schema (very similar to LLDAP's current schema)

Test
So the test I decided to run was with Authelia, comparing the two LDAP implementations.
Parameters
Results
OpenLDAP
An Authelia login POST request takes ~1055 ms, with the following timestamped logs in OpenLDAP (timestamps from Kubernetes):

We can see that all operations complete in roughly 340 ms.
LLDAP
An Authelia login POST request takes ~1509 ms, with the following timestamped verbose logs in LLDAP (timestamps from LLDAP):

We can see that all operations complete in roughly 1075 ms.
Summary
Analysis
Timestamps were taken with `kubectl logs --timestamps`.

Question
I understand that some performance difference is to be expected, since OpenLDAP is indeed incredibly fast and we're switching to SQLite when using LLDAP.
Still, I'm curious whether this performance difference is expected.
Considering my user DB only has a single row, I'd imagine it should have been much faster.
If it is expected, is there anything we can do to reduce the difference?
Notes and thoughts
- I looked at the `passwords_match` code, but I'm afraid I'm not well versed enough in Rust to profile it efficiently.
- The schema uses `uid` as the `PRIMARY KEY`, so I'm not sure where else it could be slow here.
- I queried the `users` table by `user_id` in the container on the node itself and it was pretty fast, so I can confirm this delay is not bottlenecked by I/O.

@nitnelave commented on GitHub (Jul 13, 2022):
Wow, that's an extensive bug report :)
I could have saved you a lot of time if you had asked first :D This is a consequence of the login protocol we use: the web UI uses the OPAQUE protocol to avoid sending the user's password over the network, instead providing a zero-knowledge proof that the password is correct. The security of the protocol relies partly on an expensive hashing function (argon2, configured for multiple passes) to prevent practical brute-force attacks if the DB leaks. The server has to compute a hash (and so does the client), so logging in is an expensive operation.
The situation is even slightly worse when logging in via LDAP, since the client simply provides the plaintext password. The server then has to act as both sides of the OPAQUE protocol to validate the password, computing two independent hashes.
The hashes have been configured to be relatively fast (0.3-0.6s) in order to make logging in fast enough, but still expensive enough that brute force is not really an option (security by default).
If you bench any other LDAP query after logging in, you'll see that LLDAP is much faster than OpenLDAP (you saw ~800 us latency). You can also look at the timings in the verbose logs (verbose=true in the config) to see where the time was spent in a query.
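The cost profile described above can be sketched with a memory-hard KDF from the Python standard library. This is a minimal illustration, not LLDAP's actual code: `hashlib.scrypt` stands in for argon2 (both are deliberately slow, memory-hard functions), and the parameter values below are assumptions chosen for the demo, not LLDAP's real configuration.

```python
import hashlib
import time

def slow_hash(password: bytes, salt: bytes) -> bytes:
    # scrypt as a stand-in for argon2: deliberately slow and memory-hard,
    # so each evaluation costs real CPU time and tens of MB of RAM.
    # Parameters here are demo values, not LLDAP's configuration.
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1,
                          maxmem=64 * 1024 * 1024, dklen=32)

salt = b"\x00" * 16  # fixed salt only for this demo; real code uses a random salt
start = time.perf_counter()
digest = slow_hash(b"hunter2", salt)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"one hash: {len(digest)} bytes in {elapsed_ms:.0f} ms")
```

For reference, the measured gap in the report above (~1075 ms for LLDAP vs ~340 ms for OpenLDAP, i.e. roughly 735 ms) is consistent with two extra hashes at ~370 ms each, in line with the 0.3-0.6 s per-hash figure quoted here.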
@ikaruswill commented on GitHub (Jul 13, 2022):
That's really cool. Thanks for taking the time as well to explain the architecture. Goes to show how much thought you've already invested in LLDAP. 👍
Based on what I understand, the server you mention here is non-LDAP-based, purely a Rust backend, and the client is bundled in the same image, providing the LDAP interface and translating it into a query/format the backend understands. Is that correct?
Then in OpenLDAP, there's only a bare server, so passwords are in plain text in the LDAP query. The server validates the password by comparing the hash from the query with the one in the data store. Perhaps the hash algorithm used (mine was crypt) is not as expensive as argon2, so there was some performance gain there at the expense of security and easier brute forcing.
Overall, is it right to say I should probably increase CPU limits on LLDAP to help with login latency?
@ikaruswill commented on GitHub (Jul 13, 2022):
Another question I have (out of curiosity): if the client and server of LLDAP both do the argon2 computation inside the container, and OpenLDAP also does both sides of the computation in its container, shouldn't login performance be comparable if they use the same algorithm?
@ikaruswill commented on GitHub (Jul 13, 2022):
One sec, I noticed you mentioned logins through the web UI. I was using Authelia, so logins go directly to the LDAP port, not through the web UI. Does your explanation still apply?
@nitnelave commented on GitHub (Jul 13, 2022):
The OPAQUE protocol is doing more than just comparing hashes, but I guess in terms of expensive computation costs it's similar :)
The LLDAP server is basically:
Most services will connect to LLDAP using the LDAP protocol, so it's "transparent" to them. Authelia sends the password in plain text using an LDAP login (bind) query to LLDAP. LLDAP then has to do both sides of the OPAQUE protocol, essentially computing both hashes (and yes, they're more expensive than crypt). Compared to OpenLDAP with a similarly configured argon2 hash, LLDAP is still probably going to be ~2x slower, since it has to compute two hashes for a single login.

When logging in directly to the LLDAP administration web UI, your browser does one part of the protocol in JS (well, actually in WASM) and the server does the other half, so each computes a single hash and it's twice as fast.
I'm not sure you can really reduce the login latency, since the hash algorithm is made to be slow, but you can try adding more CPU.
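As a toy cost model of the explanation above (my own numbers, assuming each OPAQUE side costs one hash; the 400 ms figure is an assumption within the 0.3-0.6 s range quoted earlier), the difference between an LDAP bind and a web UI login comes down to how many hashes the server itself computes:

```python
HASH_COST_MS = 400  # assumed per-hash cost; the thread quotes 0.3-0.6 s

def ldap_bind_server_cost() -> int:
    # Plaintext LDAP bind: the server plays both sides of OPAQUE,
    # so it computes both expensive hashes itself.
    return 2 * HASH_COST_MS

def web_ui_server_cost() -> int:
    # Web UI login: the browser (WASM) computes one hash and the
    # server computes the other, so the server does a single hash.
    return 1 * HASH_COST_MS

print(ldap_bind_server_cost(), web_ui_server_cost())  # 800 400
```

This is why an LDAP bind is roughly twice the server-side work of a web UI login, even though the total number of hashes per login is the same.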
@ikaruswill commented on GitHub (Jul 13, 2022):
👍 Got it, thanks again for clarifying.
I increased the CPU limits, but I realized that raising the memory limit (from 20M to 50M) helps with login much more than increasing CPU beyond 500m. Quite interesting behaviour; I thought I'd bring it to your attention since README.md mentions requirements of <10MB.

Nonetheless, for anyone else who's keen, here's what I eventually settled on for login performance comparable to OpenLDAP via the LDAP protocol:
Will close this issue now since it's by design. Thanks a lot!
@nitnelave commented on GitHub (Jul 13, 2022):
Yeah, the argon2 hash algorithm is also hard on the RAM. From a simple benchmark loop (`while true; do ldapsearch ...; done`) I found that memory usage goes up to 70MB. Less RAM will just make the login take longer, but it won't impact the rest of the performance.

@ikaruswill commented on GitHub (Jul 13, 2022):
That's the same observation I had as well.
Less RAM means higher latency for logins; all other operations remain the same. The 70MB number you gave is quite useful. Perhaps I'll set the limit a little higher to cater for memory spikes and avoid the performance tradeoff from RAM bottlenecking.
Have a good one! Thanks for building LLDAP.