[GH-ISSUE #1337] [BUG] OPAQUE password verification potentially blocks async runtime during bind operations #467

Open
opened 2026-02-27 08:17:26 +03:00 by kerem · 4 comments
Owner

Originally created by @PerArneng on GitHub (Oct 24, 2025).
Original GitHub issue: https://github.com/lldap/lldap/issues/1337

Describe the bug
When handling a bind in LoginHandler, the `password_match` function runs the OPAQUE cryptographic algorithm, which appears to be heavily CPU-bound. It looks like it should be wrapped in a `spawn_blocking` call so that it does not block the async thread pool.
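As an illustration of the pattern being proposed (not lldap's actual code: `check_password` below is a hypothetical stand-in for the OPAQUE `passwords_match`), the CPU-bound check can be moved off the calling thread onto a dedicated one. Under tokio, the same `move` closure would be handed to `tokio::task::spawn_blocking` and awaited instead of using `std::thread::spawn`:

```rust
use std::thread;

// Hypothetical stand-in for the CPU-bound OPAQUE verification in
// `passwords_match`; real OPAQUE involves expensive key-stretching work.
fn check_password(stored: &str, supplied: &str) -> bool {
    // Simulate CPU-bound work with a busy loop.
    let mut acc = 0u64;
    for _ in 0..1_000_000 {
        acc = acc.wrapping_add(31);
    }
    let _ = acc;
    stored == supplied
}

// Offload the check so the calling (async) thread is not blocked.
// With tokio this would be: task::spawn_blocking(move || ...).await
fn offload_check(stored: String, supplied: String) -> bool {
    thread::spawn(move || check_password(&stored, &supplied))
        .join()
        .expect("verification thread panicked")
}

fn main() {
    let ok = offload_check("hunter2".into(), "hunter2".into());
    println!("match: {ok}");
}
```

The key point is the `move` closure: the password material is transferred to the worker, and the async executor's threads stay free to service other requests while the hash runs.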

To Reproduce
It's a performance issue, so you would need to run benchmarks and compare. It's hard to reproduce directly, because the overhead is always there.

Expected behavior
Faster performance when doing bind with multiple parallel requests.

Logs
N/A

Additional context
We had some performance issues between lldap and a third-party system that behaved erratically. After some research we found that the OPAQUE method was in use and seemed very CPU-hungry; reading about it on the lldap GitHub page confirmed that this is expected.

It's even more visible when the core count is low, e.g. when running in a Kubernetes environment with CPU limits or on low-core nodes. One of our third-party systems started to spam lldap with lots of binds on every normal request instead of reusing sessions (crazy, I know), which is what led us to investigate. In a normal scenario you might not see this issue often, but moving the OPAQUE work off the async thread pool might give a bit of a performance boost when dealing with multiple parallel requests.

Disclaimer: I'm pretty novice when it comes to the low-level details of async Rust runtimes, so I could be mistaken here; if so, please just close the bug.

Author
Owner

@nitnelave commented on GitHub (Oct 24, 2025):

Would it be simpler to increase the number of worker threads? Maybe something like the number of CPUs would work. It sounds like that would get most of the benefit of parallelization while adding little complexity and memory overhead, especially on smaller systems (e.g. a Raspberry Pi).

Author
Owner

@PerArneng commented on GitHub (Oct 24, 2025):

It could be that it helps by increasing worker threads. Is there an option for that?

The code for doing spawn_blocking would look something like this:

```rust
let opaque_setup = self.opaque_setup.clone();
let password = request.password;
let username = request.name.clone();
if task::spawn_blocking(move || {
    passwords_match(&password_hash, &password, &opaque_setup, &username)
})
.await
.map_err(|e| DomainError::InternalError(e.to_string()))?
// ...
```

This GitHub issue got ID 1337 🤓🎊

Author
Owner

@nitnelave commented on GitHub (Oct 25, 2025):

You can try changing the code locally in server/src/main.rs, where we set the number of workers. Currently it's not configurable, but making it equal to the number of CPU threads makes sense, I think.
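For reference, the worker count can be derived with the standard library alone via `std::thread::available_parallelism`, which on Linux also takes cgroup CPU quotas into account (relevant to the Kubernetes-limits scenario above). How the value is wired into the actix server builder is shown only as a comment, since the exact call site in server/src/main.rs isn't reproduced here:

```rust
use std::thread;

// One worker per available core, falling back to 1 if the count
// cannot be determined.
fn worker_count() -> usize {
    thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1)
}

fn main() {
    let n = worker_count();
    // With actix-web this would feed the server builder, roughly:
    //   HttpServer::new(app_factory).workers(n).bind(addr)?.run()
    println!("workers: {n}");
}
```

Note that more workers only helps up to the number of cores actually granted to the process; past that point, CPU-bound OPAQUE verifications still contend for the same cores.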

Author
Owner

@PerArneng commented on GitHub (Oct 25, 2025):

Thanks, will try that and see if it has any effect 🤩
