[GH-ISSUE #405] Login takes several attempts #155

Closed
opened 2026-02-27 08:15:34 +03:00 by kerem · 13 comments

Originally created by @poVoq on GitHub (Jan 3, 2023).
Original GitHub issue: https://github.com/lldap/lldap/issues/405

I have this strange issue that logging into the web-interface always takes several attempts.

Initially I thought it was a config issue as I just could not log in, but when randomly trying multiple times to click on the login button it suddenly worked.

Usually it takes 2-5 attempts before the login works.

I am using the :latest official docker image with Podman behind an Nginx reverse-proxy and otherwise everything seems to be working fine.


@nitnelave commented on GitHub (Jan 3, 2023):

That is strange. Could you restart LLDAP with verbose logging (`verbose = true` in the config) and try to log in until it succeeds, then post the logs?
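For reference, a minimal sketch of where that flag lives, assuming the standard `lldap_config.toml` layout (verify the key name against the sample config shipped with your lldap version):

```toml
# lldap_config.toml -- enable debug-level logging.
# In a container setup you can instead set the LLDAP_VERBOSE=true environment
# variable (following lldap's LLDAP_ prefix convention; confirm for your image).
verbose = true
```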


@poVoq commented on GitHub (Jan 3, 2023):

Yes I just did that. I guess this might be the problem?

```
thread 'actix-rt|system:0|arbiter:0' panicked at 'negative last_insert_rowid', /__w/lldap/lldap/${GITHUB_WORKSPACE}/.cargo/registry/src/github.com-1ecc6299db9ec823/sea-orm-0.10.3/src/executor/execute.rs:43:21
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```

@nitnelave commented on GitHub (Jan 3, 2023):

Oooh, that seems really wrong... Can you post the backtrace? I.e. run it with `RUST_BACKTRACE=1`.
Also, just to help debugging, would you be able to share your LLDAP config and/or anything particular about the database? Which user you're trying to log in as, which groups it's a member of, ...


@poVoq commented on GitHub (Jan 3, 2023):

It's a fresh install with a standard config and only two users. Nothing special at all.

I'll try starting the container with the backtrace env and report back.


@poVoq commented on GitHub (Jan 3, 2023):

The backtrace:

```
thread 'actix-rt|system:0|arbiter:0' panicked at 'negative last_insert_rowid', /__w/lldap/lldap/${GITHUB_WORKSPACE}/.cargo/registry/src/github.com-1ecc6299db9ec823/sea-orm-0.10.3/src/executor/execute.rs:43:21
stack backtrace:
   0: rust_begin_unwind
             at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panicking.rs:584:5
   1: core::panicking::panic_fmt
             at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/panicking.rs:142:14
   2: sea_orm::executor::execute::ExecResult::last_insert_id
   3: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
   4: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
   5: <tracing::instrument::Instrumented<T> as core::future::future::Future>::poll
   6: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
   7: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
   8: <tracing::instrument::Instrumented<T> as core::future::future::Future>::poll
   9: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  10: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  11: <tracing::instrument::Instrumented<T> as core::future::future::Future>::poll
  12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  13: <actix_web::handler::HandlerServiceFuture<F,T,R> as core::future::future::Future>::poll
  14: <core::pin::Pin<P> as core::future::future::Future>::poll
  15: <core::pin::Pin<P> as core::future::future::Future>::poll
  16: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  17: <tracing_futures::Instrumented<T> as core::future::future::Future>::poll
  18: actix_http::h1::dispatcher::InnerDispatcher<T,S,B,X,U>::poll_response
  19: <actix_http::h1::dispatcher::Dispatcher<T,S,B,X,U> as core::future::future::Future>::poll
  20: <actix_http::service::HttpServiceHandlerResponse<T,S,B,X,U> as core::future::future::Future>::poll
  21: <actix_service::and_then::AndThenServiceResponse<A,B,Req> as core::future::future::Future>::poll
  22: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  23: tokio::runtime::task::harness::Harness<T,S>::poll
  24: std::thread::local::LocalKey<T>::with
  25: tokio::task::local::LocalSet::tick
  26: tokio::macros::scoped_tls::ScopedKey<T>::set
  27: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  28: std::thread::local::LocalKey<T>::with
  29: tokio::runtime::basic_scheduler::Context::enter
  30: tokio::macros::scoped_tls::ScopedKey<T>::set
  31: tokio::runtime::Runtime::block_on
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
```

@nitnelave commented on GitHub (Jan 3, 2023):

What are the last debug logs from LLDAP? I'm trying to determine which query actually caused it.


@poVoq commented on GitHub (Jan 3, 2023):

Here is the full log output: https://pastebin.com/gseymnXX


@nitnelave commented on GitHub (Jan 3, 2023):

Thanks. Last thing, can you open `users.db` with `sqlite3` and run `pragma table_info(jwt_refresh_storage);`?


@poVoq commented on GitHub (Jan 3, 2023):

Yes and then?

```
sqlite> pragma table_info(jwt_refresh_storage);
0|refresh_token_hash|INTEGER|1||1
1|user_id|text(255)|1||0
2|expiry_date|TEXT|1||0
```

@nitnelave commented on GitHub (Jan 3, 2023):

Thanks. I believe that it's an issue with the ORM we're using: https://github.com/SeaQL/sea-orm/issues/1357
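To see why this points at the ORM rather than the database: SQLite itself is perfectly happy storing and reporting a negative `INTEGER PRIMARY KEY`. A minimal sketch against the `jwt_refresh_storage` shape from the pragma output above (the `-42` hash and the user/date values are made up for illustration):

```python
import sqlite3

# Reproduce the table shape reported by `pragma table_info(jwt_refresh_storage)`.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE jwt_refresh_storage (
           refresh_token_hash INTEGER PRIMARY KEY,
           user_id TEXT(255) NOT NULL,
           expiry_date TEXT NOT NULL
       )"""
)

# Insert a row whose primary key is negative. SQLite accepts it without
# complaint; the panic in the issue came from sea-orm 0.10.3 rejecting a
# negative last_insert_rowid(), not from SQLite.
conn.execute(
    "INSERT INTO jwt_refresh_storage VALUES (?, ?, ?)",
    (-42, "alice", "2023-01-10T00:00:00Z"),
)
rowid = conn.execute("SELECT last_insert_rowid()").fetchone()[0]
print(rowid)  # -42: SQLite reports the negative rowid as-is
```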


@nitnelave commented on GitHub (Jan 3, 2023):

For context, you're hitting the panic 50% of the time, when the hash maps to a negative signed value instead of a positive one. So yes, keep trying to log in :/
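The 50% figure follows from the storage scheme: the table keys rows by a 64-bit hash of the refresh token in a signed `INTEGER` column, and roughly half of a uniform 64-bit hash space has the top bit set. A quick sketch of the arithmetic (`sha3_256` stands in here and is not necessarily the hash lldap uses; the token strings are invented):

```python
import hashlib

def token_rowid(token: str) -> int:
    """Hash a token to a u64, then reinterpret it as SQLite's signed i64 rowid."""
    unsigned = int.from_bytes(hashlib.sha3_256(token.encode()).digest()[:8], "big")
    # Two's-complement reinterpretation: values with the top bit set go negative.
    return unsigned - (1 << 64) if unsigned >= (1 << 63) else unsigned

# Count how many of 10,000 sample tokens map to a negative rowid.
samples = [f"refresh-token-{i}" for i in range(10_000)]
negative = sum(1 for t in samples if token_rowid(t) < 0)
print(f"{negative} of {len(samples)} hashes map to a negative rowid")
# A uniform hash sets the top bit about half the time, matching the
# observed ~50% chance of hitting the panic on each login attempt.
```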


@nitnelave commented on GitHub (Jan 24, 2023):

Alright, I worked with the sea-orm authors on a PR, and they cut a release that fixes this issue. I've got a PR out to update the dependency.


@poVoq commented on GitHub (Jan 25, 2023):

Works great now! Thanks!
