mirror of
https://github.com/jwadow/kiro-gateway.git
synced 2026-04-25 01:15:57 +03:00
[GH-ISSUE #43] Token expires after ~1 hour; gateway stops working until kiro-cli is reopened #28
Originally created by @code-pumpkin on GitHub (Jan 18, 2026).
Original GitHub issue: https://github.com/jwadow/kiro-gateway/issues/43
Observed behavior
The gateway works correctly at startup.
After ~1 hour, requests begin failing due to an invalid/expired token.
At this point, the gateway does not automatically refresh the token.
If I briefly open/run kiro-cli, the token is refreshed and the gateway immediately starts working again.
Token storage details
Using the kiro.json file from the .aws directory does not work at all.
This makes sense, as the file appears to be a one-time configuration.
The file is never updated after initial creation (verified via file timestamp).
Using SQLite3 works initially, but the same issue occurs after ~1 hour when the token expires.
What can I do to fix this?
Are there any settings I'm missing that would make it work correctly?
@jwadow commented on GitHub (Jan 19, 2026):
Hi, @code-pumpkin, thanks for the report.
I'm having trouble reproducing this - my JSON setup has been running for weeks without issues, even with days between requests.
To figure out what's going on with your SQLite setup, I need to understand the exact scenario:
Question 1: What fixes it?
Question 2: Is kiro-cli running in the background?
Question 3: Exact reproduction steps
Which scenario matches yours?
Scenario A (kiro-cli closed):
Scenario B (kiro-cli running):
Optional: Debug logs
If you can, add to your .env file:
```
LOG_LEVEL="DEBUG"
```
Then reproduce the issue and share the logs from that one failed request (just 10-20 lines showing the error).
Without these details, I can't figure out why SQLite mode fails while JSON mode works fine.
@code-pumpkin commented on GitHub (Jan 19, 2026):
Hi there @jwadow,
Here are the details.
Option 4 – JSON credentials file (Not working)
Behavior
Gateway starts successfully.
Models fail to load from Kiro API and fall back to hidden models.
Any request to /v1/chat/completions fails with repeated retries and eventually returns HTTP 504.
Option 3 – SQLite credentials (Mostly works, but tokens expire)
Behavior
Gateway works correctly for ~1–2 hours.
After token expiration, all requests fail with HTTP 500.
Gateway cannot refresh the token on its own.
Restarting the gateway does not fix it.
Opening kiro-cli (even briefly) refreshes the token and immediately fixes the issue once the gateway is restarted.
```
2026-01-19 02:53:27 | INFO | kiro.routes_openai:stream_wrapper:309 - HTTP 200 - POST /v1/chat/completions (streaming) - completed
2026-01-19 02:53:27 | INFO | kiro.routes_openai:chat_completions:172 - Request to /v1/chat/completions (model=auto, stream=True)
2026-01-19 02:53:31 | INFO | logging:callHandlers:1737 - 127.0.0.1:51558 - "POST /v1/chat/completions HTTP/1.1" 200
2026-01-19 02:53:35 | INFO | kiro.routes_openai:stream_wrapper:309 - HTTP 200 - POST /v1/chat/completions (streaming) - completed
2026-01-19 02:53:35 | INFO | kiro.routes_openai:chat_completions:172 - Request to /v1/chat/completions (model=auto, stream=True)
2026-01-19 02:53:39 | INFO | logging:callHandlers:1737 - 127.0.0.1:51558 - "POST /v1/chat/completions HTTP/1.1" 200
2026-01-19 02:53:44 | INFO | kiro.routes_openai:stream_wrapper:309 - HTTP 200 - POST /v1/chat/completions (streaming) - completed
2026-01-19 02:53:44 | INFO | kiro.routes_openai:chat_completions:172 - Request to /v1/chat/completions (model=auto, stream=True)
2026-01-19 02:53:48 | INFO | logging:callHandlers:1737 - 127.0.0.1:51558 - "POST /v1/chat/completions HTTP/1.1" 200
2026-01-19 02:53:51 | INFO | kiro.routes_openai:stream_wrapper:309 - HTTP 200 - POST /v1/chat/completions (streaming) - completed
2026-01-19 02:53:51 | INFO | kiro.routes_openai:chat_completions:172 - Request to /v1/chat/completions (model=auto, stream=True)
2026-01-19 02:53:55 | INFO | logging:callHandlers:1737 - 127.0.0.1:51558 - "POST /v1/chat/completions HTTP/1.1" 200
2026-01-19 02:54:45 | INFO | kiro.routes_openai:stream_wrapper:309 - HTTP 200 - POST /v1/chat/completions (streaming) - completed
2026-01-19 02:54:45 | INFO | kiro.routes_openai:chat_completions:172 - Request to /v1/chat/completions (model=auto, stream=True)
2026-01-19 02:54:49 | INFO | logging:callHandlers:1737 - 127.0.0.1:51558 - "POST /v1/chat/completions HTTP/1.1" 200
2026-01-19 02:55:09 | INFO | kiro.routes_openai:stream_wrapper:309 - HTTP 200 - POST /v1/chat/completions (streaming) - completed
2026-01-19 04:48:43 | INFO | kiro.routes_openai:chat_completions:172 - Request to /v1/chat/completions (model=auto, stream=True)
2026-01-19 04:48:43 | INFO | kiro.auth:_load_credentials_from_sqlite:255 - Credentials loaded from SQLite database: ~/.local/share/kiro-cli/data.sqlite3
2026-01-19 04:48:43 | INFO | kiro.auth:_do_aws_sso_oidc_refresh:521 - Refreshing Kiro token via AWS SSO OIDC...
2026-01-19 04:48:43 | ERROR | kiro.auth:_do_aws_sso_oidc_refresh:550 - AWS SSO OIDC refresh failed: status=400, body={"error":"invalid_request","error_description":"Invalid request","reason":null}
2026-01-19 04:48:43 | ERROR | kiro.auth:_do_aws_sso_oidc_refresh:557 - AWS SSO OIDC error details: error=invalid_request, description=Invalid request
2026-01-19 04:48:43 | WARNING | kiro.auth:_refresh_token_aws_sso_oidc:497 - Token refresh failed with 400, reloading credentials from SQLite and retrying...
2026-01-19 04:48:43 | INFO | kiro.auth:_load_credentials_from_sqlite:255 - Credentials loaded from SQLite database: ~/.local/share/kiro-cli/data.sqlite3
2026-01-19 04:48:43 | INFO | kiro.auth:_do_aws_sso_oidc_refresh:521 - Refreshing Kiro token via AWS SSO OIDC...
2026-01-19 04:48:44 | ERROR | kiro.auth:_do_aws_sso_oidc_refresh:550 - AWS SSO OIDC refresh failed: status=400, body={"error":"invalid_request","error_description":"Invalid request","reason":null}
2026-01-19 04:48:44 | ERROR | kiro.auth:_do_aws_sso_oidc_refresh:557 - AWS SSO OIDC error details: error=invalid_request, description=Invalid request
2026-01-19 04:48:44 | WARNING | kiro.auth:get_access_token:624 - Token refresh failed with 400 after SQLite reload. This may happen if kiro-cli refreshed tokens in memory without persisting.
2026-01-19 04:48:44 | ERROR | kiro.routes_openai:chat_completions:353 - Internal error: Token expired and refresh failed. Please run 'kiro-cli login' to refresh your credentials.
2026-01-19 04:48:44 | ERROR | kiro.routes_openai:chat_completions:355 - HTTP 500 - POST /v1/chat/completions - Token expired and refresh failed. Please run 'kiro-cli login' to refresh your credentials.
2026-01-19 04:48:44 | INFO | logging:callHandlers:1737 - 127.0.0.1:43578 - "POST /v1/chat/completions HTTP/1.1" 500
2026-01-19 04:48:44 | INFO | kiro.routes_openai:chat_completions:172 - Request to /v1/chat/completions (model=gpt-4.1-mini, stream=True)
2026-01-19 04:48:44 | INFO | kiro.auth:_load_credentials_from_sqlite:255 - Credentials loaded from SQLite database: ~/.local/share/kiro-cli/data.sqlite3
2026-01-19 04:48:44 | INFO | kiro.auth:_do_aws_sso_oidc_refresh:521 - Refreshing Kiro token via AWS SSO OIDC...
2026-01-19 04:48:44 | ERROR | kiro.auth:_do_aws_sso_oidc_refresh:550 - AWS SSO OIDC refresh failed: status=400, body={"error":"invalid_request","error_description":"Invalid request","reason":null}
2026-01-19 04:48:44 | ERROR | kiro.auth:_do_aws_sso_oidc_refresh:557 - AWS SSO OIDC error details: error=invalid_request, description=Invalid request
2026-01-19 04:48:44 | WARNING | kiro.auth:_refresh_token_aws_sso_oidc:497 - Token refresh failed with 400, reloading credentials from SQLite and retrying...
2026-01-19 04:48:44 | INFO | kiro.auth:_load_credentials_from_sqlite:255 - Credentials loaded from SQLite database: ~/.local/share/kiro-cli/data.sqlite3
2026-01-19 04:48:44 | INFO | kiro.auth:_do_aws_sso_oidc_refresh:521 - Refreshing Kiro token via AWS SSO OIDC...
2026-01-19 04:48:44 | ERROR | kiro.auth:_do_aws_sso_oidc_refresh:550 - AWS SSO OIDC refresh failed: status=400, body={"error":"invalid_request","error_description":"Invalid request","reason":null}
2026-01-19 04:48:44 | ERROR | kiro.auth:_do_aws_sso_oidc_refresh:557 - AWS SSO OIDC error details: error=invalid_request, description=Invalid request
2026-01-19 04:48:44 | WARNING | kiro.auth:get_access_token:624 - Token refresh failed with 400 after SQLite reload. This may happen if kiro-cli refreshed tokens in memory without persisting.
2026-01-19 04:48:44 | ERROR | kiro.routes_openai:chat_completions:353 - Internal error: Token expired and refresh failed. Please run 'kiro-cli login' to refresh your credentials.
2026-01-19 04:48:44 | ERROR | kiro.routes_openai:chat_completions:355 - HTTP 500 - POST /v1/chat/completions - Token expired and refresh failed. Please run 'kiro-cli login' to refresh your credentials.
2026-01-19 04:48:44 | INFO | logging:callHandlers:1737 - 127.0.0.1:43578 - "POST /v1/chat/completions HTTP/1.1" 500
```
Answers to Your Questions
Opening kiro-cli fixes it temporarily.
Restarting the gateway alone does not fix it.
The only reliable fix is:
Stop the gateway
Open kiro-cli (login happens automatically)
Close kiro-cli
Restart the gateway
This suggests kiro-cli refreshes or rehydrates tokens.
Not intentionally.
I don’t have any terminal or process running kiro-cli.
It’s possible something background-related exists, but from my side there’s no visible running instance.
This scenario matches my setup exactly:
Run kiro-cli and log in
Close kiro-cli
Start the gateway → works fine
Wait ~1–2 hours
Requests start failing (token refresh fails)
Restarting gateway alone does not help
To fix:
Stop gateway
Open kiro-cli, then close it
Restart gateway → everything works again
Additional info.
My AWS SSO / OIDC setup is in ca-central-1, not us-east-1.
I noticed region handling is already accounted for in the code, but mentioning it just in case.
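The region split mentioned above (SSO region for the OIDC endpoint, API region fixed at us-east-1, as shown in the debug logs) can be sketched as a tiny helper; the function name is illustrative, not from the gateway's code:

```python
API_REGION = "us-east-1"  # Kiro API region stays fixed, per the gateway's debug logs

def oidc_token_url(sso_region: str) -> str:
    """The AWS SSO OIDC token endpoint lives in the user's SSO region."""
    return f"https://oidc.{sso_region}.amazonaws.com/token"

# Matches the URL seen in the debug logs for a ca-central-1 setup:
# oidc_token_url("ca-central-1") == "https://oidc.ca-central-1.amazonaws.com/token"
```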
@code-pumpkin commented on GitHub (Jan 19, 2026):
Additional Log
```
2026-01-19 10:25:16 | INFO | __main__::587 - Starting Uvicorn server on 0.0.0.0:8000...
2026-01-19 10:25:16 | INFO | logging:callHandlers:1737 - Started server process [1468452]
2026-01-19 10:25:16 | INFO | logging:callHandlers:1737 - Waiting for application startup.
2026-01-19 10:25:16 | INFO | main:lifespan:287 - Starting application... Creating state managers.
2026-01-19 10:25:16 | INFO | main:lifespan:309 - Shared HTTP client created with connection pooling
2026-01-19 10:25:16 | DEBUG | kiro.auth:_load_credentials_from_sqlite:217 - SSO region from SQLite: ca-central-1 (API stays at us-east-1)
2026-01-19 10:25:16 | INFO | kiro.auth:_load_credentials_from_sqlite:255 - Credentials loaded from SQLite database: ~/.local/share/kiro-cli/data.sqlite3
2026-01-19 10:25:16 | INFO | kiro.auth:_detect_auth_type:171 - Detected auth type: AWS SSO OIDC (kiro-cli)
2026-01-19 10:25:16 | INFO | main:lifespan:327 - Loading models from Kiro API...
2026-01-19 10:25:16 | DEBUG | kiro.auth:get_access_token:610 - SQLite mode: reloading credentials before refresh attempt
2026-01-19 10:25:16 | DEBUG | kiro.auth:_load_credentials_from_sqlite:217 - SSO region from SQLite: ca-central-1 (API stays at us-east-1)
2026-01-19 10:25:16 | INFO | kiro.auth:_load_credentials_from_sqlite:255 - Credentials loaded from SQLite database: ~/.local/share/kiro-cli/data.sqlite3
2026-01-19 10:25:16 | INFO | kiro.auth:_do_aws_sso_oidc_refresh:521 - Refreshing Kiro token via AWS SSO OIDC...
2026-01-19 10:25:16 | DEBUG | kiro.auth:_do_aws_sso_oidc_refresh:541 - AWS SSO OIDC refresh request: url=https://oidc.ca-central-1.amazonaws.com/token, sso_region=ca-central-1, api_region=us-east-1, client_id=LfE1Nt4L...
2026-01-19 10:25:17 | ERROR | kiro.auth:_do_aws_sso_oidc_refresh:550 - AWS SSO OIDC refresh failed: status=400, body={"error":"invalid_request","error_description":"Invalid request","reason":null}
2026-01-19 10:25:17 | ERROR | kiro.auth:_do_aws_sso_oidc_refresh:557 - AWS SSO OIDC error details: error=invalid_request, description=Invalid request
2026-01-19 10:25:17 | WARNING | kiro.auth:_refresh_token_aws_sso_oidc:497 - Token refresh failed with 400, reloading credentials from SQLite and retrying...
2026-01-19 10:25:17 | DEBUG | kiro.auth:_load_credentials_from_sqlite:217 - SSO region from SQLite: ca-central-1 (API stays at us-east-1)
2026-01-19 10:25:17 | INFO | kiro.auth:_load_credentials_from_sqlite:255 - Credentials loaded from SQLite database: ~/.local/share/kiro-cli/data.sqlite3
2026-01-19 10:25:17 | INFO | kiro.auth:_do_aws_sso_oidc_refresh:521 - Refreshing Kiro token via AWS SSO OIDC...
2026-01-19 10:25:17 | DEBUG | kiro.auth:_do_aws_sso_oidc_refresh:541 - AWS SSO OIDC refresh request: url=https://oidc.ca-central-1.amazonaws.com/token, sso_region=ca-central-1, api_region=us-east-1, client_id=LfE1Nt4L...
2026-01-19 10:25:17 | ERROR | kiro.auth:_do_aws_sso_oidc_refresh:550 - AWS SSO OIDC refresh failed: status=400, body={"error":"invalid_request","error_description":"Invalid request","reason":null}
2026-01-19 10:25:17 | ERROR | kiro.auth:_do_aws_sso_oidc_refresh:557 - AWS SSO OIDC error details: error=invalid_request, description=Invalid request
2026-01-19 10:25:17 | WARNING | kiro.auth:get_access_token:624 - Token refresh failed with 400 after SQLite reload. This may happen if kiro-cli refreshed tokens in memory without persisting.
2026-01-19 10:25:17 | WARNING | main:lifespan:355 - Failed to fetch models from Kiro API: Token expired and refresh failed. Please run 'kiro-cli login' to refresh your credentials.
2026-01-19 10:25:17 | WARNING | main:lifespan:356 - Server will start with hidden models only (fallback mode)
2026-01-19 10:25:17 | DEBUG | kiro.cache:add_hidden_model:127 - Added hidden model: claude-3.7-sonnet → CLAUDE_3_7_SONNET_20250219_V1_0
2026-01-19 10:25:17 | DEBUG | main:lifespan:363 - Added 1 hidden models to cache
2026-01-19 10:25:17 | INFO | main:lifespan:367 - Model cache ready: 1 models total
2026-01-19 10:25:17 | INFO | main:lifespan:374 - Model resolver initialized
2026-01-19 10:25:17 | INFO | logging:callHandlers:1737 - Application startup complete.
2026-01-19 10:25:17 | INFO | logging:callHandlers:1737 - Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
2026-01-19 10:26:15 | INFO | kiro.routes_openai:chat_completions:172 - Request to /v1/chat/completions (model=auto, stream=True)
2026-01-19 10:26:15 | DEBUG | kiro.converters_openai:build_kiro_payload:269 - Converting OpenAI request: model=auto -> auto, messages=1, tools=17, system_prompt_length=6441
2026-01-19 10:26:15 | DEBUG | kiro.converters_core:inject_thinking_tags:333 - Injecting fake reasoning tags with max_tokens=4000
2026-01-19 10:26:15 | DEBUG | kiro.auth:get_access_token:610 - SQLite mode: reloading credentials before refresh attempt
2026-01-19 10:26:15 | DEBUG | kiro.auth:_load_credentials_from_sqlite:217 - SSO region from SQLite: ca-central-1 (API stays at us-east-1)
2026-01-19 10:26:15 | INFO | kiro.auth:_load_credentials_from_sqlite:255 - Credentials loaded from SQLite database: ~/.local/share/kiro-cli/data.sqlite3
2026-01-19 10:26:15 | DEBUG | kiro.auth:get_access_token:614 - SQLite reload provided fresh token, no refresh needed
2026-01-19 10:26:15 | DEBUG | kiro.http_client:request_with_retry:216 - Sending request to Kiro API...
2026-01-19 10:26:17 | DEBUG | kiro.streaming_core:parse_kiro_stream:147 - Thinking parser initialized with mode: as_reasoning_content
2026-01-19 10:26:17 | DEBUG | kiro.streaming_core:parse_kiro_stream:155 - Waiting for first token (timeout=15.0s)...
2026-01-19 10:26:17 | DEBUG | kiro.streaming_core:parse_kiro_stream:160 - First token received
2026-01-19 10:26:17 | DEBUG | kiro.thinking_parser:_handle_pre_content:192 - Thinking tag '' detected. Transitioning to IN_THINKING.
2026-01-19 10:26:17 | INFO | logging:callHandlers:1737 - 127.0.0.1:54112 - "POST /v1/chat/completions HTTP/1.1" 200
2026-01-19 10:26:19 | DEBUG | kiro.thinking_parser:_process_thinking_buffer:284 - Closing tag '' found. Transitioning to STREAMING.
2026-01-19 10:26:20 | DEBUG | kiro.parsers:_finalize_tool_call:404 - Finalizing tool call 'list_directory' with raw arguments: '{"path": "devCode"}'
2026-01-19 10:26:20 | DEBUG | kiro.parsers:_finalize_tool_call:412 - Tool 'list_directory' arguments parsed successfully: ['path']
2026-01-19 10:26:20 | DEBUG | kiro.parsers:_finalize_tool_call:404 - Finalizing tool call 'list_directory' with raw arguments: ''
2026-01-19 10:26:20 | DEBUG | kiro.parsers:_finalize_tool_call:434 - Tool 'list_directory' has empty arguments string (will be deduplicated)
2026-01-19 10:26:20 | DEBUG | kiro.streaming_core:parse_kiro_stream:208 - Thinking block processing completed
2026-01-19 10:26:20 | DEBUG | kiro.parsers:deduplicate_tool_calls:206 - Deduplicated tool calls: 2 -> 1
2026-01-19 10:26:21 | DEBUG | kiro.tokenizer:_get_encoding:62 - [Tokenizer] Initialized tiktoken with cl100k_base encoding
2026-01-19 10:26:21 | DEBUG | kiro.streaming_openai:stream_kiro_to_openai_internal:225 - Processing 1 tool calls for streaming response
2026-01-19 10:26:21 | DEBUG | kiro.streaming_openai:stream_kiro_to_openai_internal:237 - Tool call [0] 'list_directory': id=tooluse_nZ3R2A3FQ_evM11Vgmp5rw, args_length=19
2026-01-19 10:26:21 | DEBUG | kiro.streaming_openai:stream_kiro_to_openai_internal:281 - [Usage] auto: prompt_tokens=9013 (subtraction), completion_tokens=128 (tiktoken), total_tokens=9141 (API Kiro)
2026-01-19 10:26:21 | DEBUG | kiro.streaming_openai:stream_kiro_to_openai_internal:319 - Streaming completed successfully
2026-01-19 10:26:21 | INFO | kiro.routes_openai:stream_wrapper:309 - HTTP 200 - POST /v1/chat/completions (streaming) - completed
2026-01-19 10:26:21 | INFO | kiro.routes_openai:chat_completions:172 - Request to /v1/chat/completions (model=auto, stream=True)
```
I waited for the existing tokens to expire and then ran the gateway.
Initially, requests failed as expected once the token was expired. While the gateway was still running and failing, I opened kiro-cli in a separate process and sent a request. After doing that, requests through the gateway immediately started working without restarting the gateway.
@jwadow commented on GitHub (Jan 19, 2026):
@code-pumpkin
Thanks for the detailed logs.
I found a potential issue in the code. The gateway successfully refreshes tokens via AWS SSO OIDC, but doesn't save them back to SQLite. Because of this, on the next refresh it loads old tokens from the database and tries to use an already invalidated refresh_token.
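The missing persistence step might look roughly like this, assuming kiro-cli keeps credentials as JSON values in a key/value table (the `auth_kv` table name and the update query here are illustrative guesses, not kiro-cli's actual schema):

```python
import json
import sqlite3

def save_refreshed_token(db_path: str, key: str, creds: dict) -> None:
    """Persist refreshed credentials so the next SQLite reload sees the
    new refresh_token instead of the already-invalidated old one."""
    conn = sqlite3.connect(db_path)
    try:
        # 'auth_kv' is a stand-in for whatever key/value table kiro-cli uses.
        conn.execute(
            "UPDATE auth_kv SET value = ? WHERE key = ?",
            (json.dumps(creds), key),
        )
        conn.commit()
    finally:
        conn.close()
```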
What changed:
The main problem about invalid_request from AWS:
I checked the request format - it's fully compliant with OAuth 2.0 spec. Maybe AWS SSO OIDC just uses invalid_request as a catch-all error instead of the proper invalid_grant for expired/revoked tokens? I think this is their quirk, not a bug in our code?
About ca-central-1 region:
The code correctly handles different regions - SSO region is used for the OIDC endpoint, while API region stays us-east-1. This is normal.
What to try:
Update to the latest code (`git pull` or a fresh clone). Honestly, I'm 50/50 confident this will fix the issue, but without access to your environment I can't guarantee 100%. If it doesn't help, we'll dig deeper; it could be something specific to the ca-central-1 region or to how kiro-cli writes to SQLite.
I don't have access to the CLI, and working on this project showed me that Amazon seems too lazy to write proper error codes, so I wouldn't be surprised if `invalid_request` equals `invalid_grant`. In any case, I've now fixed the architectural error in the code, but I can't say whether it fixed your original problem.
And asking you to catch the refresh request to accurately determine what it looks like is too complicated.
P.S. About that weird DNS error with JSON credentials:
I noticed in your first logs you tried using JSON credentials from ~/.aws/sso/cache/kiro-auth-token.json and got '[Errno -2] Name or service not known' when trying to reach prod.us-east-1.auth.desktop.kiro.dev.
This is really strange. I'm literally in the most sanctioned country in the world (Russia), and this URL works perfectly fine for me - even without VPN. I can access it from my browser, from Python, from everywhere. So it's bizarre that your system can't resolve it.
Could you try running this in your Python environment to check if DNS resolution works?
On Linux/macOS:
```
python3 -c 'import socket; print(socket.gethostbyname("prod.us-east-1.auth.desktop.kiro.dev"))'
```
On Windows:
```
python -c "import socket; print(socket.gethostbyname('prod.us-east-1.auth.desktop.kiro.dev'))"
```
If this fails, it might be something specific with your DNS resolver, firewall, or Python's network stack. But honestly, since your kiro-cli works fine, I'd just stick with SQLite mode (Option 3) and forget about JSON credentials. They're not really meant to be used directly anyway.
@code-pumpkin commented on GitHub (Jan 19, 2026):
Hi @jwadow,
Thanks for the advice.
I’ll pull the latest changes to my machine and run it under observability for the next few hours.
Here’s what I’m currently seeing:
```
❯ python3 -c 'import socket; print(socket.gethostbyname("prod.us-east-1.auth.desktop.kiro.dev"))'
13.225.196.39
❯ ping prod.us-east-1.auth.desktop.kiro.dev
PING prod.us-east-1.auth.desktop.kiro.dev (13.225.196.58) 56(84) bytes of data.
64 bytes from server-13-225-196-58.yul62.r.cloudfront.net (13.225.196.58): icmp_seq=1 ttl=243 time=36.6 ms
64 bytes from server-13-225-196-58.yul62.r.cloudfront.net (13.225.196.58): icmp_seq=2 ttl=243 time=36.2 ms
^C
```
I'll keep you updated on whether this resolves the issue.
Thank you so much for taking the time to help — I really appreciate it.
@code-pumpkin commented on GitHub (Jan 19, 2026):
It still seems to have the issue:
```
2026-01-19 12:16:33 | INFO | kiro.routes_openai:stream_wrapper:309 - HTTP 200 - POST /v1/chat/completions (streaming) - completed
2026-01-19 12:16:33 | INFO | kiro.routes_openai:chat_completions:172 - Request to /v1/chat/completions (model=claude-opus-4.5, stream=True)
2026-01-19 12:16:33 | INFO | kiro.routes_openai:chat_completions:172 - Request to /v1/chat/completions (model=gpt-4.1-mini, stream=True)
2026-01-19 12:16:34 | ERROR | kiro.routes_openai:chat_completions:231 - Error from Kiro API: 400 - {"message":"Invalid model. Please select a different model to continue.","reason":"INVALID_MODEL_ID"}
2026-01-19 12:16:34 | WARNING | kiro.routes_openai:chat_completions:245 - HTTP 400 - POST /v1/chat/completions - Invalid model. Please select a different model to continue. (reason: INVALID_MODEL_ID)
2026-01-19 12:16:34 | INFO | logging:callHandlers:1737 - 127.0.0.1:48758 - "POST /v1/chat/completions HTTP/1.1" 400
2026-01-19 12:16:36 | INFO | logging:callHandlers:1737 - 127.0.0.1:45920 - "POST /v1/chat/completions HTTP/1.1" 200
2026-01-19 12:16:38 | INFO | kiro.routes_openai:stream_wrapper:309 - HTTP 200 - POST /v1/chat/completions (streaming) - completed
2026-01-19 12:16:38 | INFO | kiro.routes_openai:chat_completions:172 - Request to /v1/chat/completions (model=claude-opus-4.5, stream=True)
2026-01-19 12:16:40 | INFO | logging:callHandlers:1737 - 127.0.0.1:45920 - "POST /v1/chat/completions HTTP/1.1" 200
2026-01-19 12:17:21 | INFO | kiro.routes_openai:stream_wrapper:309 - HTTP 200 - POST /v1/chat/completions (streaming) - completed
2026-01-19 12:17:21 | INFO | kiro.routes_openai:chat_completions:172 - Request to /v1/chat/completions (model=claude-opus-4.5, stream=True)
2026-01-19 12:17:21 | INFO | kiro.auth:_load_credentials_from_sqlite:255 - Credentials loaded from SQLite database: ~/.local/share/kiro-cli/data.sqlite3
2026-01-19 12:17:21 | INFO | kiro.auth:_do_aws_sso_oidc_refresh:583 - Refreshing Kiro token via AWS SSO OIDC...
2026-01-19 12:17:21 | INFO | kiro.routes_openai:chat_completions:172 - Request to /v1/chat/completions (model=gpt-4.1-mini, stream=True)
2026-01-19 12:17:22 | ERROR | kiro.auth:_do_aws_sso_oidc_refresh:612 - AWS SSO OIDC refresh failed: status=400, body={"error":"invalid_request","error_description":"Invalid request","reason":null}
2026-01-19 12:17:22 | ERROR | kiro.auth:_do_aws_sso_oidc_refresh:619 - AWS SSO OIDC error details: error=invalid_request, description=Invalid request
2026-01-19 12:17:22 | WARNING | kiro.auth:_refresh_token_aws_sso_oidc:559 - Token refresh failed with 400, reloading credentials from SQLite and retrying...
2026-01-19 12:17:22 | INFO | kiro.auth:_load_credentials_from_sqlite:255 - Credentials loaded from SQLite database: ~/.local/share/kiro-cli/data.sqlite3
2026-01-19 12:17:22 | INFO | kiro.auth:_do_aws_sso_oidc_refresh:583 - Refreshing Kiro token via AWS SSO OIDC...
```
@Sere-Fu commented on GitHub (Jan 19, 2026):
I'm in a similar situation. I cloned the repo last week and use SQLite as auth. After the initial kiro-cli login, I kill the kiro-cli shell and the gateway works fine. After about 5 minutes, the gateway fails to refresh the auth, so I just added a cron job to run "kiro-cli login" every 5 minutes, and it works fine for now. It would be better if the cron job could be removed.
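For anyone wanting to replicate this stopgap, the cron entry might look like the following (assuming `kiro-cli login` re-authenticates non-interactively once a session exists; the log path is arbitrary, and this is a workaround rather than a fix):

```shell
# crontab -e: refresh Kiro credentials every 5 minutes (stopgap only)
*/5 * * * * kiro-cli login >> /tmp/kiro-refresh.log 2>&1
```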
@jwadow commented on GitHub (Jan 20, 2026):
@code-pumpkin @Sere-Fu
I think I found the issue. We were sending the request in the wrong format.
AWS SSO OIDC CreateToken API expects JSON with camelCase parameters (grantType, clientId, clientSecret, refreshToken), but we were sending form-urlencoded with snake_case (grant_type, client_id, etc.). That's why AWS was returning 400 "invalid_request".
I've fixed the request format according to the official AWS specification. Now sending:
The problem is I don't have access to AWS SSO OIDC credentials to test this myself (I only have social login working, which uses a different endpoint). Could you test it on your side?
Just update the code, restart the gateway, and wait an hour for tokens to expire. If everything's correct, it should automatically refresh tokens without errors.
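Based on the description above, the corrected refresh call would look roughly like this. This is a self-contained sketch using stdlib `urllib` rather than the gateway's own HTTP client; the field names follow the camelCase JSON format described for CreateToken, and all credential values are placeholders:

```python
import json
import urllib.request

def build_create_token_payload(client_id: str, client_secret: str, refresh_token: str) -> dict:
    """CreateToken expects camelCase JSON fields, not snake_case form fields."""
    return {
        "grantType": "refresh_token",
        "clientId": client_id,
        "clientSecret": client_secret,
        "refreshToken": refresh_token,
    }

def refresh_kiro_token(sso_region: str, client_id: str, client_secret: str, refresh_token: str) -> dict:
    """POST the refresh request as a JSON body to the regional OIDC endpoint."""
    body = json.dumps(build_create_token_payload(client_id, client_secret, refresh_token)).encode()
    req = urllib.request.Request(
        f"https://oidc.{sso_region}.amazonaws.com/token",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```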
P.S. Also added support for social login from kiro-cli (kirocli:social:token key in SQLite), but that's unrelated to your issue.
@jwadow commented on GitHub (Jan 20, 2026):
@code-pumpkin
FYI: you mentioned that the JSON credentials file didn't work; that's now fixed (#45). It was a separate Enterprise IDE issue that seems to have been resolved.
@code-pumpkin commented on GitHub (Jan 20, 2026):
Hello there @jwadow,
Yes, that fixed the issue for option 3 (SQLite). Let’s go!!!
I haven’t tested option 4 (the JSON one) yet, but I’ll try that shortly as well.
Thank you so much for helping me out with this problem — I really appreciate it.
Also, slightly off track from the original issue: I’m a CS major, and this got me thinking about how to approach problems like this in general — especially when custom fixes or deeper debugging are required.
How do you usually debug things at this level? For example, how did you figure out that the issue was an incorrect request format when sending data to AWS?
Any pointers on building that kind of problem-solving mindset would be really helpful.
Regards,
Pumpkin
@code-pumpkin commented on GitHub (Jan 20, 2026):
Hi again,
I was reviewing the other PR related to the enterprise OIDC issue.
On my machine, I see two files. From the PR, it looks like the code is pulling data from the file. I'm not sure which file I should use for option 4 when testing this. Could you please confirm which one is correct?
Regards,
Pumpkin
@Sere-Fu commented on GitHub (Jan 20, 2026):
It works fine for more than 1 hour without me refreshing it, looks like a solid fix. Thanks!
@jwadow commented on GitHub (Jan 20, 2026):
@code-pumpkin
Glad to hear SQLite is working now.
For your question about the two JSON files: use
kiro-auth-token.jsonin KIRO_CREDS_FILE. The gateway will automatically load the device registration from the hashed file when it sees clientIdHash in the main file.Regarding your debugging question - honestly, it was trial and error. When I first saw the 400 "invalid_request" error, I assumed the problem was that we weren't saving refreshed tokens back to SQLite. So I added the persistence logic (first commit). But after testing, the issue persisted.
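A rough sketch of that two-file loading behavior (everything beyond the `clientIdHash` field and the `kiro-auth-token.json` name, in particular the hashed file's naming and the merge rule, is an assumption for illustration):

```python
import json
from pathlib import Path

def load_kiro_credentials(creds_file: str) -> dict:
    """Load the main token file; if it carries a clientIdHash, merge in the
    device-registration file assumed to sit next to it as <clientIdHash>.json."""
    main_path = Path(creds_file).expanduser()
    creds = json.loads(main_path.read_text())
    client_id_hash = creds.get("clientIdHash")
    if client_id_hash:
        reg_path = main_path.with_name(f"{client_id_hash}.json")  # assumed naming
        if reg_path.exists():
            # Registration fields fill in anything the main file lacks.
            creds = {**json.loads(reg_path.read_text()), **creds}
    return creds
```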
Then I launched a Linux VM with mitmproxy, tried using kiro-cli, reverse-engineered the HTTP requests, and checked the database. It turned out everything was completely different for me because I'm not an enterprise user, but along the way I did add support for SQLite social logins (Google, GitHub), which was missing.
That's when I went back to basics: I compared our request format against the AWS SSO OIDC CreateToken API documentation. Turned out we were sending form-urlencoded with snake_case parameters (grant_type, client_id), but AWS actually expects JSON with camelCase (grantType, clientId). Changed that, and it worked.
The key lesson: when your first hypothesis doesn't work, don't keep patching it. Step back, read the actual API spec, and verify your assumptions. Most bugs come from misunderstanding the contract, not from complex logic errors.
I've updated the README with a note about the two-file setup. Let me know if Option 4 works for you now. I'm closing the issue but will still be reading.
@code-pumpkin commented on GitHub (Jan 20, 2026):
Thanks for the advice @jwadow