mirror of
https://github.com/fsouza/fake-gcs-server.git
synced 2026-04-25 21:55:56 +03:00
[GH-ISSUE #1950] Timeout on download_as_bytes #232
Originally created by @meganvw on GitHub (Apr 7, 2025).
Original GitHub issue: https://github.com/fsouza/fake-gcs-server/issues/1950
I'm trying to download an object as bytes via Python, but I continuously hit `Timeout of 120.0s exceeded`.
When I curl the same object from the command line, it returns instantly.
Other calls from Python to the fake-gcs-server running in Docker complete successfully. Here is the code:
I do get a warning, which I suspect may be the root cause, but I'm not sure how to work around it.
Is this a known issue? Thank you!
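The original snippet was not captured in this mirror. Based on the description, the failing pattern would look roughly like the sketch below; the endpoint, bucket, and object names are assumptions, and the library call is kept inside the function so the sketch loads without `google-cloud-storage` installed:

```python
import os

# Assumed emulator endpoint; the reporter's actual host/port were not shown.
os.environ["STORAGE_EMULATOR_HOST"] = "http://localhost:4443"

def download_via_get_blob(bucket_name: str, blob_path: str) -> bytes:
    """Pattern that (per the report) reproduces the hang: the blob returned
    by bucket.get_blob() carries a download URL pointing at the default
    host 0.0.0.0 rather than STORAGE_EMULATOR_HOST, so download_as_bytes()
    stalls until 'Timeout of 120.0s exceeded'."""
    from google.cloud import storage  # requires google-cloud-storage

    client = storage.Client.create_anonymous_client()
    bucket = client.bucket(bucket_name)
    blob = bucket.get_blob(blob_path)
    # blob._get_download_url(client) would reveal the wrong 0.0.0.0 host here.
    return blob.download_as_bytes()
```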
@meganvw commented on GitHub (Apr 8, 2025):
Figured it out.
I have overridden the GCS client endpoint by setting STORAGE_EMULATOR_HOST, but the call `bucket.get_blob("/blob_path")` creates a blob with the default hostname 0.0.0.0 instead of the custom host, which you can see with `blob._get_download_url(client)`. Switching to `Blob.from_uri(blob_uri, client)` correctly sets the URI with the custom hostname, and the download completes.
Another small gotcha I hit: if you use `Blob.from_uri()` but call `blob.reload()` first (e.g. to populate server-side metadata on the blob object), it defaults back to the original hostname and causes this same error when calling `blob.download_as_bytes()`.
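A minimal sketch of the working pattern described above, assuming a local fake-gcs-server on port 4443 (the endpoint and names are placeholders, not from the original report). The `gs_uri` helper just builds the `gs://` URI that `Blob.from_uri()` expects; the client call is kept inside the function so the sketch loads without `google-cloud-storage` installed:

```python
import os

# Assumed emulator endpoint; substitute your own host/port.
os.environ["STORAGE_EMULATOR_HOST"] = "http://localhost:4443"

def gs_uri(bucket: str, blob_path: str) -> str:
    # Build the gs:// URI that Blob.from_uri() parses; strip any leading
    # slash so the object path is not mangled.
    return f"gs://{bucket}/{blob_path.lstrip('/')}"

def download_via_from_uri(bucket: str, blob_path: str) -> bytes:
    """Reporter's fix: a blob built with Blob.from_uri() resolves its
    download URL against STORAGE_EMULATOR_HOST, so the download completes."""
    from google.cloud.storage import Blob, Client  # requires google-cloud-storage

    client = Client.create_anonymous_client()
    blob = Blob.from_uri(gs_uri(bucket, blob_path), client=client)
    # Caution (per the follow-up above): calling blob.reload() here resets
    # the hostname and reintroduces the timeout.
    return blob.download_as_bytes()

print(gs_uri("my-bucket", "/blob_path"))  # → gs://my-bucket/blob_path
```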