[GH-ISSUE #346] Resumable upload fails with 400 using nodejs client. #68

Open
opened 2026-03-03 12:07:59 +03:00 by kerem · 7 comments
Owner

Originally created by @jeantil on GitHub (Oct 19, 2020).
Original GitHub issue: https://github.com/fsouza/fake-gcs-server/issues/346

While working on #345 I was unable to make the default file.save(...) call work and was forced to add { resumable: false } to make it pass.

The error was:

Error: Upload failed
    at Upload.<anonymous> (/home/jean/dev/startups/yupwego/src/fake-gcs-server/examples/node/node_modules/gcs-resumable-upload/build/src/index.js:191:30)
    at Upload.emit (events.js:327:22)
    at Upload.onResponse (/home/jean/dev/startups/yupwego/src/fake-gcs-server/examples/node/node_modules/gcs-resumable-upload/build/src/index.js:388:14)
    at Upload.makeRequestStream (/home/jean/dev/startups/yupwego/src/fake-gcs-server/examples/node/node_modules/gcs-resumable-upload/build/src/index.js:337:14)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
    at async Upload.startUploading (/home/jean/dev/startups/yupwego/src/fake-gcs-server/examples/node/node_modules/gcs-resumable-upload/build/src/index.js:219:13)

Running the script in a debugger yields an HTTP 400 error on the following request

PUT https://[::]:4443/upload/resumable/c2bfc362c82655ce7c2c5074bd300d11
Content-Range: 'bytes 0-*/*', 
Authorization: 'Bearer ya29.c.KpMB4QfVy54ujAsnhGPSPjDDAGp…LPYC_aCWrzh57NuDr-91aMHax7j4d5nKTGdQFKC-T', User-Agent: 'google-api-nodejs-client/6.0.6', 
x-goog-api-client: 'gl-node/12.18.3 auth/6.0.6',
Accept: 'application/json'

results in

400 Bad Request
'invalid Content-Range: bytes 0-*/*\n'
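For context: the resumable upload protocol allows `*` in place of the end byte and the total size while the stream length is not yet known, which is exactly what the Node client sends here; the fake server apparently rejected anything non-numeric. A lenient validity check covering those forms might look like the following — an illustrative sketch, not fake-gcs-server's actual parser:

```go
package main

import (
	"fmt"
	"regexp"
)

// contentRangePattern accepts the forms the resumable protocol uses:
//   "bytes 0-999/1000"  - known range and total
//   "bytes 0-*/*"       - unknown end and total (streaming upload)
//   "bytes */1000"      - status query with known total
var contentRangePattern = regexp.MustCompile(`^bytes (\*|\d+-(\d+|\*))/(\d+|\*)$`)

func validContentRange(v string) bool {
	return contentRangePattern.MatchString(v)
}

func main() {
	for _, v := range []string{"bytes 0-*/*", "bytes 0-2/3", "bytes */3", "bytes 0-*"} {
		fmt.Printf("%q -> %v\n", v, validContentRange(v))
	}
}
```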

@gnarea commented on GitHub (Jan 25, 2021):

I ran into this issue, but instead of getting a 400, the file.save(...) just hung. I fixed it by passing resumable: false.


@gnarea commented on GitHub (Jul 9, 2021):

@fsouza, #497 didn't fix this issue; it just changed the failure mode. Consider this:

test('Bug', async () => {
  const bucketName = 'foo';
  const fileContents = Buffer.from('bar');

  const client = new Storage({
    apiEndpoint: 'http://127.0.0.1:8080',
    projectId: 'the id',
  });

  await client.createBucket(bucketName);

  const file = client.bucket(bucketName).file('key');
  await file.save(fileContents);
});

file.save() now fails with a more cryptic error:

FetchError: request to http://[::]:8080/upload/resumable/b5236acc2a828b98e362e7b70ee2cdf6 failed, reason: connect ECONNREFUSED :::8080

Server logs:

gcs_1    | time="2021-07-09T15:09:22Z" level=info msg="couldn't load any objects or buckets from \"/data\", starting empty"
gcs_1    | time="2021-07-09T15:09:22Z" level=info msg="server started at http://[::]:8080"
gcs_1    | time="2021-07-09T15:09:28Z" level=info msg="172.24.0.1 - - [09/Jul/2021:15:09:28 +0000] \"POST /storage/v1/b?project=the%20id HTTP/1.1\" 200 110"

Adding resumable: false fixes the issue.


@fsouza commented on GitHub (Jul 9, 2021):

Hm I wonder where it's trying to connect to. I'll investigate later, thanks for sharing a reproducer.


@stoffeastrom commented on GitHub (Dec 14, 2021):

FYI, I just got hit by this as well. Applying:

diff --git a/fakestorage/upload.go b/fakestorage/upload.go
index ab50369..6c99254 100644
--- a/fakestorage/upload.go
+++ b/fakestorage/upload.go
@@ -350,9 +350,9 @@ func (s *Server) resumableUpload(bucketName string, r *http.Request) jsonRespons
 	}
 	s.uploads.Store(uploadID, obj)
 	header := make(http.Header)
-	header.Set("Location", s.URL()+"/upload/resumable/"+uploadID)
+	header.Set("Location", s.PublicURL()+"/upload/resumable/"+uploadID)
 	if r.Header.Get("X-Goog-Upload-Command") == "start" {
-		header.Set("X-Goog-Upload-URL", s.URL()+"/upload/resumable/"+uploadID)
+		header.Set("X-Goog-Upload-URL", s.PublicURL()+"/upload/resumable/"+uploadID)
 		header.Set("X-Goog-Upload-Status", "active")
 	}
 	return jsonResponse{

i.e. changing `s.URL()` to `s.PublicURL()` ensures the correct URL is returned. However, after this I get an `Error: Retry limit exceeded`, which I haven't investigated yet.


@stoffeastrom commented on GitHub (Dec 14, 2021):

Just found the ExternalURL option, so no need for the above change :D

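For anyone landing here: the same effect as the diff above can be had without patching, by passing the external URL flag at startup so the resumable-upload Location headers advertise the address clients actually use. A sketch (host and port are placeholders for your setup):

```shell
# Run fake-gcs-server over plain HTTP and advertise the client-facing URL.
docker run -d -p 4443:4443 fsouza/fake-gcs-server \
  -scheme http \
  -external-url http://localhost:4443
```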

@sergseven commented on GitHub (Jan 4, 2022):

Setting ExternalURL is the key to getting fake-gcs-server working with Testcontainers!

Just in case someone is looking for a solution for Testcontainers & Spring & JUnit 5:

  static final int fakeGcsMappedPort = SocketUtils.findAvailableTcpPort();

  @Container
  static final GenericContainer fakeGcs = new FixedHostPortGenericContainer<>("fsouza/fake-gcs-server")
      .withExposedPorts(4443)
      .withFixedExposedPort(fakeGcsMappedPort, 4443)
      .withCreateContainerCmdModifier(cmd -> cmd.withEntrypoint(
          "/bin/fake-gcs-server",
          "-scheme", "http",
          "-external-url", "http://0.0.0.0:" + fakeGcsMappedPort));

  @DynamicPropertySource
  static void gcs(DynamicPropertyRegistry registry) {
    registry.add("property.for.gcs.storage.host", () -> "http://0.0.0.0:" + fakeGcs.getFirstMappedPort());
  }

The difficulty with Testcontainers is that the random mapped (external) port only becomes available after the generic container is spun up and running, so we can't know it beforehand. The trick is to pick a random free port before the container is started (hence FixedHostPortGenericContainer).


@sergseven commented on GitHub (Jan 25, 2022):

As a follow-up, this doesn't work in CI environments where Docker containers run on a remote VM: the container IP differs from localhost (0.0.0.0) and is only known after the container has eventually started.

A workaround for this case is to update the external-url of an already-started fake-gcs-server container, as proposed in #659.
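Assuming an endpoint of the shape proposed in #659 is available, updating the external URL after the container has started could look like the following sketch (the host and port are placeholders for the values Testcontainers reports once the container is up):

```shell
# Once the container's mapped host/port are known, point the server's
# externalUrl at them so resumable-upload Location headers resolve.
curl -X PUT -H "Content-Type: application/json" \
  -d '{"externalUrl": "http://10.0.0.5:32789"}' \
  "http://10.0.0.5:32789/_internal/config"
```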

@fsouza, it would be nice if you could look into #659.
