[GH-ISSUE #16] empty file is written to s3 #11

Closed
opened 2026-03-04 01:41:08 +03:00 by kerem · 11 comments

Originally created by @timurb on GitHub (Feb 24, 2014).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/16

Not sure this is related to #11 but I've decided to create a separate issue.

While writing to s3fs we quite often see that a file of 0 bytes is actually written.
The file size was 75,661,483,206 bytes the last time we saw this, and about the same size in the previous cases. I think we only see this for files that large, while smaller files (around 20 GB) are written fine.

We use the following command to run s3fs:

s3fs -d foobar-backups /mnt/backups -o allow_other,retries=10,connect_timeout=30,readwrite_timeout=30,use_cache=/mnt/cache -o passwd_file=/etc/foobar-backups

The s3fs version is 1.74.

The logs for the case are the following:

Feb 23 06:30:07 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122304][hit count=37]
Feb 23 06:30:07 bacula s3fs: delete stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:07 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198
Feb 23 06:30:07 bacula s3fs: HTTP response code 200
Feb 23 06:30:07 bacula s3fs: add stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:07 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122607][hit count=0]
Feb 23 06:30:07 bacula s3fs: file locked(/bacula_client_pool_0198 - /mnt/cache/.foobar-backups.stat/bacula_client_pool_0198)
Feb 23 06:30:07 bacula s3fs: file unlocked(/bacula_client_pool_0198)
Feb 23 06:30:09 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122607][hit count=1]
Feb 23 06:30:09 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122609][hit count=2]
Feb 23 06:30:09 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198?uploads
Feb 23 06:30:09 bacula s3fs: HTTP response code 200
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198?uploadId=1NY2wdJHkfH8NDzefATUGqr.MAJ8AhTfBp4UvuQML527Sgva96MMD6qsF7TEg1Tq3bloudT2AmsSooKaa.qjjZ2qOHrJ.7aS0QEOOn59zs5191lEc.jcu.4Iz2rUT7Jl
Feb 23 06:30:10 bacula s3fs: HTTP response code 200
Feb 23 06:30:10 bacula s3fs: delete stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198
Feb 23 06:30:10 bacula s3fs: HTTP response code 200
Feb 23 06:30:10 bacula s3fs: add stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:10 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122610][hit count=0]
Feb 23 06:30:10 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122610][hit count=1]
Feb 23 06:30:10 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122610][hit count=2]
Feb 23 06:30:10 bacula s3fs: delete stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:10 bacula s3fs: file locked(/bacula_client_pool_0198 - /mnt/cache/.foobar-backups.stat/bacula_client_pool_0198)
Feb 23 06:30:10 bacula s3fs: file unlocked(/bacula_client_pool_0198)
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198
Feb 23 06:30:10 bacula s3fs: HTTP response code 200
Feb 23 06:30:10 bacula s3fs: add stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:10 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122610][hit count=0]
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198
Feb 23 06:30:10 bacula s3fs: HTTP response code 204
Feb 23 06:30:10 bacula s3fs: delete stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198
Feb 23 06:30:10 bacula s3fs: HTTP response code 404
Feb 23 06:30:10 bacula s3fs: HTTP response code 404 was returned, returning ENOENT
Feb 23 06:30:10 bacula s3fs: Body Text: 
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198/
Feb 23 06:30:10 bacula s3fs: HTTP response code 404
Feb 23 06:30:10 bacula s3fs: HTTP response code 404 was returned, returning ENOENT
Feb 23 06:30:10 bacula s3fs: Body Text: 
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198_%24folder%24
Feb 23 06:30:10 bacula s3fs: HTTP response code 404
Feb 23 06:30:10 bacula s3fs: HTTP response code 404 was returned, returning ENOENT
Feb 23 06:30:10 bacula s3fs: Body Text: 
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com?delimiter=/&prefix=bacula_client_pool_0198/&max-keys=1000
Feb 23 06:30:10 bacula s3fs: HTTP response code 200
Feb 23 06:30:10 bacula s3fs: contents_xp->nodesetval is empty.
Feb 23 06:30:10 bacula s3fs: contents_xp->nodesetval is empty.
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198
Feb 23 06:30:10 bacula s3fs: HTTP response code 404
Feb 23 06:30:10 bacula s3fs: HTTP response code 404 was returned, returning ENOENT
Feb 23 06:30:10 bacula s3fs: Body Text: 
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198/
Feb 23 06:30:10 bacula s3fs: HTTP response code 404
Feb 23 06:30:10 bacula s3fs: HTTP response code 404 was returned, returning ENOENT
Feb 23 06:30:10 bacula s3fs: Body Text: 
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198_%24folder%24
Feb 23 06:30:10 bacula s3fs: HTTP response code 404
Feb 23 06:30:10 bacula s3fs: HTTP response code 404 was returned, returning ENOENT
Feb 23 06:30:10 bacula s3fs: Body Text: 
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com?delimiter=/&prefix=bacula_client_pool_0198/&max-keys=1000
Feb 23 06:30:10 bacula s3fs: HTTP response code 200
Feb 23 06:30:10 bacula s3fs: contents_xp->nodesetval is empty.
Feb 23 06:30:10 bacula s3fs: contents_xp->nodesetval is empty.
Feb 23 06:30:10 bacula s3fs: create zero byte file object.
Feb 23 06:30:10 bacula s3fs: uploading... [path=/bacula_client_pool_0198][fd=-1][size=0]
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198
Feb 23 06:30:10 bacula s3fs: HTTP response code 200
Feb 23 06:30:10 bacula s3fs: delete stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:10 bacula s3fs: file locked(/bacula_client_pool_0198 - /mnt/cache/.foobar-backups.stat/bacula_client_pool_0198)
Feb 23 06:30:10 bacula s3fs: file unlocked(/bacula_client_pool_0198)
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198
Feb 23 06:30:10 bacula s3fs: HTTP response code 200
Feb 23 06:30:10 bacula s3fs: add stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:10 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122610][hit count=0]
Feb 23 06:30:10 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122610][hit count=1]
Feb 23 06:30:10 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122610][hit count=2]
Feb 23 06:30:10 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122610][hit count=3]
Feb 23 06:30:10 bacula s3fs: copying... [path=/bacula_client_pool_0198]
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198
Feb 23 06:30:10 bacula s3fs: HTTP response code 200
Feb 23 06:30:10 bacula s3fs: delete stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198
Feb 23 06:30:10 bacula s3fs: HTTP response code 200
Feb 23 06:30:10 bacula s3fs: add stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:10 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122610][hit count=0]
Feb 23 06:44:01 bacula /USR/SBIN/CRON[30386]: (root) CMD (/usr/local/bin/cleanup_s3fs_cache)
Feb 23 06:44:02 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122610][hit count=1]
Feb 23 07:44:01 bacula /USR/SBIN/CRON[11138]: (root) CMD (/usr/local/bin/cleanup_s3fs_cache)
Feb 23 07:44:01 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393123442][hit count=2]
Feb 23 08:44:01 bacula /USR/SBIN/CRON[24259]: (root) CMD (/usr/local/bin/cleanup_s3fs_cache)
Feb 23 08:44:01 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393127041][hit count=3]
Feb 23 09:44:02 bacula /USR/SBIN/CRON[3633]: (root) CMD (/usr/local/bin/cleanup_s3fs_cache)
Feb 23 09:44:02 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393130641][hit count=4]
Feb 23 10:44:01 bacula /USR/SBIN/CRON[14815]: (root) CMD (/usr/local/bin/cleanup_s3fs_cache)
Feb 23 10:44:02 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393134242][hit count=5]
Feb 23 11:44:01 bacula /USR/SBIN/CRON[25959]: (root) CMD (/usr/local/bin/cleanup_s3fs_cache)
Feb 23 11:44:02 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393137842][hit count=6]
Feb 23 12:44:01 bacula /USR/SBIN/CRON[4655]: (root) CMD (/usr/local/bin/cleanup_s3fs_cache)
Feb 23 12:44:01 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393141442][hit count=7]
Feb 23 12:48:46 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393145041][hit count=8]
Feb 23 12:48:46 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393145326][hit count=9]
Feb 23 12:48:46 bacula s3fs: delete stat cache entry[path=/bacula_client_pool_0198]
Feb 23 12:48:46 bacula s3fs: file locked(/bacula_client_pool_0198 - /mnt/cache/.foobar-backups.stat/bacula_client_pool_0198)
Feb 23 12:48:46 bacula s3fs: file unlocked(/bacula_client_pool_0198)

Do you have any idea what could cause this behaviour and how we could fix it?

Thanks.

kerem closed this issue 2026-03-04 01:41:08 +03:00

@ggtakec commented on GitHub (Mar 17, 2014):

Hi,

I looked at your log and have some questions about your setup.
I would like to know what the /usr/local/bin/cleanup_s3fs_cache script does; it appears to read the file (xxx_0198).
From the log it looks like s3fs put a zero-byte file, and I could not find anything obviously wrong.
Please let me know what you are doing and what result you expected.

Sorry for the late reply, and thanks in advance for your help.


@timurb commented on GitHub (Mar 17, 2014):

I think the script has nothing to do with this issue; the cause is very large files, around 75 GB.
After I started splitting that big file into two smaller files (50 GB + 25 GB), I no longer see the issue.
If s3fs can't handle big files, an error message would probably be enough.
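The splitting workaround can be scripted; a minimal sketch (file names and chunk size here are illustrative, not from the original setup):

```shell
# Split a large backup into chunks below the size that triggers the bug,
# then verify the pieces reassemble to the original byte-for-byte.
dd if=/dev/zero of=big.img bs=1M count=8 2>/dev/null   # stand-in for the real backup
split -b 3M -d big.img big.img.part-                   # produces big.img.part-00, -01, -02
cat big.img.part-?? > restored.img
cmp -s big.img restored.img && echo "chunks reassemble cleanly"
```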

Just for reference, here is the script I'm using to clean up the cache now. It checks whether a cached file is open in any process and deletes it only if no process is using it.

# delete cached files that no process currently holds open
find /path/to/s3fs.cache -type f | while IFS= read -r FILE; do
  if ! lsof "${FILE}" > /dev/null; then
    rm -f "${FILE}"
  fi
done

This is not the same case as #10; in that case I simply erased the file with no additional checks.


@ggtakec commented on GitHub (Mar 18, 2014):

Here is what I think happens with your script.
First, s3fs creates a zero-byte object before uploading the full-size file.
During the upload, s3fs does not keep a file descriptor open on the temporary file and the stat file; it opens and closes these temporary files repeatedly while uploading.
So I suspect your script removes these files while s3fs still needs to read/write them.

If that is the cause of this problem, we can verify it:
please stop running your script and let us know whether the same error still occurs.
If you can, please try it.

Thanks in advance for your help.


@timurb commented on GitHub (Mar 18, 2014):

OK, I'll check that over the weekend. Thanks for the quick reply!


@timurb commented on GitHub (Mar 23, 2014):

I've just disabled the cleanup script, and I still see the issue.
Here is the log; there are some extra lines caused by my browsing the s3fs folder.
I strongly suspect the reason is that I uploaded a 75 GB file, while I've read somewhere that s3fs has a 64 GB limit.

Mar 23 15:55:35 bacula s3fs: stat cache hit [path=/bacula_client_pool_0275][time=1395575628][hit count=164]
Mar 23 15:55:35 bacula s3fs: delete stat cache entry[path=/bacula_client_pool_0275]
Mar 23 15:55:35 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0275
Mar 23 15:55:35 bacula s3fs: HTTP response code 200
Mar 23 15:55:35 bacula s3fs: add stat cache entry[path=/bacula_client_pool_0275]
Mar 23 15:55:35 bacula s3fs: stat cache hit [path=/bacula_client_pool_0275][time=1395575735][hit count=0]
Mar 23 15:55:35 bacula s3fs: file locked(/bacula_client_pool_0275 - /mnt/cache/.foobar-backups.stat/bacula_client_pool_0275)
Mar 23 15:55:35 bacula s3fs: file unlocked(/bacula_client_pool_0275)
Mar 23 15:55:41 bacula s3fs: stat cache hit [path=/bacula_client_pool_0275][time=1395575735][hit count=1]
Mar 23 15:55:41 bacula s3fs: stat cache hit [path=/bacula_client_pool_0275][time=1395575741][hit count=2]
Mar 23 15:55:41 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0275?uploads
Mar 23 15:55:41 bacula s3fs: HTTP response code 200
Mar 23 15:55:48 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0275?uploadId=y1oljDLjAFhNR.S4CBaAABeTFSkba6vw3YYrjT6H6GXUnJAi1NOnAbK3Tolmlr39Qrb_FNxpO2_ApXrNgVz2ynORbQew6glGUvqe_COD9w3MtfRH0.YrE.JVO9PV0mPx
Mar 23 15:55:48 bacula s3fs: HTTP response code 200
Mar 23 15:55:48 bacula s3fs: delete stat cache entry[path=/bacula_client_pool_0275]
Mar 23 15:55:48 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0275
Mar 23 15:55:48 bacula s3fs: HTTP response code 200
Mar 23 15:55:48 bacula s3fs: add stat cache entry[path=/bacula_client_pool_0275]
Mar 23 15:55:48 bacula s3fs: stat cache hit [path=/bacula_client_pool_0275][time=1395575748][hit count=0]
Mar 23 15:55:48 bacula s3fs: stat cache hit [path=/bacula_client_pool_0275][time=1395575748][hit count=1]
Mar 23 15:55:48 bacula s3fs: stat cache hit [path=/bacula_client_pool_0275][time=1395575748][hit count=2]
Mar 23 15:55:48 bacula s3fs: delete stat cache entry[path=/bacula_client_pool_0275]
Mar 23 15:55:48 bacula s3fs: file locked(/bacula_client_pool_0275 - /mnt/cache/.foobar-backups.stat/bacula_client_pool_0275)
Mar 23 15:55:48 bacula s3fs: file unlocked(/bacula_client_pool_0275)
Mar 23 15:55:48 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0275
Mar 23 15:55:48 bacula s3fs: HTTP response code 200
Mar 23 15:55:48 bacula s3fs: add stat cache entry[path=/bacula_client_pool_0275]
Mar 23 15:55:48 bacula s3fs: stat cache hit [path=/bacula_client_pool_0275][time=1395575748][hit count=0]
Mar 23 15:55:48 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0275
Mar 23 15:55:48 bacula s3fs: HTTP response code 204
Mar 23 15:55:48 bacula s3fs: delete stat cache entry[path=/bacula_client_pool_0275]
Mar 23 15:55:48 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0275
Mar 23 15:55:48 bacula s3fs: HTTP response code 404
Mar 23 15:55:48 bacula s3fs: HTTP response code 404 was returned, returning ENOENT
Mar 23 15:55:48 bacula s3fs: Body Text:
Mar 23 15:55:48 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0275/
Mar 23 15:55:48 bacula s3fs: HTTP response code 404
Mar 23 15:55:48 bacula s3fs: HTTP response code 404 was returned, returning ENOENT
Mar 23 15:55:48 bacula s3fs: Body Text:
Mar 23 15:55:48 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0275_%24folder%24
Mar 23 15:55:48 bacula s3fs: HTTP response code 404
Mar 23 15:55:48 bacula s3fs: HTTP response code 404 was returned, returning ENOENT
Mar 23 15:55:48 bacula s3fs: Body Text:
Mar 23 15:55:48 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com?delimiter=/&prefix=bacula_client_pool_0275/&max-keys=1000
Mar 23 15:55:48 bacula s3fs: HTTP response code 200
Mar 23 15:55:48 bacula s3fs: contents_xp->nodesetval is empty.
Mar 23 15:55:48 bacula s3fs: contents_xp->nodesetval is empty.
Mar 23 15:55:48 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0275
Mar 23 15:55:48 bacula s3fs: HTTP response code 404
Mar 23 15:55:48 bacula s3fs: HTTP response code 404 was returned, returning ENOENT
Mar 23 15:55:48 bacula s3fs: Body Text:
Mar 23 15:55:48 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0275/
Mar 23 15:55:48 bacula s3fs: HTTP response code 404
Mar 23 15:55:48 bacula s3fs: HTTP response code 404 was returned, returning ENOENT
Mar 23 15:55:48 bacula s3fs: Body Text:
Mar 23 15:55:48 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0275_%24folder%24
Mar 23 15:55:48 bacula s3fs: HTTP response code 404
Mar 23 15:55:48 bacula s3fs: HTTP response code 404 was returned, returning ENOENT
Mar 23 15:55:48 bacula s3fs: Body Text:
Mar 23 15:55:48 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com?delimiter=/&prefix=bacula_client_pool_0275/&max-keys=1000
Mar 23 15:55:48 bacula s3fs: HTTP response code 200
Mar 23 15:55:48 bacula s3fs: contents_xp->nodesetval is empty.
Mar 23 15:55:48 bacula s3fs: contents_xp->nodesetval is empty.
Mar 23 15:55:48 bacula s3fs: create zero byte file object.
Mar 23 15:55:48 bacula s3fs: uploading... [path=/bacula_client_pool_0275][fd=-1][size=0]
Mar 23 15:55:48 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0275
Mar 23 15:55:48 bacula s3fs: HTTP response code 200
Mar 23 15:55:48 bacula s3fs: delete stat cache entry[path=/bacula_client_pool_0275]
Mar 23 15:55:48 bacula s3fs: file locked(/bacula_client_pool_0275 - /mnt/cache/.foobar-backups.stat/bacula_client_pool_0275)
Mar 23 15:55:48 bacula s3fs: file unlocked(/bacula_client_pool_0275)
Mar 23 15:55:48 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0275
Mar 23 15:55:48 bacula s3fs: HTTP response code 200
Mar 23 15:55:48 bacula s3fs: add stat cache entry[path=/bacula_client_pool_0275]
Mar 23 15:55:48 bacula s3fs: stat cache hit [path=/bacula_client_pool_0275][time=1395575748][hit count=0]
Mar 23 15:55:48 bacula s3fs: stat cache hit [path=/bacula_client_pool_0275][time=1395575748][hit count=1]
Mar 23 15:55:48 bacula s3fs: stat cache hit [path=/bacula_client_pool_0275][time=1395575748][hit count=2]
Mar 23 15:55:48 bacula s3fs: stat cache hit [path=/bacula_client_pool_0275][time=1395575748][hit count=3]
Mar 23 15:55:48 bacula s3fs: copying... [path=/bacula_client_pool_0275]
Mar 23 15:55:48 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0275
Mar 23 15:55:49 bacula s3fs: HTTP response code 200
Mar 23 15:55:49 bacula s3fs: delete stat cache entry[path=/bacula_client_pool_0275]
Mar 23 15:55:49 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0275
Mar 23 15:55:49 bacula s3fs: HTTP response code 200
Mar 23 15:55:49 bacula s3fs: add stat cache entry[path=/bacula_client_pool_0275]
Mar 23 15:55:49 bacula s3fs: stat cache hit [path=/bacula_client_pool_0275][time=1395575749][hit count=0]
Mar 23 20:38:06 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com?delimiter=/&prefix=&max-keys=1000
Mar 23 20:38:06 bacula s3fs: HTTP response code 200
Mar 23 20:38:06 bacula s3fs: contents_xp->nodesetval is empty.
Mar 23 20:38:06 bacula s3fs: stat cache hit [path=/bacula_client_pool_0273][time=1395575628][hit count=55]
Mar 23 20:38:06 bacula s3fs: stat cache hit [path=/bacula_client_pool_0274][time=1395575628][hit count=55]
Mar 23 20:38:06 bacula s3fs: stat cache hit [path=/bacula_client_pool_0275][time=1395575749][hit count=1]
Mar 23 22:20:19 bacula s3fs: stat cache hit [path=/bacula_client_pool_0275][time=1395592689][hit count=13]
Mar 23 22:20:19 bacula s3fs: stat cache hit [path=/bacula_client_pool_0275][time=1395598819][hit count=14]
Mar 23 22:20:19 bacula s3fs: delete stat cache entry[path=/bacula_client_pool_0275]
Mar 23 22:20:19 bacula s3fs: file locked(/bacula_client_pool_0275 - /mnt/cache/.foobar-backups.stat/bacula_client_pool_0275)
Mar 23 22:20:19 bacula s3fs: file unlocked(/bacula_client_pool_0275)

For reference, here are the sizes of the files (the first is on the s3fs mount, the second is in the cache).

root@monitoring:~# ls -l /mnt/cache/foobar-backups/bacula_client_pool_0275 /mnt/backups/bacula_client_pool_0275
-rw-r----- 1 bacula bacula           0 Mar 23 15:55 /mnt/backups/bacula_client_pool_0275
-rw------- 1 root   root   75707217116 Mar 23 22:20 /mnt/cache/foobar-backups/bacula_client_pool_0275

@ggtakec commented on GitHub (Mar 29, 2014):

Yes, s3fs limits the size of uploaded objects to 64GB.
Please see https://github.com/s3fs-fuse/s3fs-fuse/blob/master/src/fdcache.cpp#L890

If you can change the code for testing, you can upload a file over 64GB.
If the symbols MAX_OBJECT_SIZE and FDPAGE_SIZE in fdcache.cpp, and MULTIPART_SIZE in curl.cpp, are changed for testing, you will be able to upload past the default limit (64GB).
https://github.com/s3fs-fuse/s3fs-fuse/blob/master/src/fdcache.cpp#L54
https://github.com/s3fs-fuse/s3fs-fuse/blob/master/src/curl.cpp#L138
But if that does not work, you probably need to change more.
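
The zero-byte result in this issue is consistent with the ~75GB file tripping that cap. A minimal shell sketch of the size check, assuming the 64GB figure from this comment (the variable names are only labels here, not the actual fdcache.cpp code):

```shell
# 64GB cap expressed in bytes, matching the limit described above:
max_object_size=$((64 * 1024 * 1024 * 1024))   # 68719476736

# The ~75GB file reported at the top of this issue exceeds the cap;
# per this thread, the observed symptom is a zero-byte object on S3
# rather than a loud failure at write time:
reported_size=75661483206
if [ "$reported_size" -gt "$max_object_size" ]; then
  echo "file exceeds 64GB cap"   # prints "file exceeds 64GB cap"
fi
```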


@timurb commented on GitHub (Mar 29, 2014):

Having a limit on the maximum file size is OK, but you should learn about it at the moment of writing, not some months later when you need the file and it is gone.

Could you please fix s3fs so that it produces some kind of error when you try to write more than the maximum allowed size?

Thanks in advance!


@ggtakec commented on GitHub (Mar 29, 2014):

This limit has carried over from an old version.
I think this upper limit should be expanded, and maybe I can change it; please wait while I make and test the change.

But if you need it sooner, you can change the symbols yourself, for example:
MAX_OBJECT_SIZE 137438953470LL
FDPAGE_SIZE 100 * 1024 * 1024
MULTIPART_SIZE 20971520
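
For context, those suggested values can be converted to human-readable units with a quick shell sketch (the numbers are taken verbatim from the comment above; the names are only labels, not real s3fs configuration):

```shell
# MAX_OBJECT_SIZE 137438953470 is just under 128GB; integer
# division floors it, so this prints 127:
echo "MAX_OBJECT_SIZE: $((137438953470 / 1024 / 1024 / 1024)) GB"
# FDPAGE_SIZE 100 * 1024 * 1024 is a 100MB page:
echo "FDPAGE_SIZE:     $((100 * 1024 * 1024 / 1024 / 1024)) MB"
# MULTIPART_SIZE 20971520 is a 20MB part size:
echo "MULTIPART_SIZE:  $((20971520 / 1024 / 1024)) MB"
```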


@timurb commented on GitHub (Mar 29, 2014):

The current limit is OK for me.

My point is: the limit should be very hard or impossible to reach (probably some terabytes at least), or we should receive an explicit error message when writing to a file past the limit.


@ggtakec commented on GitHub (Mar 30, 2014):

s3fs returns the error code ENOTSUP in most cases when a file is over 64GB.

I created a new branch, "upperlimit#16", which adds a new "multipart_size" option.
This option's value is the size in MB of one part for multipart uploading (default 10MB).

Please check it and try it.
Thanks,
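
A hypothetical invocation using the new option, adapted from the mount command at the top of this issue (the value 20 is only an example, and whether this exact syntax applies depends on the upperlimit#16 branch):

```shell
# Mount with 20MB multipart parts instead of the 10MB default;
# larger parts raise the effective object-size ceiling.
s3fs foobar-backups /mnt/backups \
  -o allow_other,use_cache=/mnt/cache,multipart_size=20 \
  -o passwd_file=/etc/foobar-backups
```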


@ggtakec commented on GitHub (Apr 4, 2014):

I merged the code to the master branch.
Please try the master branch, and if you find bugs, please post a new issue.

Regards,
