[GH-ISSUE #1991] Deadlock with -o streamupload #1008

Closed
opened 2026-03-04 01:50:35 +03:00 by kerem · 11 comments

Originally created by @gaul on GitHub (Jul 20, 2022).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1991

I notice that s3fs deadlocks when copying a ~300 MB file to AWS S3 when using the new `-o streamupload`. I will try to collect some ThreadSanitizer debugging info. References #1802.

kerem closed this issue 2026-03-04 01:50:35 +03:00

@ggtakec commented on GitHub (Jul 20, 2022):

I still can't reproduce this deadlock.
Could you give me more information to help reproduce it?
(I think it won't take long to fix once it can be reproduced.)


@gaul commented on GitHub (Jul 20, 2022):

`test/run_tests_using_sanitizers.sh` shows some problems when run with `ALL_TESTS=1`.


@gaul commented on GitHub (Jul 20, 2022):

I'm not sure I understand this race:

```
WARNING: ThreadSanitizer: data race (pid=1001103)
  Write of size 8 at 0x7b0800047000 by thread T50 (mutexes: write M0):
    #0 free <null> (s3fs+0x4f1c17) (BuildId: 9d87e7c5947222380e818fada77a1921171cc970)
    #1 OPENSSL_sk_free <null> (libcrypto.so.3+0x1fe4f9) (BuildId: 1cc34c0eccf5c38e231a0926e6acc94da2d9bc94)
    #2 S3fsCurl::HeadRequest(char const*, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, header_nocase_cmp, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&) /home/gaul/work/s3fs-fuse/src/curl.cpp:3172:87 (s3fs+0x58f94e) (BuildId: 9d87e7c5947222380e818fada77a1921171cc970)
    #3 get_object_attribute(char const*, stat*, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, header_nocase_cmp, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >*, bool, bool*, bool) /home/gaul/work/s3fs-fuse/src/s3fs.cpp:385:28 (s3fs+0x54ce97) (BuildId: 9d87e7c5947222380e818fada77a1921171cc970)
    #4 check_object_access(char const*, int, stat*) /home/gaul/work/s3fs-fuse/src/s3fs.cpp:528:23 (s3fs+0x560fa4) (BuildId: 9d87e7c5947222380e818fada77a1921171cc970)
    #5 s3fs_open(char const*, fuse_file_info*) /home/gaul/work/s3fs-fuse/src/s3fs.cpp:2280:14 (s3fs+0x55b046) (BuildId: 9d87e7c5947222380e818fada77a1921171cc970)
    #6 fuse_fs_open <null> (libfuse.so.2+0xd4a8) (BuildId: 3e8eb5181e5720c90efd95a67ed13b04b03076d9)

  Previous read of size 8 at 0x7b0800047000 by thread T60 (mutexes: write M1, write M2):
    #0 memcpy <null> (s3fs+0x4cd636) (BuildId: 9d87e7c5947222380e818fada77a1921171cc970)
    #1 OPENSSL_sk_dup <null> (libcrypto.so.3+0x1ffd89) (BuildId: 1cc34c0eccf5c38e231a0926e6acc94da2d9bc94)
    #2 S3fsCurl::GetObjectRequest(char const*, int, long, long) /home/gaul/work/s3fs-fuse/src/curl.cpp:3533:14 (s3fs+0x591cd4) (BuildId: 9d87e7c5947222380e818fada77a1921171cc970)
    #3 FdEntity::Load(long, long, AutoLock::Type, bool) /home/gaul/work/s3fs-fuse/src/fdcache_entity.cpp:1088:39 (s3fs+0x5e0d68) (BuildId: 9d87e7c5947222380e818fada77a1921171cc970)
    #4 FdEntity::Read(int, char*, long, unsigned long, bool) /home/gaul/work/s3fs-fuse/src/fdcache_entity.cpp:1987:22 (s3fs+0x5e7929) (BuildId: 9d87e7c5947222380e818fada77a1921171cc970)
    #5 s3fs_read(char const*, char*, unsigned long, long, fuse_file_info*) /home/gaul/work/s3fs-fuse/src/s3fs.cpp:2348:24 (s3fs+0x55b757) (BuildId: 9d87e7c5947222380e818fada77a1921171cc970)
    #6 fuse_fs_read_buf <null> (libfuse.so.2+0xf4fe) (BuildId: 3e8eb5181e5720c90efd95a67ed13b04b03076d9)
```

@gaul commented on GitHub (Jul 25, 2022):

Still a problem with 11adf119570225e30afd3e3998bd1dd962dfe43d:

```
Thread 5 (Thread 0x7f4ae2443640 (LWP 536416) "s3fs"):
#0  __futex_abstimed_wait_common64 (private=<optimized out>, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x7f4adc121f20) at futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x7f4adc121f20, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=<optimized out>, cancel=cancel@entry=true) at futex-internal.c:87
#2  0x00007f4ae3089a7f in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x7f4adc121f20, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=<optimized out>) at futex-internal.c:139
#3  0x00007f4ae3094bbf in do_futex_wait (sem=sem@entry=0x7f4adc121f20, abstime=0x0, clockid=0) at /usr/src/debug/glibc-2.35-15.fc36.x86_64/nptl/sem_waitcommon.c:111
#4  0x00007f4ae3094c50 in __new_sem_wait_slow64 (sem=0x7f4adc121f20, abstime=0x0, clockid=0) at /usr/src/debug/glibc-2.35-15.fc36.x86_64/nptl/sem_waitcommon.c:183
#5  0x0000000000443c0a in Semaphore::wait (this=0x7f4adc121f18) at /home/gaul/work/s3fs-fuse/src/psemaphore.h:75
#6  0x000000000047b890 in PseudoFdInfo::WaitAllThreadsExit (this=0x7f4adc121e80) at fdcache_fdinfo.cpp:612
#7  0x000000000047b02e in PseudoFdInfo::ParallelMultipartUploadAll (this=0x7f4adc121e80, path=0x7f4adc119e70 "...", upload_list=std::__cxx11::list = {...}, copy_list=empty std::__cxx11::list, result=@0x7f4ae2442804: 0) at fdcache_fdinfo.cpp:457
#8  0x000000000046f1b4 in FdEntity::RowFlushStreamMultipart (this=0x7f4adc063880, pseudo_obj=0x7f4adc121e80, tpath=0x0) at fdcache_entity.cpp:1867
#9  0x000000000046d4ef in FdEntity::RowFlush (this=0x7f4adc063880, fd=2, tpath=0x0, type=AutoLock::NONE, force_sync=false) at fdcache_entity.cpp:1425
#10 0x000000000041c807 in FdEntity::Flush (this=0x7f4adc063880, fd=2, type=AutoLock::NONE, force_sync=false) at /home/gaul/work/s3fs-fuse/src/fdcache_entity.h:141
#11 0x0000000000411722 in s3fs_flush (_path=0x7f4ad80306c0 "...", fi=0x7f4ae2442b10) at s3fs.cpp:2433
```

@ggtakec commented on GitHub (Jul 30, 2022):

#2008 and #2012 have been merged, and my tests no longer output error reports.

@gaul
Do you still see the errors?


@gaul commented on GitHub (Jul 30, 2022):

This appears to be fixed. Thanks!


@gaul commented on GitHub (Aug 2, 2022):

I still see this symptom:

```
Thread 3 (Thread 0x7f6bbe83f640 (LWP 1833962) "s3fs"):
#0  __futex_abstimed_wait_common64 (private=<optimized out>, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x7f6bb4023f30) at futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x7f6bb4023f30, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=<optimized out>, cancel=cancel@entry=true) at futex-internal.c:87
#2  0x00007f6bc0489a7f in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x7f6bb4023f30, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=<optimized out>) at futex-internal.c:139
#3  0x00007f6bc0494bbf in do_futex_wait (sem=sem@entry=0x7f6bb4023f30, abstime=0x0, clockid=0) at /usr/src/debug/glibc-2.35-15.fc36.x86_64/nptl/sem_waitcommon.c:111
#4  0x00007f6bc0494c50 in __new_sem_wait_slow64 (sem=0x7f6bb4023f30, abstime=0x0, clockid=0) at /usr/src/debug/glibc-2.35-15.fc36.x86_64/nptl/sem_waitcommon.c:183
#5  0x00000000004440ae in Semaphore::wait() ()
#6  0x000000000047c050 in PseudoFdInfo::WaitAllThreadsExit() ()
#7  0x000000000047b7e8 in PseudoFdInfo::ParallelMultipartUploadAll(char const*, std::__cxx11::list<mp_part, std::allocator<mp_part> > const&, std::__cxx11::list<mp_part, std::allocator<mp_part> > const&, int&) ()
#8  0x000000000046f902 in FdEntity::RowFlushStreamMultipart(PseudoFdInfo*, char const*) ()
#9  0x000000000046dc3d in FdEntity::RowFlush(int, char const*, AutoLock::Type, bool) ()
#10 0x000000000041cb7b in FdEntity::Flush(int, AutoLock::Type, bool) ()
#11 0x000000000041193f in s3fs_flush(char const*, fuse_file_info*) ()
#12 0x00007f6bc1306d25 in fuse_flush_common (f=f@entry=0x14fdde0, req=req@entry=0x7f6bb0014390, ino=ino@entry=5, path=0x7f6bb8017c40 "...", fi=fi@entry=0x7f6bbe83eb10) at /usr/src/debug/fuse-2.9.9-14.fc36.x86_64/lib/fuse.c:3906
#13 0x00007f6bc130a024 in fuse_lib_flush (req=0x7f6bb0014390, ino=5, fi=0x7f6bbe83eb10) at /usr/src/debug/fuse-2.9.9-14.fc36.x86_64/lib/fuse.c:3956
#14 0x00007f6bc130b047 in do_flush (req=<optimized out>, nodeid=<optimized out>, inarg=<optimized out>) at /usr/src/debug/fuse-2.9.9-14.fc36.x86_64/lib/fuse_lowlevel.c:1322
#15 0x00007f6bc1316be3 in fuse_ll_process_buf (data=0x152c6d0, buf=0x7f6bbe83ecc0, ch=<optimized out>) at /usr/src/debug/fuse-2.9.9-14.fc36.x86_64/lib/fuse_lowlevel.c:2443
#16 0x00007f6bc1307960 in fuse_do_work (data=0x7f6bb8000b70) at /usr/src/debug/fuse-2.9.9-14.fc36.x86_64/lib/fuse_loop_mt.c:117
#17 0x00007f6bc048ce2d in start_thread (arg=<optimized out>) at pthread_create.c:442
#18 0x00007f6bc05121b0 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
```

@ggtakec commented on GitHub (Aug 2, 2022):

@gaul Thanks for your report.
Could you describe the situation and the steps to reproduce it?
(Does Semaphore::wait never return?)


@gaul commented on GitHub (Aug 3, 2022):

This happens simply when running `mv /local/path /s3/path` on a ~300 MB file.


@ggtakec commented on GitHub (Aug 3, 2022):

@gaul Thanks, I will try to reproduce it.


@ggtakec commented on GitHub (Mar 19, 2023):

This will be closed. If you still have problems, please reopen or post a new issue.
