[GH-ISSUE #2079] S3FS Fio benchmark with nvme cache vs nvme #1052

Closed
opened 2026-03-04 01:50:58 +03:00 by kerem · 4 comments
Owner

Originally created by @itweixiang on GitHub (Dec 20, 2022).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2079

First

I observed two phenomena.

  • S3FS write path

  1. When s3fs writes a file, the data is written to the cache first

  2. After the cache is written, the data is synchronized to the remote object store

  3. The write succeeds only after synchronization completes

  • S3FS read path

  1. If the data is in memory, it is read directly from memory

  2. If the data is in the cache (use_cache), it is read directly from the cache

  3. If the data is only in the remote object store, it is first read into the cache and then returned from the cache
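The caching behavior described above can be observed directly on disk. A minimal sketch, assuming the bucket (`public`), mount point (`/alluxio/public`), and cache directory (`/nvme1/s3-cache`) from the mount command later in this issue; `somefile` is a hypothetical object name:

```shell
# Read a file through the s3fs mount point...
head -c 1M /alluxio/public/somefile > /dev/null

# ...a corresponding cache file should appear under <use_cache dir>/<bucket>/:
ls -lh /nvme1/s3-cache/public/somefile

# s3fs also keeps per-file page/stat state under a hidden .<bucket>.stat/ tree:
ls /nvme1/s3-cache/.public.stat/somefile
```

Checking that the benchmark file actually exists in the cache directory (and is fully populated) before running fio helps rule out cold-cache effects.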

FIO Benchmark

I use an NVMe SSD rated at about 6000 MB/s as the cache. The fio benchmark against the raw NVMe drive is:

root@a2dfa9548f53:/nvme1# fio --ioengine=libaio --bs=2m --direct=1 --thread --time_based --rw=randread --filename=./test-tmp --runtime=300 --numjobs=16 --iodepth=1 --group_reporting --name=randread-dep1 --size=5g
randread-dep1: (g=0): rw=randread, bs=(R) 2048KiB-2048KiB, (W) 2048KiB-2048KiB, (T) 2048KiB-2048KiB, ioengine=libaio, iodepth=1
...
fio-3.1
Starting 16 threads
Jobs: 16 (f=16): [r(16)][100.0%][r=6138MiB/s,w=0KiB/s][r=3069,w=0 IOPS][eta 00m:00s]
randread-dep1: (groupid=0, jobs=16): err= 0: pid=12123: Tue Dec 20 08:33:20 2022
   read: IOPS=3068, BW=6137MiB/s (6435MB/s)(1798GiB/300005msec)
    slat (usec): min=74, max=12609, avg=203.88, stdev=51.32
    clat (usec): min=566, max=12230, avg=5004.69, stdev=897.46
     lat (usec): min=930, max=17459, avg=5209.11, stdev=897.36
    clat percentiles (usec):
     |  1.00th=[ 3163],  5.00th=[ 3654], 10.00th=[ 3916], 20.00th=[ 4228],
     | 30.00th=[ 4490], 40.00th=[ 4752], 50.00th=[ 4948], 60.00th=[ 5145],
     | 70.00th=[ 5407], 80.00th=[ 5735], 90.00th=[ 6194], 95.00th=[ 6521],
     | 99.00th=[ 7373], 99.50th=[ 7701], 99.90th=[ 8586], 99.95th=[ 8979],
     | 99.99th=[ 9896]
   bw (  KiB/s): min=368640, max=417792, per=6.25%, avg=392946.14, stdev=6189.47, samples=9600
   iops        : min=  180, max=  204, avg=191.78, stdev= 3.02, samples=9600
  lat (usec)   : 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=12.26%, 10=87.72%, 20=0.01%
  cpu          : usr=0.19%, sys=4.28%, ctx=927791, majf=0, minf=290002
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=920574,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=6137MiB/s (6435MB/s), 6137MiB/s-6137MiB/s (6435MB/s-6435MB/s), io=1798GiB (1931GB), run=300005-300005msec

Disk stats (read/write):
  nvme1n1: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%

Then I mounted the same NVMe drive as the cache for my s3fs mount. The measured s3fs performance is as follows:

1g

root@a2dfa9548f53:/alluxio/public# fio --ioengine=libaio --bs=2m --direct=1 --thread --time_based --rw=randread --filename=./test-tmp --runtime=300 --numjobs=16 --iodepth=1 --group_reporting --name=randread-dep1 --size=1g
randread-dep1: (g=0): rw=randread, bs=(R) 2048KiB-2048KiB, (W) 2048KiB-2048KiB, (T) 2048KiB-2048KiB, ioengine=libaio, iodepth=1
...
fio-3.1
Starting 16 threads
Jobs: 16 (f=16): [r(16)][100.0%][r=2526MiB/s,w=0KiB/s][r=1263,w=0 IOPS][eta 00m:00s]
randread-dep1: (groupid=0, jobs=16): err= 0: pid=21994: Tue Dec 20 08:44:44 2022
   read: IOPS=1235, BW=2471MiB/s (2591MB/s)(724GiB/300008msec)
    slat (usec): min=7280, max=57085, avg=12936.89, stdev=1524.87
    clat (nsec): min=534, max=145230, avg=3056.25, stdev=1292.10
     lat (usec): min=7288, max=57091, avg=12941.38, stdev=1525.10
    clat percentiles (nsec):
     |  1.00th=[ 1336],  5.00th=[ 1752], 10.00th=[ 1976], 20.00th=[ 2224],
     | 30.00th=[ 2512], 40.00th=[ 2800], 50.00th=[ 3056], 60.00th=[ 3376],
     | 70.00th=[ 3536], 80.00th=[ 3696], 90.00th=[ 3920], 95.00th=[ 4128],
     | 99.00th=[ 4768], 99.50th=[ 7776], 99.90th=[16320], 99.95th=[21120],
     | 99.99th=[43264]
   bw (  KiB/s): min=49152, max=172376, per=6.25%, avg=158234.01, stdev=8671.84, samples=9600
   iops        : min=   24, max=   84, avg=77.22, stdev= 4.23, samples=9600
  lat (nsec)   : 750=0.10%, 1000=0.22%
  lat (usec)   : 2=10.56%, 4=80.99%, 10=7.82%, 20=0.26%, 50=0.05%
  lat (usec)   : 100=0.01%, 250=0.01%
  cpu          : usr=0.12%, sys=3.27%, ctx=5938177, majf=0, minf=828708
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=370723,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=2471MiB/s (2591MB/s), 2471MiB/s-2471MiB/s (2591MB/s-2591MB/s), io=724GiB (777GB), run=300008-300008msec

5g

root@a2dfa9548f53:/alluxio/public# fio --ioengine=libaio --bs=2m --direct=1 --thread --time_based --rw=randread --filename=./test-tmp --runtime=300 --numjobs=16 --iodepth=1 --group_reporting --name=randread-dep1 --size=5g
randread-dep1: (g=0): rw=randread, bs=(R) 2048KiB-2048KiB, (W) 2048KiB-2048KiB, (T) 2048KiB-2048KiB, ioengine=libaio, iodepth=1
...
fio-3.1
Starting 16 threads
Jobs: 16 (f=16): [r(16)][100.0%][r=712MiB/s,w=0KiB/s][r=356,w=0 IOPS][eta 00m:00s] 
randread-dep1: (groupid=0, jobs=16): err= 0: pid=28270: Tue Dec 20 08:50:14 2022
   read: IOPS=439, BW=880MiB/s (922MB/s)(258GiB/300019msec)
    slat (usec): min=2392, max=91957, avg=36360.16, stdev=11913.52
    clat (nsec): min=527, max=1676.0k, avg=4042.56, stdev=4839.62
     lat (usec): min=2394, max=91963, avg=36366.03, stdev=11913.85
    clat percentiles (nsec):
     |  1.00th=[ 1688],  5.00th=[ 2480], 10.00th=[ 3120], 20.00th=[ 3472],
     | 30.00th=[ 3696], 40.00th=[ 3856], 50.00th=[ 4016], 60.00th=[ 4192],
     | 70.00th=[ 4320], 80.00th=[ 4512], 90.00th=[ 4768], 95.00th=[ 5024],
     | 99.00th=[ 6048], 99.50th=[10816], 99.90th=[20864], 99.95th=[25984],
     | 99.99th=[56064]
   bw (  KiB/s): min=32768, max=143360, per=6.25%, avg=56296.90, stdev=16032.05, samples=9600
   iops        : min=   16, max=   70, avg=27.46, stdev= 7.83, samples=9600
  lat (nsec)   : 750=0.07%, 1000=0.18%
  lat (usec)   : 2=1.89%, 4=46.51%, 10=50.78%, 20=0.46%, 50=0.10%
  lat (usec)   : 100=0.01%
  lat (msec)   : 2=0.01%
  cpu          : usr=0.06%, sys=1.74%, ctx=2116179, majf=1, minf=273420
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=131959,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=880MiB/s (922MB/s), 880MiB/s-880MiB/s (922MB/s-922MB/s), io=258GiB (277GB), run=300019-300019msec

With the NVMe drive as a cache, s3fs delivers only about one-third of the raw NVMe throughput at size=1g (2471 MiB/s vs 6137 MiB/s) and about one-seventh at size=5g (880 MiB/s).

A potentially worse observation: my Linux machine has 128 GB of memory with about 100 GB free, which means either the 1 GB or the 5 GB file should fit entirely in the page cache, so fio may be reading directly from memory instead of from the NVMe cache.

Should reading from memory really be only about 2000 MB/s?
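To separate page-cache hits from reads of the NVMe cache files, one approach is to drop the kernel page cache between runs (root only). Note also that `--direct=1` on a FUSE mount may not bypass caching the same way it does on a block filesystem. A minimal sketch, reusing the fio job from this issue with a shorter runtime:

```shell
# Flush dirty pages, then drop the clean page cache, dentries and inodes.
sync
echo 3 > /proc/sys/vm/drop_caches

# Re-run the same fio job immediately; if throughput falls sharply, the
# earlier numbers were served from the kernel page cache rather than from
# the NVMe cache files.
fio --ioengine=libaio --bs=2m --direct=1 --thread --time_based --rw=randread \
    --filename=./test-tmp --runtime=60 --numjobs=16 --iodepth=1 \
    --group_reporting --name=randread-dep1 --size=1g
```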

ENV

  • uname -r
5.8.0-63-generic
  • s3fs --v
Amazon Simple Storage Service File System V1.86 (commit:unknown) with GnuTLS(gcrypt)
Copyright (C) 2010 Randy Rizun <rrizun@gmail.com>
License GPL2: GNU GPL version 2 <https://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
  • s3fs mount
nohup s3fs public /alluxio/public -o url=http://ip:8331 -o endpoint=cn -f -o passwd_file=/root/.passwd-s3fs -o use_path_request_style -o parallel_count=1000 -o use_cache=/nvme1/s3-cache &

Thanks

Thank you for your work on s3fs, which lets us use it better. My description may not be complete or clear, or this may be a performance problem caused by options I have not set correctly. I look forward to your reply and comments.

kerem closed this issue 2026-03-04 01:50:58 +03:00

@itweixiang commented on GitHub (Jan 3, 2023):

Two weeks passed and there was no response


@ggtakec commented on GitHub (Jan 11, 2023):

@itweixiang I'm sorry for my late reply.
s3fs uploads and downloads objects on S3, and the use_cache option sits in the middle, creating a cache file on your local disk.

Accessing this local file (the cache) is a little more complicated.
The actual reading and writing of data is done directly against the local file.
For example, when reading a file (an object on the S3 server), s3fs first checks whether the object has been updated, and accesses the local file only if there has been no change.

If the file is cached in its entirety, its stat information has not changed, and the entry is still present in s3fs's stat cache, the local file is read without any communication with the S3 server.
(I think this gives the best performance.)

Currently s3fs has several options for caching (file data and stat information) and for the upload method (multipart, stream, etc.).
I think the combination of these will change the performance depending on your usage.

Even with a high-speed NVMe SSD, you may not get the expected numbers if processing other than accessing the SSD (used as the cache) takes most of the time.

It's a good idea to try a few combinations of options and see what works best for you.
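Following the suggestion above, a hypothetical starting point for A/B testing against the mount command in this issue. The option names (`parallel_count`, `multipart_size`, `stat_cache_expire`, `max_stat_cache_size`) are real s3fs options, but the values are illustrative guesses, not tuned recommendations from the maintainers:

```shell
# Candidate option combination to benchmark against the original mount.
# parallel_count=1000 in the original command is likely far too high;
# a smaller value avoids flooding the server with concurrent requests.
s3fs public /alluxio/public \
    -o url=http://ip:8331 -o endpoint=cn \
    -o passwd_file=/root/.passwd-s3fs \
    -o use_path_request_style \
    -o use_cache=/nvme1/s3-cache \
    -o parallel_count=32 \
    -o multipart_size=64 \
    -o stat_cache_expire=900 \
    -o max_stat_cache_size=200000
```

Changing one option at a time and re-running the same fio job makes it easier to attribute any throughput change.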


@itweixiang commented on GitHub (Jan 12, 2023):

@ggtakec Thanks for your reply.

In the meantime, I have also been wondering whether the overhead of the metadata check (verifying whether the file has changed) is what limits the read speed.

I am trying your suggestions to see whether they give better performance. Thank you again for your reply.


@ggtakec commented on GitHub (Jan 12, 2023):

@itweixiang Thanks for your kindness; we will also continue working on s3fs for better performance.
