[GH-ISSUE #366] random crashes in s3fs #188

Closed
opened 2026-03-04 01:43:05 +03:00 by kerem · 12 comments
Owner

Originally created by @romange on GitHub (Feb 23, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/366

Hi,
We get random crashes with this driver. When it happens we see lines like the one below in the syslog.
We use version 1.79.

Thanks
`08:55:43 ip-10-0-4-29 kernel: [3686951.802241] traps: s3fs[2952] general protection ip:441e7a sp:7f69627fb580 error:0 in s3fs[400000+63000]`

kerem closed this issue 2026-03-04 01:43:06 +03:00

@ggtakec commented on GitHub (Mar 6, 2016):

Hi, romange
Did you use the latest code from the master branch?
(Some bugs were fixed there.)

And could you run s3fs with the dbglevel option to log more detail?

Regards,
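For reference, the kind of invocation being asked for might look like the following. This is only a sketch: the bucket name and mount point are placeholders, and `-f` keeps s3fs in the foreground so log output goes to stderr instead of syslog.

```shell
# Placeholder bucket and mount point; adjust to your setup.
# dbglevel=dbg enables the most verbose s3fs logging,
# and curldbg additionally dumps libcurl request/response details.
s3fs mybucket /mnt/s3 -f \
    -o dbglevel=dbg \
    -o curldbg
```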


@romange commented on GitHub (Mar 6, 2016):

We used 1.79, which is the latest release available. Eventually we decided to give up on s3fs and use a cron script that manually uploads data to S3.


@ggtakec commented on GitHub (Mar 6, 2016):

Hi, romange
If you can reproduce it, please tell me about your environment, such as the OS, object size, etc.
If I can reproduce the defect, I will be able to find the cause of the bug and fix it.

Even if I cannot, I would be happy if you could test the new s3fs after we release the next version.

Regards,


@romange commented on GitHub (Mar 7, 2016):

It's Ubuntu 14.04; each file is around 50-100 KB, usually one file per hour. In addition there are sporadic reads of dozens of files. It was glued together with an SFTP service - we built an SFTP-to-S3 bridge for our clients to upload data to S3. They uploaded directly to the mounted s3fs directory.

Crashes were sporadic - it could be weeks or days until the next one. We replaced it with a cron script that just reads the data, uploads it to S3, and then removes it from the instance's local disk.
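The cron-based replacement described above could be sketched roughly like this, assuming the AWS CLI is available; the bucket, source directory, and helper name are hypothetical:

```shell
# Hypothetical sketch of the cron-based workaround: upload local files
# to S3 one by one, then delete the local copy only when its upload
# succeeded.
upload_and_clean() {
    src=$1
    dst=$2
    for f in "$src"/*; do
        # Skip non-regular files (and the unexpanded glob for an
        # empty directory).
        [ -f "$f" ] || continue
        if aws s3 cp "$f" "$dst/"; then
            rm -- "$f"
        fi
    done
}

# Example cron-driven invocation (paths are placeholders):
# upload_and_clean /var/sftp/uploads s3://my-bucket/uploads
```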


@jdcallet commented on GitHub (Feb 19, 2018):

Hi @romange @ggtakec
Any additional information about this?
I'm stuck, always getting this same error.


@romange commented on GitHub (Feb 19, 2018):

@jdcallet No, we moved on to other solutions.


@anovikov1984 commented on GitHub (Mar 12, 2018):

@romange which other solutions? :) i am facing same problem


@romange commented on GitHub (Mar 12, 2018):

Well, we used goofys for some time, but for heavy-load use cases we stopped accessing files via FUSE. We just copy them to local disk and then process them.


@mnd999 commented on GitHub (Apr 25, 2018):

Getting this too: `s3fs[23121] general protection ip:7fb03c656df9 sp:7fb00effbbe0 error:0 in libcrypto.so.1.0.0`


@gaul commented on GitHub (Feb 2, 2019):

Could you try running s3fs under Valgrind which may reveal the source of these symptoms?
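A hedged sketch of such a Valgrind run follows; the bucket name and mount point are placeholders, and `-f` keeps s3fs in the foreground so Memcheck's report is visible on stderr:

```shell
# Run s3fs under Valgrind's Memcheck tool so invalid reads/writes and
# use-after-free errors are reported before any crash.
valgrind --tool=memcheck --leak-check=full --track-origins=yes \
    s3fs mybucket /mnt/s3 -f -o dbglevel=info
```

Note that Valgrind slows execution considerably, so an intermittent crash that takes days to appear may take even longer to reproduce this way.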


@gaul commented on GitHub (Mar 17, 2019):

I suspect that f53503438c resolves this issue. Could you test with 1.85 and report your results?


@gaul commented on GitHub (Apr 9, 2019):

Closing due to inactivity. Please reopen if symptoms persist.
