mirror of
https://github.com/axllent/mailpit.git
synced 2026-04-26 00:35:51 +03:00
[GH-ISSUE #3] GUI not loading all messages #3
Originally created by @sandervanzutphen on GitHub (Aug 4, 2022).
Original GitHub issue: https://github.com/axllent/mailpit/issues/3
When opening the webclient not all messages are displayed.
The behavior seems very random: each refresh shows a different set of messages, and only on rare occasions are all messages displayed.
First pageload: (screenshot)

Second pageload: (screenshot)

Third pageload: (screenshot)
v0.0.7, using the -d parameter for data storage.
@axllent commented on GitHub (Aug 4, 2022):
Thanks for the bug report. I have done plenty of tests and never encountered this, so can you please tell me what browser and OS you are using? Does the total number of displayed messages get fixed when you change the 50 (top right) to something else?
@axllent commented on GitHub (Aug 5, 2022):
Ahh, I think I possibly found the bug - when an email doesn't get stored correctly (eg: too large for the in-memory storage [1MB]) Mailpit rolls back the insert (there is a summary table and a raw email table). When rolling back it would always deduct the values from the "stats" (total number of messages, unread count etc), even when rolling back wasn't necessary. Is there a chance that one or two of your test emails were larger than 1MB and that you are using the memory storage (ie: not specifying a data directory) - OR could 1 or 2 of your test emails have been invalid emails (ie: not accepted by the SMTP server)?
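The rollback bug described above can be sketched roughly like this (a hypothetical Go simplification, not Mailpit's actual storage code): the in-memory stats must only be left untouched when the insert fails before they were ever incremented.

```go
package main

import "fmt"

// Stats holds the in-memory counters the web UI displays.
// (Illustrative names; Mailpit's real structures differ.)
type Stats struct{ Total, Unread int }

// insertMessage simulates the two-table insert. The fix: when the
// insert fails and is rolled back, the stats are NOT decremented,
// because they were never incremented for this message.
func insertMessage(s *Stats, tooLarge bool) error {
	if tooLarge {
		// Insert failed (eg: too large for in-memory storage);
		// nothing was committed, so leave the stats alone.
		return fmt.Errorf("message too large for in-memory storage")
	}
	s.Total++
	s.Unread++
	return nil
}

func main() {
	s := &Stats{}
	_ = insertMessage(s, false) // succeeds, counters go to 1
	_ = insertMessage(s, true)  // fails, counters stay at 1
	fmt.Println(s.Total, s.Unread)
}
```

With the buggy behaviour (always decrementing on rollback), the counters would drift below the real message count, which matches the mismatched totals seen in this issue.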
I will release a new version later today which hopefully addresses the issue you are having. If, after the update (to the release I haven't made yet), you still continue to get this issue, you can have a look at http://0.0.0.0:8025/api/catchall/messages (adjust for your IP/port), which should give you the JSON response that is generating those values in the web UI, in particular the first few values in that response, eg: "total":9,"unread":9,"count":9,"start":0, - I would be very interested to see what that says, and whether those values change when you refresh the link.
@sandervanzutphen commented on GitHub (Aug 5, 2022):
GUI is Chrome on Win10.
Mailpit is on Ubuntu 20.04
Updated to 0.0.8, got some errors on the datastore, so I deleted the entire datastore dir.
Sent two test mails using smtper.net.
GUI AJAX result from api/catchall/messages:
{"total":2,"unread":2,"count":0,"start":0,"items":[]}
Request: (screenshot)
Response: (screenshot)
@axllent commented on GitHub (Aug 5, 2022):
That is very strange. Does the Mailpit server report any issues?
@sandervanzutphen commented on GitHub (Aug 5, 2022):
On startup it crashes again, claiming a memory shortage, but 2GB of RAM is available.
@sandervanzutphen commented on GitHub (Aug 5, 2022):
Fresh datastore, happens again. No errors on the server.
@axllent commented on GitHub (Aug 5, 2022):
Ahh, that starts to make more sense. I believe the 2GB is not RAM (despite the message), but rather a physical file (in the datastore). What is /data/mailpit/? Is it a remote (nfs?) mount, and does that mount have > 2GB? I actually wrote a wiki article on this very thing earlier today. It sounds to me like the physical database cannot write reliably to the folder, so it becomes corrupt - which would explain the inconsistencies.
@sandervanzutphen commented on GitHub (Aug 5, 2022):
/data/mailpit is a regular disk, with plenty of free disk space.
@axllent commented on GitHub (Aug 5, 2022):
FYI: CloverDB (actually BadgerDB) creates a 2GB file in the folder (a 00000*.vlog file) amongst the other files - this gets removed again when Mailpit exits normally. Is your Ubuntu server/machine maybe a 32bit machine? What filesystem is /data/mailpit on?
@sandervanzutphen commented on GitHub (Aug 5, 2022):
@axllent commented on GitHub (Aug 5, 2022):
I am also running Ubuntu 20 on amd64, and I have extensively tested on this hardware, as well as via Github Actions (virtual hardware in the cloud) and I cannot reproduce your error (inserting / deleting / searching 1000 messages in both RAM and physical). I found a few references of similar errors via Google, but they don't seem appropriate given your storage and architecture, so I really have no idea at this stage.
Can you try running with -d /tmp/mailpit? Can you also check dmesg to see if there are any kernel errors being reported (eg: hardware issues) shortly after getting the error? I am just trying to narrow down why you are getting these problems.
@sandervanzutphen commented on GitHub (Aug 5, 2022):
{"total":2,"unread":2,"count":0,"start":0,"items":[]}
@axllent commented on GitHub (Aug 5, 2022):
@ostafen Sorry to pull you into this, but have you come across this issue before, or something similar? I don't think this has anything to do with Mailpit, but rather something to do with either CloverDB or BadgerDB (occurring in both in RAM & physical storage). I suspect there is an issue with the memory/data mapping corruption which badger only discovers/displays on save/restart.
From what I can tell, @sandervanzutphen is storing two emails (in the later responses), clover returns no error (they saved successfully), but then they aren't found in the database (total and unread in the JSON are values stored in memory for quick lookup, and in this case should match count and the items length here because it's the first paginated page of only 2 results). Those in-memory values are only set after successful writes via CloverDB to two catalogues (email summary / email data). count is the len(items), and items should be the two email results. The very first post in this issue describes inconsistencies, where CloverDB is returning different results on page load (2, then 3, then 2 again) - so I am suspecting data corruption.
Restarting results in: (error output screenshot)
Do you have any ideas? PS: this is using v2 of CloverDB.
@ostafen commented on GitHub (Aug 5, 2022):
@axllent, do you ensure to call db.Close() every time the server shuts down or gets killed?
@axllent commented on GitHub (Aug 5, 2022):
@ostafen Yes, I have a listener which closes the database. The issue of missing results however also happens when running in-memory storage though.
@ostafen commented on GitHub (Aug 5, 2022):
Is the available RAM 2GB? Maybe badger is trying to allocate too much memory, so it gets the cannot allocate memory error. It would be useful to be able to reproduce the problem. @sandervanzutphen, how much RAM do you have in your VM? Which golang version?
@sandervanzutphen, can you download the clover repo at this link: https://github.com/ostafen/clover and try to run tests?
If the error is due to clover, then I think that some tests should fail.
@axllent commented on GitHub (Aug 5, 2022):
FYI: @sandervanzutphen To test you will need "go" installed (sudo apt install golang-go ca-certificates), then check out the clover repo and, in the clover folder, run go test ./...
@sandervanzutphen commented on GitHub (Aug 5, 2022):
@ostafen
@axllent commented on GitHub (Aug 5, 2022):
@sandervanzutphen That seemed to pass all tests. Can you please run the storage tests for Mailpit (check out Mailpit, and run go test ./storage -v in the mailpit directory)?
@sandervanzutphen commented on GitHub (Aug 5, 2022):
@axllent commented on GitHub (Aug 5, 2022):
Oh damn, I forgot you can't test Mailpit with your version of Go (the default is v1.13 on Ubuntu; this project requires v1.18). Thanks anyway for trying. OK, at this stage I still have absolutely no idea what is going wrong. I'm really sorry, but it's past midnight so I'll have to think about it more tomorrow. Maybe ostafen has some other ideas in the meantime, but right now I need some sleep :)
@ostafen commented on GitHub (Aug 5, 2022):
@axllent, @sandervanzutphen, unfortunately I'm currently not able to reproduce this problem on my machine. Emails are displayed correctly.
@axllent commented on GitHub (Aug 7, 2022):
@sandervanzutphen I also cannot reproduce this issue at all. I have tried multiple things today, including sending well over 300,000 emails (over 10GB of data) several times to Mailpit (running the release binary), and I haven't had any issues at all, even after a restart. I'm almost out of ideas here. What method are you using to send those emails to Mailpit?
@ostafen commented on GitHub (Aug 7, 2022):
@axllent, @sandervanzutphen Also tested on a VM with 2GB of RAM, running ubuntu 20. All is working fine.
@sandervanzutphen If you could give us specific instructions on setting up a VM environment identical to yours, it would be of help.
@axllent commented on GitHub (Aug 30, 2022):
@sandervanzutphen I have just released a beta version which I believe will solve your problem, as the new application/database memory requirements are now a fraction of what they used to be (documented in #10). Please let me know if this solves your issue.
@ostafen I have to say that I feel really guilty switching to SQLite, especially because of all the help and support you have provided me with CloverDB (thank you so much again!). I still think CloverDB is a really awesome idea, however in the end BadgerDB turned out to be far more hungry for RAM than I was expecting, which isn't a good fit for Mailpit. We use Mailpit for email testing in different self-contained development environments (docker), which, when running 6 or 8 containers at the same time, just wasn't feasible considering Badger requires 50-125 times more RAM than SQLite. An active instance of Mailpit now uses between 8-20MB of RAM in total.
@ostafen commented on GitHub (Oct 4, 2022):
@axllent Sorry for the late response, but I've been very busy over the last few months. I completely agree with your choice. Your experience also convinced me that BadgerDB is not the perfect storage backend for clover. It is not the first time that a clover user has complained about db size and memory usage. Considering that clover has been developed for simple embedded projects, this is a severe drawback. This is why I'm going to switch to bbolt, which looks like a better option. I would be happy if you'll consider switching back to clover in future, once these issues are fixed :)
@axllent commented on GitHub (Oct 5, 2022):
@ostafen thanks for the reply (I thought you were ignoring me :)). I agree with your thinking - CloverDB as a lightweight embedded database isn't lightweight at all due to BadgerDB's requirements. Assuming that bbolt is in fact lightweight, I think something like that would be a very good move for CloverDB.
Whilst I would love to say I'll switch to the new CloverDB (once released), I can't and I won't unfortunately. One major database change is already one more than I like, and SQLite is working exceptionally well for this purpose, and provides additional features which are now part of the core search / filtering in Mailpit. That doesn't mean I won't seriously consider using CloverDB for future projects though (I develop quite a bit of software)!
Just a thought, but depending on your target audience / typical database size, you may also want to at least look at whether SQLite could potentially serve as your backend database instead of bbolt (I'm using this native Go version of SQLite in Mailpit). It is not quite as full-featured as the official SQLite, however I do not think that CloverDB would require any of that missing functionality anyway. It is pretty fast and very lightweight, and CloverDB just acts as a layer on top of it anyway. It may not have the same OS/arch support as bbolt (but all the main ones are covered). If you did decide to try it and have questions (that I can help with), then you can just reach out to me directly via email (axllent @ gmail) - or just tag me in something.
I'll close this issue now as I'm not getting a response from @sandervanzutphen (I think he's moved on / given up), and this conversation is leading far from the original issue, and I believe it was fully resolved quite a while ago ;-)