[GH-ISSUE #446] Add a chapter dedicated to uwsgi integration #431

Closed
opened 2026-02-27 11:11:44 +03:00 by kerem · 15 comments
Owner

Originally created by @tonioo on GitHub (Dec 4, 2013).
Original GitHub issue: https://github.com/modoboa/modoboa/issues/446

Originally assigned to: @tonioo on GitHub.

Originally created by Louis-Dominique Dubeau on 2013-08-11T21:05:45Z

Modoboa version 1.0.0

Steps to reproduce:

  1. Go to the quarantine.
  2. Select a message to delete.
  3. Click the "Delete" button and accept the deletion when the popup comes up.
  4. Click "Quarantine" at the top to refresh the list.

Step 4+n (where n varies): after some number of refreshes, the deleted message finally stops appearing in the list.

Actual results

After step 3, but before step 4 the message is still visible in the list of messages.

After step 4, the message is likely to still be present in the list of messages, even though it should have been deleted.

Expected results

After step 3, the message should immediately disappear from the list.

After step 4, the message should not reappear on list refreshes.

Observations

The issue at step 4 is the one that worries me most because I've seen it happen the other way around: modoboa failing to notice the presence of new messages in the quarantine. When I was moving to 1.0.0 (from 0.9.4) I spent a bit of time looking at the database itself through the mysql shell. The listing of quarantined messages I got in modoboa was shorter than the list I saw in amavis' database by a few dozen messages. I set it aside because I had other things to do but I would check from time to time to see whether modoboa noticed the database change. It took more than an hour before modoboa finally caught up to the database. I chalked it up to an upgrade fluke but ever since I've been seeing it recur as I've described above (i.e. modoboa not noticing deletions from the database).

Wild hypothesis: the one major database factor I've seen change in 1.0.0 is the presence of reversion. As I understand it, it is not currently set up in modoboa to affect the amavis database, but at the same time it does hook into the site-wide saving mechanisms. I noted that https://github.com/etianen/django-reversion/wiki/Low-level-API states:

"It is highly recommended that you use TransactionMiddleware in conjunction with RevisionMiddleware to ensure data integrity."

But I do not see TransactionMiddleware mentioned in http://modoboa.readthedocs.org/en/latest/getting_started/upgrade.html nor is it present in modoboa/core/templates/settings.py. Could the lack of TransactionMiddleware be the problem?
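For reference, a minimal sketch of what the fix being asked about might look like in settings.py. This is an illustration, not Modoboa's actual file: the middleware paths are valid for Django ≤ 1.5 (where TransactionMiddleware still exists), and the ordering follows django-reversion's recommendation that the transaction middleware wrap the revision middleware.

```python
# Hypothetical settings.py excerpt: TransactionMiddleware placed before
# RevisionMiddleware so each revision is written inside a transaction.
# (Paths valid for Django <= 1.5, where TransactionMiddleware still exists.)
MIDDLEWARE_CLASSES = (
    'django.middleware.common.CommonMiddleware',
    'django.middleware.transaction.TransactionMiddleware',
    'reversion.middleware.RevisionMiddleware',
)
```

Because middleware `process_response` hooks run in reverse order, listing TransactionMiddleware first means the commit happens after the revision has been recorded.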

@tonioo commented on GitHub (Dec 4, 2013):

Posted by Antoine Nguyen on 2013-08-31T15:43:48Z

Hi Louis,

the TransactionMiddleware is clearly missing from modoboa's default configuration; I'll fix it. Unfortunately, I don't think it is the origin of your issue. From Django's documentation:

"The TransactionMiddleware only affects the database aliased as “default” within your DATABASES setting. If you are using multiple databases and want transaction control over databases other than “default”, you will need to write your own transaction middleware."

So it seems the "amavis" database should not be affected... This issue looks more like a cache issue. Do you use one?


@tonioo commented on GitHub (Dec 4, 2013):

Posted by Louis-Dominique Dubeau on 2013-08-31T19:14:25Z

Ah, yes, I forgot that "default" is the only DB affected. Well, that's why I qualified my hypothesis as a "wild" one.

A cache would be a good candidate for being the source of trouble. However, except for those changes that must be made for a specific site, my modoboa settings are the default ones. I've not turned on anything more than what is strictly required.

I should also add that I've definitely not noticed the behavior I'm reporting with modoboa versions prior to 1.0.0. And then as soon as I have 1.0.0 installed, I notice the odd behavior.


@tonioo commented on GitHub (Dec 4, 2013):

Posted by Antoine Nguyen on 2013-09-02T09:09:30Z

What's your database engine?


@tonioo commented on GitHub (Dec 4, 2013):

Posted by Louis-Dominique Dubeau on 2013-09-02T22:43:52Z

It is mysql. The .deb version for it is 5.5.32-0ubuntu0.12.04.1


@tonioo commented on GitHub (Dec 4, 2013):

Posted by Antoine Nguyen on 2013-09-03T07:25:24Z

I use the same version on my server and I've never encountered this issue... Have you tried to add the TransactionMiddleware to your configuration?


@tonioo commented on GitHub (Dec 4, 2013):

Posted by Louis-Dominique Dubeau on 2013-09-09T01:28:03Z

Antoine Nguyen wrote:

> I use the same version on my server and I've never encountered this issue... Have you tried to add the TransactionMiddleware to your configuration?

Yeah, I guess I should try that, even though it should not affect the amavis database... I would think.

Here's something. I've modified modoboa.extensions.amavis.sql_listing.SQLWrapper.get_mails so that it ends with:

        ret = Msgrcpt.objects.filter(q).values("mail_id")
        print len(ret)
        return ret

And I've deleted one email. My uwsgi setup starts 4 workers. The log I get shows that one worker process consistently reads 914 emails in the quarantine while another worker process reads 913. So it does seem that the issue is with different processes seeing different things.
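For completeness, a hypothetical variant of that debugging print which tags each count with the worker's PID (the function name is mine; only `os.getpid` is standard), making the per-process divergence visible side by side in the uwsgi log:

```python
import os

def log_quarantine_count(count):
    # Tag each reading with the worker's PID so divergent per-process
    # snapshots (914 vs 913 here) are easy to spot in the log.
    line = "worker %d sees %d quarantined messages" % (os.getpid(), count)
    print(line)
    return line
```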


@tonioo commented on GitHub (Dec 4, 2013):

Posted by Louis-Dominique Dubeau on 2013-09-09T01:36:39Z

Turning on the transaction middleware had no effect, as expected.


@tonioo commented on GitHub (Dec 4, 2013):

Posted by Louis-Dominique Dubeau on 2013-09-09T14:24:00Z

I've investigated mysql transactions a bit and modified the snippet I mentioned earlier to the following:

        print len(ret)
        from django.db import connections
        cur = connections['amavis'].cursor()
        cur.execute("show global variables like 'autocommit'")
        print cur.fetchone()
        cur.execute("show session variables like 'autocommit'")
        print cur.fetchone()
        cur.execute("show variables like 'tx_isolation'")
        print cur.fetchone()
        print "managed?", connections['amavis'].is_managed()
        connections['amavis'].connection.commit()

The output of the variable queries is:

(u'autocommit', u'ON')
(u'autocommit', u'OFF')
(u'tx_isolation', u'REPEATABLE-READ')

So we know where we stand: autocommit is turned off at the session level (which is normal), and our isolation level is REPEATABLE-READ. Another thing: with the manual commit above I'm no longer able to reproduce the problem.

Here's what I'm currently thinking. Recall that connections are not shared among threads and transactions are not shared among connections.

  1. Thread A executes _listing() so it issues SELECT queries to get the list of messages. Per Django's default behavior no commits are issued at any time because Django issues commits behind the scenes only on operations that change the database.
  2. Thread B executes the delete() view so it deletes one or more messages and commits because Django commits behind the scenes on operations that change the database.
  3. Thread A executes _listing() again. The SELECTs are in the same transaction as step 1 because no commit was issued. Per REPEATABLE-READ isolation, these selects return the exact same data as in step 1.

If thread B happens to execute _listing() after it deleted, it will see the deletion it performed. If another thread C happens to execute _listing() for the first time after thread B executed the deletion, it will also see the correct data.

The addition of `connections['amavis'].connection.commit()` modifies the scenario above so that in step 1, thread A commits to the database, which means that in step 3 it is in a new transaction and its SELECTs read the database from scratch.

MySQL's documentation on the isolation levels:

https://dev.mysql.com/doc/refman/5.0/en/set-transaction.html


@tonioo commented on GitHub (Dec 4, 2013):

Posted by Antoine Nguyen on 2013-09-09T15:23:05Z

I was thinking about the same origin :) (http://stackoverflow.com/questions/1886909/how-to-disable-django-query-cache)

But I thought those views were using the autocommit feature by default... We should take a deeper look at Django's API (https://docs.djangoproject.com/en/1.5/topics/db/transactions/).


@tonioo commented on GitHub (Dec 4, 2013):

Posted by Louis-Dominique Dubeau on 2013-09-09T16:07:44Z

I can't reproduce the problem with gunicorn so I believe the problem is this:

https://docs.djangoproject.com/en/dev/howto/deployment/wsgi/uwsgi/

> Warning
>
> Some distributions, including Debian and Ubuntu, ship an outdated version of uWSGI that does not conform to the WSGI specification. Versions prior to 1.2.6 do not call close on the response object after handling a request. In those cases the request_finished signal isn't sent. This can result in idle connections to database and memcache servers.

Ubuntu 12.04.3 LTS ships uwsgi 1.0.3!

I'm guessing that Django does some cleanup work after the WSGI server calls close. By the way, this little warning shows up only in the documentation for Django version 1.5 and later. I believe I was at 1.4 when I first installed modoboa.

So I've just upgraded to uwsgi 1.9.15 and cannot reproduce the problem anymore. This confirms that uwsgi was the issue.

The puzzling thing is why I did not notice this problem earlier. Maybe because I was using the quarantine in a very different way than I do now that I have the score sorting option. Maybe my usage pattern was enough to tickle Django or uwsgi into doing the necessary cleanup even though uwsgi was buggy. :-/

I could add a uwsgi section to the docs to spare someone else this headache.
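Such a section might include a minimal configuration sketch along these lines (the module path and socket are illustrative; the essential point is running a uWSGI recent enough, >= 1.2.6, to call close() so request_finished fires):

```ini
; Illustrative uwsgi configuration for modoboa; values are examples only.
; Requires uWSGI >= 1.2.6 so close() is called and Django's per-request
; cleanup (request_finished) actually runs.
[uwsgi]
module = modoboa_server.wsgi:application
master = true
processes = 4
socket = 127.0.0.1:3031
```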

By the way, using gunicorn I was not able to start it using the command in the documentation:

gunicorn -c gunicorn.conf.py

I had to do:

gunicorn -c gunicorn.conf.py modoboa_server.wsgi:application

@tonioo commented on GitHub (Dec 4, 2013):

Posted by Antoine Nguyen on 2013-09-09T19:12:03Z

I think updating the documentation is a good idea. I'm glad to hear the problem does not come from modoboa :)

I'll update the documentation about gunicorn. What about this ticket? Can I close it?


@tonioo commented on GitHub (Dec 4, 2013):

Posted by Louis-Dominique Dubeau on 2013-09-09T19:48:35Z

> I'm glad to hear the problem does not come from modoboa :)

And I'm glad I did not break something.

I figured I'd close the ticket once I add the uwsgi notes to modoboa's doc. Or am I prevented from closing tickets? (I have not tried.) I probably won't be able to get to the docs before the weekend comes.

Here. I'm going to try to set the ticket's properties to something sensible, but you can close it if you wish.


@tonioo commented on GitHub (Dec 4, 2013):

Posted by Antoine Nguyen on 2013-09-09T19:52:59Z

Just close it when the documentation is done, we'll rename it so people will know how you discovered the issue :)


@tonioo commented on GitHub (Dec 4, 2013):

Posted by Louis-Dominique Dubeau on 2013-09-15T00:16:04Z

I've been modifying the documentation to add a section on uwsgi as discussed, and I've noticed that the configuration suggested for nginx+gunicorn uses the `listen 443;` directive. 443 is for HTTPS, but SSL is not turned on in the sample config given in the documentation. I've even tested it on a machine here just in case I missed something, and indeed setting the port to 443 without turning SSL on results in a machine that accepts connections on the HTTPS port but does not use SSL.

So is that a mistake? I can fix it but I want to make sure I'm not missing something. (And if it is a mistake, then `proxy_set_header X-Forwarded-Protocol ssl;` should be dropped too?)


@tonioo commented on GitHub (Dec 4, 2013):

Posted by Antoine Nguyen on 2013-09-15T07:53:11Z

Louis-Dominique Dubeau wrote:

> I've been modifying the documentation to add a section on uwsgi as discussed, and I've noticed that the configuration suggested for nginx+gunicorn uses the `listen 443;` directive. 443 is for HTTPS, but SSL is not turned on in the sample config given in the documentation. I've even tested it on a machine here just in case I missed something, and indeed setting the port to 443 without turning SSL on results in a machine that accepts connections on the HTTPS port but does not use SSL.
>
> So is that a mistake? I can fix it but I want to make sure I'm not missing something. (And if it is a mistake, then `proxy_set_header X-Forwarded-Protocol ssl;` should be dropped too?)

It is a mistake, you can fix it (please) :)
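For reference, a hedged sketch of what the corrected server block might look like once SSL is actually enabled (certificate paths, server name, and upstream address are placeholders, not the documentation's real values):

```nginx
server {
    # Listening on 443 only makes sense with SSL enabled.
    listen 443 ssl;
    server_name mail.example.com;                      # placeholder
    ssl_certificate     /etc/ssl/certs/modoboa.pem;    # placeholder path
    ssl_certificate_key /etc/ssl/private/modoboa.key;  # placeholder path

    location / {
        proxy_pass http://127.0.0.1:8000;              # gunicorn upstream
        proxy_set_header X-Forwarded-Protocol ssl;     # tell Django it's HTTPS
    }
}
```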
