mirror of
https://github.com/floccusaddon/floccus.git
synced 2026-04-25 14:16:12 +03:00
[GH-ISSUE #238] Error: Request timed out. Check your server configuration #231
Originally created by @marcelklehr on GitHub (Mar 3, 2019).
Original GitHub issue: https://github.com/floccusaddon/floccus/issues/238
@marcelklehr commented on GitHub (Mar 3, 2019):
It could help to look at the network activity in the debugger:
Firefox
- Go to about:debugging
- Click the "debug" button next to the floccus entry
- Trigger "force Sync" for the floccus account of your choice.
Chrome
- Go to chrome://extensions
- Enable "Developer mode"
- Click the "dist/html/background.html" button in floccus' entry next to "Inspect views:"
- Trigger "force Sync" for the floccus account of your choice
@marcelklehr commented on GitHub (Mar 3, 2019):
Also, please provide the following:
and the exact error message :)
@bhlevca commented on GitHub (Mar 3, 2019):
I did the debugging in Firefox as it was your first choice:
I attached the HAR file that contains the debugging messages, but the reply was rejected, so I am doing it again on the GitHub site.
It takes about 67 sec to return info for the GET method in the debugger window, so it is obviously a timeout, because I set the timeout at 1 min in haproxy, so I assume haproxy gets it.
I find this line in the nextcloud access log:
and in the haproxy log :
Interesting to note that the logs only appear after 1 min, which appears to be the floccus default timeout, whereas in the debug windows of floccus in both Firefox and Chrome the request is shown immediately after the trigger.
On Chrome I get this additional information:
What do you make of all this?
Thanks
Bogdan
@marcelklehr commented on GitHub (Mar 3, 2019):
Alright, now we have data :)
504 means that the ha proxy has decided to shut the request down and return nothing.
This is just due to the way logs are written on the server. The browsers display them while the requests are running, but the server only writes a log entry once the request is done (whatever that means in this case...).
So, judging by the time out, it appears that this is a performance problem. Sometimes the ha proxy shuts down the request, sometimes floccus does. How many bookmarks do you have on the server? Which database are you using and what is your estimation of the general performance expectation for the server hardware (i.e. is it a Raspi or some high end server)?
@bhlevca commented on GitHub (Mar 3, 2019):
My turn :-) Some things seem to become clearer now.
Yes, it is a small computer, a Raspberry Pi, so small processing power. I have a cluster of them behind the haproxy for load balancing, and I synchronize the databases (MySQL) with circular master/slave replication.
However, to make things simpler for Nextcloud, the Nextcloud backend of the haproxy has only one server, which means that requests for Nextcloud always go to the same server/DB.
I can play with the timeouts on the haproxy side to make them longer if needed, but if the 1 min timeout is on the floccus side I cannot do anything.
This is the first time I try to sync the bookmarks so it is a huge number, over 4000.
Any suggestions?
@bhlevca commented on GitHub (Mar 3, 2019):
I tried with a smaller folder with only a few bookmarks. It fails the same way, so it is not a problem with the size of the bookmarks folder.
Also, I increased all the possible timeouts on the proxy, so the timeout is either on the floccus side or on the Nextcloud side, and it is always around 60 s; in the Firefox debug window it gave me 67 sec.
@marcelklehr commented on GitHub (Mar 3, 2019):
So, if this is the first time you're syncing, do you have all of your bookmarks on the server already?
@bhlevca commented on GitHub (Mar 3, 2019):
No, there is nothing on the server although I can import them from a html file if that helps
@bhlevca commented on GitHub (Mar 3, 2019):
One more thing: adding bookmarks with the "Add to Nextcloud" JS button works well. Just syncing the browser's existing bookmarks to the Nextcloud server seems to be an issue.
@marcelklehr commented on GitHub (Mar 3, 2019):
So, with nothing on the server
GET /index.php/apps/bookmarks/public/rest/v2/bookmark?page=-1
takes more than one minute?
@bhlevca commented on GitHub (Mar 3, 2019):
It appears so. I tried to sync only one leaf of my browser's bookmark tree, which has only 4 bookmarks, and it takes the same amount of time to time out.
As I mentioned above (cross posting) adding a bookmark with the "Add to Nextcloud" JS button works well. Only sync seems to be a problem.
@marcelklehr commented on GitHub (Mar 3, 2019):
If no other nextcloud URLs time out like that, the question is, why does this one.
@marcelklehr commented on GitHub (Mar 3, 2019):
Can you check nextcloud's own logging? It's at
admin -> logging (or data/nextcloud.log)
@bhlevca commented on GitHub (Mar 3, 2019):
I have the log and I looked into it. I saw nothing strange except a mention of the intl PHP lib not being native, but that was from yesterday, and I installed the right one. Can I add an attachment here somehow?
If not, I can share it on my Google Drive with you or send it to your email; it is small. Do you have an email address I can send it to?
@marcelklehr commented on GitHub (Mar 3, 2019):
We can also chat over here, if you like https://gitter.im/marcelklehr/floccus :)
@bhlevca commented on GitHub (Mar 4, 2019):
Marcel, just out of curiosity I stopped the long sync, switched the log level from 0 to 3, restarted apache and started the sync again. Guess what? It does not time out!!! It drives me crazy. I don't understand it; the only thing I did between getting stuck and working was playing with the log level in Nextcloud.
I am glad that it is working now but I am scared like heck because I don't understand what has caused that timeout.
@marcelklehr commented on GitHub (Mar 4, 2019):
@bhlevca I suspected exactly this when thinking about it this morning :D Changing the loglevel is not the only thing we did, though. You also added that one bookmark via the bookmarklet. So, it could be that, when there are no bookmarks at all, the request times out, for whatever reason.
@bhlevca commented on GitHub (Mar 4, 2019):
@marcelklehr If it wouldn't take so much time to upload I could easily verify this by deleting all the bookmarks and trying again. Yes I am still up! Going to bed soon though. By the way the sync finished 2 min ago. Is my raspberry pi the culprit or is it a slow process? Could it be made fast by compression/decompression at both ends?
@marcelklehr commented on GitHub (Mar 4, 2019):
It's quite normal for this amount of bookmarks to take that long with floccus, at least on the first run. There is an option for parallel syncing, which uses more bandwidth and might be faster, though. Also, you can try selecting only a subfolder for testing purposes again (in this case, make sure to remove all server-side bookmarks beforehand...).
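The parallel-sync option mentioned above boils down to keeping several requests in flight at once instead of sending them strictly one after another. As a rough illustration only (the helper below is a generic bounded-concurrency pool, not floccus's actual internals):

```javascript
// Sketch of bounded-concurrency uploads: several one-bookmark-per-request
// calls in flight at once instead of strictly sequential.
// `runWithConcurrency` is a hypothetical helper, not floccus's real code.
async function runWithConcurrency(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    // JS is single-threaded, so `next++` is safe between awaits.
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, worker)
  );
  return results;
}

// Demo: 20 simulated "upload one bookmark" requests, 5 at a time.
const uploads = Array.from({ length: 20 }, (_, i) => () =>
  new Promise((resolve) => setTimeout(() => resolve(i), 5))
);

runWithConcurrency(uploads, 5).then((ids) =>
  console.log('uploaded', ids.length, 'bookmarks')
);
```

With a limit of 5, total wall-clock time is roughly tasks/limit round trips instead of one round trip per bookmark, which is why parallel sync can be faster at the cost of more bandwidth.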
@bhlevca commented on GitHub (Mar 4, 2019):
I would try again but I need all my 4000 bookmarks and another 7 hours of loading is not a pretty perspective. In retrospect I should have done the test as you suggested with only a subfolder, but I had no expectations when we started the debugging. Maybe next time when something needs to be changed. Good night
@marcelklehr commented on GitHub (Mar 4, 2019):
Good night :)
@bhlevca commented on GitHub (Mar 5, 2019):
Bad news, today the sync failed again with exactly the same problem. I made no changes to anything so I don't understand and don't believe in magic.
I did try to find the culprit:
Do you have any idea how to attack this? It is not the logging level that is the problem, as I tried changing it back and forth, restarted the haproxy, apache and even the entire server. The problem is still there.
You know floccus and the bookmarks app and I hope that you can give me some direction how to investigate.
I knew that something I don't understand would not fix the problem.
Thanks
Bogdan
@marcelklehr commented on GitHub (Mar 5, 2019):
Alright, so what do we know? It's a fluctuating error, that probably doesn't depend on the database state. It's most definitely a server-side problem. You've been experiencing it for quite some time, yet you're the only person to have reported it, so there is likely something special about your setup -- whether it's an actual setup issue or just a rare bug, we don't know, though.
This is the controller method that should handle the URL in question. If you're up for it, I think it would be a good idea to litter that method with log statements (for starters mainly here: 1, 2, 3, 4, 5).
You can use the following line as a template:
@bhlevca commented on GitHub (Mar 6, 2019):
I assume from the beginning that it is a server side. I will do it closer to the end of the week when I get some time. Thanks for the advice
@bhlevca commented on GitHub (Mar 7, 2019):
OK, I did some research. I added some debug statements (by the way, ->warn() did not work, I had to replace it with ->debug()).
The method is processed quickly until it reaches findBookmarks(), where it times out after about 1.5 min and I get the 503 return.
So the question is, what am I passing to the findBookmarks function that gets it confused? Now I even have lots of bookmarks on the server.
Please look at the nextcloud.log excerpt below: you can see the 5= and 6= entries about 1.5 min apart, and immediately below is the modified code.
Please let me know what you think is to investigate next.
Bogdan
@marcelklehr commented on GitHub (Mar 7, 2019):
This is progress! Very nice! :)
So, we know now that it's not an infinite loop or some kind of black hole that swallows the response, but that the database query just takes really long -- longer at least than the reverse proxy timeout. To be sure, you may try to time this statement: 1, but there's not much more going on in findBookmarks other than preparing the query and executing it.
So, this would indicate that there's something going on with your MySQL instance. I'm not sure how much help I can be with debugging that, especially with your intricate setup, but I found some options allowing investigation into slow queries. The normal MySQL log might help as well.
@bhlevca commented on GitHub (Mar 8, 2019):
Yes, slow progress; more below. I identified the problem: in the file Bookmarks.php, in function findBookmarks, there is the code $qb->execute()->fetchAll(); which takes more than 1 min, see the timestamps below.
However, after that fetchAll call, getBookmarkParentFolders() gets called in a loop, and I see the entries logged several times even though the request was abandoned.
I extended all timeouts on the proxy to 10 minutes and the timeout still happens at about 1.5 min. Are you sure it is not floccus itself which times out?
I will try more debugging when I get another chance, but if you have ideas please share them with me.
@marcelklehr commented on GitHub (Mar 8, 2019):
Sorry if I have been unclear: Yes, floccus employs a timeout as well. So, symptomatically, the timeouts on the server and client abandon the request before it is finished. However, I don't think increasing the timeouts is an adequate response, because the time the SQL query takes is much too long for my taste.
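The client-side timeout described above can be pictured with a minimal sketch. This is illustrative only: the `withTimeout` helper and the error text are assumptions for the example, not floccus's actual implementation.

```javascript
// Sketch of a client-side request timeout (hypothetical helper;
// floccus's real code may differ). Whichever promise settles first
// wins the race; the timer is cleared either way.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error('Request timed out. Check your server configuration')),
      ms
    );
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Demo: a "request" that takes 50 ms against a 10 ms budget.
const slowRequest = new Promise((resolve) => setTimeout(() => resolve('ok'), 50));

withTimeout(slowRequest, 10)
  .then((result) => console.log('completed:', result))
  .catch((err) => console.log('failed:', err.message));
// prints: failed: Request timed out. Check your server configuration
```

This is why raising only the proxy timeouts did not help: both sides abandon the request, and the client's budget fires independently of the server's.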
@bhlevca commented on GitHub (Mar 8, 2019):
So is there a way to increase the timeout on the floccus side? 4000 bookmarks will always take more than a minute to download.
@bhlevca commented on GitHub (Mar 8, 2019):
I could try to sync only a branch with a few bookmarks to see if this solves the problem. In case it works, but it does not for more bookmarks, I am not sure if I can use floccus in its current state. However, if it still does not work, I will have to investigate why fetchAll takes so long.
@marcelklehr commented on GitHub (Mar 8, 2019):
This would be the last option, yes. However, you described that the request took one minute even when the database was still empty!
@bhlevca commented on GitHub (Mar 8, 2019):
Sorry for the typos, I wrote the text from my phone
@bhlevca commented on GitHub (Mar 8, 2019):
OK, I tried with fewer bookmarks and it works fine. I left about 100 bookmarks and it completes in a few seconds even on a raspberry pi server.
I guess in my situation with so many bookmarks I will need an option to control the timeout on the floccus side. Otherwise, I think that everything works ... almost.
It appears to me that after loading the bookmarks to the server, when we try to sync, it tries to get them ALL back in one request, and that's the reason for the failure. Would a better design on the app side be possible: instead of getting them all in one shot, do it in a similar way as when it uploaded all 4000 bookmarks without failing?
Thanks
@marcelklehr commented on GitHub (Mar 8, 2019):
Ok, this is new. But in this case I agree, it makes sense to use pagination.
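The pagination idea agreed on above can be sketched as follows. Note the `page`/`limit` query shape and the `fetchPage` helper are assumptions for illustration, not the confirmed Bookmarks REST API.

```javascript
// Sketch of paginated fetching instead of one giant page=-1 request.
// `fetchPage(page, limit)` stands in for the real HTTP call; its
// parameters are assumptions, not the actual Bookmarks API.
async function fetchAllBookmarks(fetchPage, pageSize = 100) {
  const all = [];
  for (let page = 0; ; page++) {
    const batch = await fetchPage(page, pageSize);
    all.push(...batch);
    if (batch.length < pageSize) break; // last (possibly partial) page
  }
  return all;
}

// Demo with an in-memory "server" of 250 bookmarks.
const server = Array.from({ length: 250 }, (_, i) => ({
  id: i,
  url: `https://example.com/${i}`,
}));
const fakeFetchPage = async (page, limit) =>
  server.slice(page * limit, (page + 1) * limit);

fetchAllBookmarks(fakeFetchPage).then((items) =>
  console.log('fetched', items.length) // fetched 250
);
```

Each request then stays small and fast, so neither the proxy's nor the client's timeout is hit even with thousands of bookmarks.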
@bhlevca commented on GitHub (Mar 8, 2019):
I have been monitoring it for a while on 2 browsers Brave and Firefox and both happily sync every 15 minutes.
My conclusion is that with a small number of bookmarks it works fine as it is, but with a large number a pagination method, as you call it, should be implemented.
Also, it would be nice to make the timeout and sync period configurable, with defaults and recommendations as you see fit.
@bhlevca commented on GitHub (Mar 8, 2019):
On another thought, I did not look inside the packets, but are only the differences sent over on both sides? I mean, is any information exchanged between floccus and the server about the bookmarks on each side?
I believe this is done, because if the sides are identical no further sync should take place.
I have an EverSync bookmarks plugin, and I wonder how they can sync 4000 bookmarks from scratch in 40 sec. I know they probably have powerful servers, but they must use some compression trick to do it that fast. Also, if there are no changes, the sync process takes less than a second. I presume that they have some version code somewhere and do not check all 4000 bookmarks to verify whether there was a change.
Anyway, these were just a few thoughts that I hope will help with speeding up the code.
@marcelklehr commented on GitHub (Mar 8, 2019):
I think they might be sending multiple bookmarks per request. Floccus, unfortunately, due to REST API constraints, sends one bookmark per request. Floccus has the parallel sync option, though, as mentioned earlier, which might speed up your syncs by sending as much as possible in parallel.
One further possible optimization would be to make the server provide hashes of the folders, so a quick hash comparison is sufficient to determine if something has changed.
@bhlevca commented on GitHub (Mar 8, 2019):
Yes, please do that; this is what I was referring to when I said version ID. This would be the single most worthwhile speed improvement for large sets of bookmarks.
@bhlevca commented on GitHub (Mar 8, 2019):
Most likely it should be a hash on each subfolder? Because if one subfolder has changed and the others have not, you don't want to sync all 5000 bookmarks again. Sounds tricky.
@marcelklehr commented on GitHub (Mar 8, 2019):
It's not that tricky actually; I have this mechanism in place already, only I download everything and then calculate the hashes. Doing the hashing on the server would avoid the trouble of downloading everything unconditionally. :)
@marcelklehr commented on GitHub (Mar 8, 2019):
If you like you could give the server implementation a go!? :)
https://github.com/nextcloud/bookmarks/issues/669
@bhlevca commented on GitHub (Mar 9, 2019):
I would, but I am not fluent in PHP; if it were Python it would be a different story. Secondly, I am past the age when I burned nights, and I have a full-time job.
However, I'll be happy to help with testing if it is of any help
@marcelklehr commented on GitHub (Mar 9, 2019):
I see, no worries :)
v3.2.8 now has the fix that should prevent the timeout from kicking in.
@bhlevca commented on GitHub (Mar 11, 2019):
Time out is OK now, I don't get the error anymore.
If the speed issue can be resolved, with the hashes, floccus+Nextcloud would be an excellent solution.
Even with parallel processing, the speed is really slow. For the first sync of an empty bookmarks folder it is OK, but subsequent requests with just a few bookmarks changed should last a minute max, as you said you'd like.
Anyway, it is getting better by the day :-)
One more thing that I discovered recently: the timeout still happens when parallel processing is checked.
@bhlevca commented on GitHub (Mar 11, 2019):
On the same note of speed and control, it would be useful if, besides Merge, two more options were available:
@marcelklehr commented on GitHub (Mar 19, 2019):
This has already been requested :) See #169
@bernd-wechner commented on GitHub (Oct 1, 2019):
I'm getting heaps of timeouts again. Something is definitely up with my NextCloudPi; I just fixed a few things and improved response a bit, but still, it is just a Raspberry Pi. Given that this has got to be a common Nextcloud backend (NextCloudPi is popular and cheap and easy to put into service), I'd love to see both a way to set a longer timeout and, perhaps more importantly, on a timeout, a report (even if only in the debug log) of the exact time that was waited (i.e. the current timeout) and the URL it was waiting on, along with a timestamp, so I can check the server logs at that time for clues.
@github-actions[bot] commented on GitHub (Mar 21, 2023):
This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.