mirror of
https://github.com/retspen/webvirtmgr.git
synced 2026-04-25 23:55:57 +03:00
[GH-ISSUE #105] Webvirtmgr Memory Issue? #86
Originally created by @baggar11 on GitHub (Aug 23, 2013).
Original GitHub issue: https://github.com/retspen/webvirtmgr/issues/105
With the latest updates to Webvirtmgr, Apache seems to be crashing and restarting itself unexpectedly. I've tested this on both Ubuntu 12.04 and Ubuntu 13.04. It only seems to happen when the Webvirtmgr web page is left open while it refreshes the RAM and CPU stats.
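For context, one common cause of this growth pattern in a long-lived web worker is a per-request resource (such as a libvirt connection) that each AJAX poll opens but never releases, so memory climbs with every refresh until the OOM killer fires. This is a minimal, hypothetical illustration of that pattern only, not webvirtmgr's actual code; `FakeConnection` and both function names are invented for the sketch:

```python
class FakeConnection:
    """Stand-in for an expensive per-poll resource, e.g. a libvirt connection."""
    def __init__(self):
        self.buf = bytearray(1024)  # pretend each connection pins some memory
        self.closed = False

    def close(self):
        self.closed = True

leaked = []  # simulates references a worker process accidentally keeps alive

def poll_stats_leaky():
    conn = FakeConnection()   # opened on every stats poll...
    leaked.append(conn)       # ...and never closed or released
    return len(conn.buf)

def poll_stats_fixed():
    conn = FakeConnection()
    try:
        return len(conn.buf)  # read the stats we came for
    finally:
        conn.close()          # release the resource after every poll
```

With a 2-second refresh interval, the leaky variant accumulates a new live object 30 times a minute per open browser tab, which matches the slow, steady growth reported here.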
Syslog entries:
[ 153.403934] apache2 invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
[ 153.403938] apache2 cpuset=/ mems_allowed=0
[ 153.403941] Pid: 1691, comm: apache2 Tainted: GF 3.8.0-29-generic #42-Ubuntu
[ 153.403942] Call Trace:
[ 153.403949] [] dump_header+0x80/0x1c3
[ 153.403953] [] oom_kill_process+0x1b7/0x320
[ 153.403956] [] ? has_ns_capability_noaudit+0x15/0x20
[ 153.403958] [] out_of_memory+0x417/0x450
[ 153.403961] [] __alloc_pages_nodemask+0x7e6/0x920
[ 153.403965] [] ? queue_unplugged+0x46/0xb0
[ 153.403969] [] alloc_pages_current+0xb8/0x180
[ 153.403971] [] __page_cache_alloc+0xaf/0xd0
[ 153.403973] [] filemap_fault+0x2a2/0x470
[ 153.403976] [] __do_fault+0x6f/0x510
[ 153.403979] [] handle_pte_fault+0x95/0x450
[ 153.403983] [] ? _raw_spin_lock+0xe/0x20
[ 153.403987] [] handle_mm_fault+0x299/0x670
[ 153.403989] [] __do_page_fault+0x18d/0x500
[ 153.403993] [] ? change_protection+0x49/0xc0
[ 153.403995] [] ? mprotect_fixup+0x157/0x280
[ 153.403998] [] do_page_fault+0xe/0x10
[ 153.404000] [] page_fault+0x28/0x30
[ 153.404001] Mem-Info:
[ 153.404003] Node 0 DMA per-cpu:
[ 153.404005] CPU 0: hi: 0, btch: 1 usd: 0
[ 153.404006] CPU 1: hi: 0, btch: 1 usd: 0
[ 153.404007] CPU 2: hi: 0, btch: 1 usd: 0
[ 153.404009] CPU 3: hi: 0, btch: 1 usd: 0
[ 153.404010] Node 0 DMA32 per-cpu:
[ 153.404011] CPU 0: hi: 186, btch: 31 usd: 2
[ 153.404012] CPU 1: hi: 186, btch: 31 usd: 49
[ 153.404014] CPU 2: hi: 186, btch: 31 usd: 26
[ 153.404015] CPU 3: hi: 186, btch: 31 usd: 0
[ 153.404016] Node 0 Normal per-cpu:
[ 153.404017] CPU 0: hi: 186, btch: 31 usd: 33
[ 153.404018] CPU 1: hi: 186, btch: 31 usd: 30
[ 153.404020] CPU 2: hi: 186, btch: 31 usd: 140
[ 153.404021] CPU 3: hi: 186, btch: 31 usd: 110
[ 153.404024] active_anon:685490 inactive_anon:251893 isolated_anon:96
[ 153.404024] active_file:16 inactive_file:0 isolated_file:0
[ 153.404024] unevictable:0 dirty:0 writeback:0 unstable:0
[ 153.404024] free:21596 slab_reclaimable:3179 slab_unreclaimable:4217
[ 153.404024] mapped:0 shmem:1517 pagetables:4879 bounce:0
[ 153.404024] free_cma:0
[ 153.404027] Node 0 DMA free:15344kB min:252kB low:312kB high:376kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15104kB managed:15360kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:16kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
[ 153.404032] lowmem_reserve[]: 0 2923 3905 3905
[ 153.404035] Node 0 DMA32 free:54184kB min:50392kB low:62988kB high:75588kB active_anon:2316616kB inactive_anon:579544kB active_file:88kB inactive_file:0kB unevictable:0kB isolated(anon):384kB isolated(file):0kB present:2993380kB managed:2943936kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:180kB kernel_stack:8kB pagetables:11728kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:231 all_unreclaimable? yes
[ 153.404039] lowmem_reserve[]: 0 0 982 982
[ 153.404042] Node 0 Normal free:16856kB min:16932kB low:21164kB high:25396kB active_anon:425344kB inactive_anon:428028kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1005984kB managed:954760kB mlocked:0kB dirty:0kB writeback:40kB mapped:0kB shmem:6068kB slab_reclaimable:12716kB slab_unreclaimable:16672kB kernel_stack:1648kB pagetables:7788kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:246 all_unreclaimable? yes
[ 153.404046] lowmem_reserve[]: 0 0 0 0
[ 153.404048] Node 0 DMA: 0*4kB 0*8kB 1*16kB (U) 1*32kB (U) 1*64kB (U) 1*128kB (U) 1*256kB (U) 1*512kB (U) 0*1024kB 1*2048kB (R) 3*4096kB (M) = 15344kB
[ 153.404058] Node 0 DMA32: 15*4kB (U) 71*8kB (UM) 65*16kB (UM) 30*32kB (UM) 15*64kB (UM) 5*128kB (UM) 2*256kB (UM) 1*512kB (U) 0*1024kB 0*2048kB 12*4096kB (EMR) = 54404kB
[ 153.404068] Node 0 Normal: 335*4kB (UEM) 187*8kB (UEM) 149*16kB (UEM) 54*32kB (UEM) 14*64kB (UEM) 11*128kB (UEM) 2*256kB (EM) 0*512kB 3*1024kB (UEM) 0*2048kB 1*4096kB (R) = 16932kB
[ 153.404078] 14014 total pagecache pages
[ 153.404079] 12452 pages in swap cache
[ 153.404081] Swap cache stats: add 1021860, delete 1009408, find 1919/2155
[ 153.404082] Free swap = 0kB
[ 153.404083] Total swap = 4076540kB
[ 153.414959] 1041904 pages RAM
[ 153.414961] 58614 pages reserved
[ 153.414962] 526450 pages shared
[ 153.414963] 958385 pages non-shared
[ 153.414964] [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
[ 153.414974] [ 362] 0 362 3817 0 11 56 0 upstart-file-br
[ 153.414978] [ 403] 0 403 4361 0 13 75 0 upstart-udev-br
[ 153.414980] [ 407] 0 407 5411 1 16 174 -1000 udevd
[ 153.414985] [ 556] 0 556 5453 1 15 194 -1000 udevd
[ 153.414987] [ 557] 0 557 5410 1 15 176 -1000 udevd
[ 153.414990] [ 735] 0 735 3814 0 13 53 0 upstart-socket-
[ 153.414992] [ 830] 102 830 5963 1 16 70 0 dbus-daemon
[ 153.414995] [ 856] 101 856 61865 0 24 151 0 rsyslogd
[ 153.414997] [ 1062] 0 1062 2560 0 7 575 0 dhclient
[ 153.415001] [ 1084] 0 1084 13063 0 30 150 -1000 sshd
[ 153.415003] [ 1168] 0 1168 3957 1 13 40 0 getty
[ 153.415005] [ 1171] 0 1171 3957 1 13 39 0 getty
[ 153.415007] [ 1175] 0 1175 3957 1 13 40 0 getty
[ 153.415009] [ 1176] 0 1176 3957 1 13 42 0 getty
[ 153.415012] [ 1183] 0 1183 3957 1 13 41 0 getty
[ 153.415014] [ 1199] 0 1199 5331 17 15 36 0 cron
[ 153.415016] [ 1215] 0 1215 4787 23 14 35 0 irqbalance
[ 153.415019] [ 1216] 33 1216 23389 110 46 3004 0 python
[ 153.415022] [ 1261] 0 1261 234568 1 112 2141 0 libvirtd
[ 153.415024] [ 1280] 0 1280 20705 25 41 484 0 apache2
[ 153.415026] [ 1281] 33 1281 20638 16 39 484 0 apache2
[ 153.415029] [ 1283] 33 1283 175483 49 76 1132 0 apache2
[ 153.415031] [ 1284] 33 1284 191885 62 77 1130 0 apache2
[ 153.415033] [ 1411] 0 1411 3957 1 13 40 0 getty
[ 153.415035] [ 1448] 106 1448 6520 0 17 61 0 dnsmasq
[ 153.415038] [ 1494] 105 1494 750671 4419 294 47018 0 qemu-system-x86
[ 153.415040] [ 1569] 0 1569 18920 1 41 202 0 sshd
[ 153.415043] [ 1586] 1000 1586 18920 26 40 177 0 sshd
[ 153.415045] [ 1587] 1000 1587 5610 0 16 482 0 bash
[ 153.415048] [ 1673] 1000 1673 6484 164 18 48 0 htop
[ 153.415050] [ 1688] 33 1688 2013470 918632 3796 969020 0 apache2
[ 153.415053] Out of memory: Kill process 1688 (apache2) score 944 or sacrifice child
[ 153.415103] Killed process 1688 (apache2) total-vm:8053880kB, anon-rss:3674528kB, file-rss:0kB
@retspen commented on GitHub (Aug 27, 2013):
This issue needs testing. CentOS does not have this problem.
@baggar11 commented on GitHub (Aug 27, 2013):
Let me know if there is anything else I can provide you with to help track it down.
@retspen commented on GitHub (Sep 6, 2013):
Are you running it under WSGI or as a Virtual Host?
@baggar11 commented on GitHub (Sep 6, 2013):
I don't know. It started happening shortly after the commits for the CPU/RAM bars were added; it didn't do this before. Maybe Django?
@retspen commented on GitHub (Sep 27, 2013):
Please update to the new release and test.
@retspen commented on GitHub (Oct 15, 2013):
You can change the refresh time in the file settings.py:
TIME_JS_REFRESH = 2000 to TIME_JS_REFRESH = 8000
and test
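For reference, TIME_JS_REFRESH is a client-side polling interval in milliseconds, so the suggested change cuts the request rate to a quarter. A quick sketch of the arithmetic (the helper function is ours, not part of webvirtmgr):

```python
# TIME_JS_REFRESH from webvirtmgr's settings.py, in milliseconds.
OLD_INTERVAL_MS = 2000   # default: the page polls stats every 2 seconds
NEW_INTERVAL_MS = 8000   # suggested: poll every 8 seconds instead

def polls_per_minute(interval_ms: int) -> int:
    """Stat requests per minute that one open browser tab generates."""
    return 60_000 // interval_ms

print(polls_per_minute(OLD_INTERVAL_MS))  # 30
print(polls_per_minute(NEW_INTERVAL_MS))  # 7
```

A longer interval gives each worker fewer chances per minute to grow, which can make the symptom less frequent even if it does not address the underlying leak.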