[GH-ISSUE #777] High memory usage with client #289
Originally created by @xabufr on GitHub (May 21, 2019).
Original GitHub issue: https://github.com/hickory-dns/hickory-dns/issues/777
Describe the bug
I've written a client to request millions of DNS records over TCP, and I see memory usage increasing indefinitely over time, reaching up to 6 GB of RAM by the time the run completes.
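For illustration, a minimal sketch of this kind of client loop, assuming the trust-dns 0.16 synchronous API; the server address, query name, and iteration count are placeholders, not the reporter's actual code:

```rust
// Hypothetical reproduction sketch, not the reporter's actual program:
// issue a large number of A-record queries over TCP and watch process
// memory over the run.
use std::str::FromStr;

use trust_dns::client::{Client, SyncClient};
use trust_dns::rr::{DNSClass, Name, RecordType};
use trust_dns::tcp::TcpClientConnection;

fn main() {
    // Placeholder resolver address; any reachable DNS server works.
    let address = "8.8.8.8:53".parse().expect("valid socket address");
    let conn = TcpClientConnection::new(address).expect("TCP connection");
    let client = SyncClient::new(conn);

    let name = Name::from_str("example.com.").expect("valid name");
    for _ in 0..1_000_000 {
        // Each query arms a request timeout internally; per this report,
        // the associated timer state appears to accumulate.
        let _response = client
            .query(&name, DNSClass::IN, RecordType::A)
            .expect("query failed");
    }
}
```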
Expected behavior
Memory usage should remain stable over time.
System:
Crate: client
Version: 0.16.0
Additional context
After investigating, I've found that removing the timeout in crates/proto/src/xfer/dns_multiplexer.rs solves the problem. It seems the use of tokio_timer::Delay produces a memory leak; removing it appears to fix the issue.
patch.txt
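For context, the mechanism under suspicion works roughly like this: each in-flight request is raced against a timer future, and every such timer registers an entry with the runtime's timer. The following is only a schematic of the tokio 0.1-era pattern, not the actual dns_multiplexer.rs code:

```rust
// Schematic of the timeout pattern at issue (tokio 0.1 era), not the
// actual dns_multiplexer.rs implementation. Wrapping a future in
// tokio::timer::Timeout arms a Delay, which registers with the runtime's
// timer; the report suggests these registrations were not being reclaimed
// under the multi-threaded runtime.
use std::time::Duration;

use futures::Future;
use tokio::timer::Timeout;

fn with_request_timeout<F: Future>(request: F, after: Duration) -> Timeout<F> {
    // Resolves with the request's result, or errors once `after` elapses.
    Timeout::new(request, after)
}
```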
@bluejekyll commented on GitHub (May 21, 2019):
Thank you for the report, and the investigation into the root cause. I’ll try and figure out what’s going on here.
@bluejekyll commented on GitHub (May 21, 2019):
This might be related to #692
@xabufr commented on GitHub (May 21, 2019):
I've tried the provided sample but didn't see any memory leak.
I'll try to make a minimal program that reproduces this behavior.
@xabufr commented on GitHub (May 21, 2019):
OK, I've only been able to reproduce this bug in a highly concurrent context with the multithreaded Tokio runtime.
Cargo.toml
main.rs
Memory usage increases continuously until all requests are processed.
Changing the runtime to a current_thread::Runtime seems to fix the bug.
The single-threaded version uses a constant ~2 MB, while the multi-threaded one grows continuously and reaches ~43 MB.
EDIT: Here is a heaptrack trace; the leak seems to be in tokio_timer::timer::registration::Registration::new.
heaptrack.bug.8789.gz
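The runtime switch described in the comment above looks roughly like the following in tokio 0.1; the future being driven here is a placeholder, not the attached main.rs:

```rust
// Sketch of the tokio 0.1 runtime switch, with a placeholder future.
use futures::future;

fn main() {
    // Default multi-threaded runtime -- the configuration that showed
    // continuously growing memory:
    // let mut rt = tokio::runtime::Runtime::new().unwrap();

    // Single-threaded runtime -- the configuration that stayed at ~2 MB:
    let mut rt = tokio::runtime::current_thread::Runtime::new().unwrap();

    let result: Result<(), ()> = rt.block_on(future::ok(()));
    result.unwrap();
}
```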
@bluejekyll commented on GitHub (May 21, 2019):
@carllerche, are you aware of any potential leaks related to tokio-timer usage? I think we have two related issues in trust-dns now; I'm not sure if it's how these libraries are using/abusing the timers or if we have an underlying bug in tokio.
@LEXUGE commented on GitHub (Apr 13, 2021):
I can reproduce a similar problem here. Memory usage increases continuously and never goes down, even after all the requests have completed. All clients are UDP-based.
A single-threaded configuration seems to mitigate the problem on my side as well.