[GH-ISSUE #777] High memory usage with client #289

Open
opened 2026-03-07 23:17:28 +03:00 by kerem · 6 comments

Originally created by @xabufr on GitHub (May 21, 2019).
Original GitHub issue: https://github.com/hickory-dns/hickory-dns/issues/777

Describe the bug
I wrote a client to request millions of DNS records over TCP, and I see memory usage increasing indefinitely over time, reaching up to 6 GB of RAM (at which point it completes).

Expected behavior
Memory usage should remain consistent over time.

System:

  • OS: Arch Linux
  • Architecture: x86_64
  • Version: latest
  • rustc version: 1.34.0

Version:
Crate: client
Version: 0.16.0

Additional context
After investigating, I found that removing the timeout in crates/proto/src/xfer/dns_multiplexer.rs solves the problem.
The use of tokio_timer::Delay seems to produce a memory leak; removing it appears to fix the issue.

patch.txt: https://github.com/bluejekyll/trust-dns/files/3202781/patch.txt
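The leak pattern described above can be sketched in isolation. The following is a minimal, hypothetical model in plain standard-library Rust (not hickory-dns or tokio_timer code, and the names are invented for illustration): each request registers a timeout entry, and if completed registrations are never purged, the registry grows without bound, while purging on completion keeps it flat.

```rust
// Hypothetical model of the leak pattern -- plain standard-library
// Rust, NOT hickory-dns or tokio_timer code. Each request registers
// a (request id, deadline) timeout entry. If entries are never purged
// when requests complete, the registry grows without bound.

use std::time::{Duration, Instant};

/// Registers one timeout entry per request; optionally purges the
/// entry when the request completes. Returns how many entries remain.
fn simulate(purge_on_complete: bool, requests: u64) -> usize {
    let mut registry: Vec<(u64, Instant)> = Vec::new();
    let timeout = Duration::from_secs(30);

    for id in 0..requests {
        // Register a timeout deadline for this request.
        registry.push((id, Instant::now() + timeout));

        if purge_on_complete {
            // The request finished: drop its timer registration.
            registry.retain(|(other, _)| *other != id);
        }
    }
    registry.len()
}

fn main() {
    // Leaky variant keeps every registration; fixed variant stays flat.
    println!("leaky: {} entries", simulate(false, 100_000));
    println!("fixed: {} entries", simulate(true, 100_000));
}
```

This is only a model of the symptom (registrations outliving the requests they guard), not a claim about the actual mechanism inside tokio_timer.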


@bluejekyll commented on GitHub (May 21, 2019):

Thank you for the report, and the investigation into the root cause. I’ll try and figure out what’s going on here.


@bluejekyll commented on GitHub (May 21, 2019):

This might be related to #692


@xabufr commented on GitHub (May 21, 2019):

I've tried the provided sample but didn't see any memory leak.
I'll try to make a minimal program that reproduces this behavior.


@xabufr commented on GitHub (May 21, 2019):

OK, I've only been able to reproduce this bug in a highly concurrent context with the multithreaded tokio runtime.
Cargo.toml

[package]
name = "bug"
version = "0.1.0"
edition = "2018"

[dependencies]
trust-dns = "*"
tokio = "*"
failure = "*"

main.rs

use failure::Error;
use std::net::{Ipv4Addr};
use std::str::FromStr;
use std::time::Duration;
use tokio::prelude::*;
use tokio::runtime::Runtime;
use trust_dns::client::{ClientFuture, ClientHandle};
use trust_dns::rr::{DNSClass, Name, RecordType};
use trust_dns::tcp::TcpClientStream;

fn main() -> Result<(), Error> {
    // n.de.net.
    let ip = Ipv4Addr::new(194, 146, 107, 6);
    let port = 53;

    let mut runtime = Runtime::new().unwrap();

    let (stream, handle) = TcpClientStream::with_timeout((ip, port).into(), Duration::from_secs(30));
    let (bg, mut client) = ClientFuture::new(stream, handle, None);

    let name = Name::from_str("example.de")?;

    runtime.spawn(bg);
    let task = stream::iter_ok::<_, Error>(0..100000)
        .map(move |_| {
            client
                .query(name.clone(), DNSClass::IN, RecordType::NS)
                .then(|_| Ok(()))
        })
        .buffer_unordered(500)
        .for_each(|()| Ok(()));
    runtime.block_on(task)?;

    println!("All queries sent");
    std::thread::sleep(Duration::from_secs(60 * 60));

    Ok(())
}

Memory usage increases continuously until all requests are processed.
Changing the runtime to a current_thread::Runtime seems to fix the bug:
the single-threaded run stays constant at ~2 MB, while the multi-threaded run grows continuously, reaching ~43 MB.

EDIT: Here is a heaptrack trace; the leak seems to be in tokio_timer::timer::registration::Registration::new.

heaptrack.bug.8789.gz: https://github.com/bluejekyll/trust-dns/files/3203843/heaptrack.bug.8789.gz


@bluejekyll commented on GitHub (May 21, 2019):

@carllerche are you aware of any potential leaks in regards to tokio-timer usage? I think we have two related issues in trust-dns now, not sure if it’s how these libraries are using/abusing the timers or if we have an underlying bug in tokio.


@LEXUGE commented on GitHub (Apr 13, 2021):

I can reproduce a similar problem here. Memory usage increases continuously and never goes back down, even after all requests have completed. All clients are UDP-based.

A single-threaded configuration seems to mitigate the issue on my side as well.
