This project may be exposed to known security vulnerabilities, which can be avoided by tightening the version ranges of the affected dependencies. Detailed information can be found at the bottom of this report.

Crate hickory-server

Dependencies

(24 total, 8 outdated, 5 possibly insecure)

Crate                  Required    Latest     Status
async-trait            ^0.1.43     0.1.86     up to date
basic-toml             ^0.1        0.1.9      up to date
bytes                  ^1          1.10.0     up to date
cfg-if                 ^1          1.0.0      up to date
enum-as-inner          ^0.6        0.6.1      up to date
futures-util           ^0.3.5      0.3.31     up to date
h2 ⚠️                  ^0.3.0      0.4.7      out of date
h3                     ^0.0.3      0.0.6      out of date
h3-quinn               ^0.0.4      0.0.7      out of date
hickory-proto ⚠️       ^0.24.0     0.24.3     maybe insecure
hickory-recursor       ^0.24.0     0.24.3     up to date
hickory-resolver       ^0.24.0     0.24.3     up to date
http                   ^0.2        1.2.0      out of date
openssl ⚠️             ^0.10.55    0.10.71    maybe insecure
rusqlite               ^0.31       0.33.0     out of date
rustls ⚠️              ^0.21.6     0.23.23    out of date
serde                  ^1.0        1.0.217    up to date
thiserror              ^1.0.20     2.0.11     out of date
time                   ^0.3        0.3.37     up to date
tokio ⚠️               ^1.21       1.43.0     maybe insecure
tokio-openssl          ^0.6.0      0.6.5      up to date
tokio-rustls           ^0.24.0     0.26.1     out of date
tokio-util             ^0.7.9      0.7.13     up to date
tracing                ^0.1.30     0.1.41     up to date

Dev dependencies

(3 total, 1 possibly insecure)

Crate                  Required    Latest     Status
futures-executor       ^0.3.5      0.3.31     up to date
tokio ⚠️               ^1.21       1.43.0     maybe insecure
tracing-subscriber     ^0.3        0.3.19     up to date

Security Vulnerabilities

tokio: reject_remote_clients Configuration corruption

RUSTSEC-2023-0001

On Windows, configuring a named pipe server with pipe_mode forces ServerOptions::reject_remote_clients to false.

This discards any explicit configuration that previously set reject_remote_clients to true.

The default setting of reject_remote_clients is true, so the default is likewise overridden to false.

Workarounds

Ensure that pipe_mode is set first, immediately after initializing a ServerOptions and before reject_remote_clients. For example:

use tokio::net::windows::named_pipe::{PipeMode, ServerOptions};

let mut opts = ServerOptions::new();
// Setting pipe_mode first means it cannot clobber reject_remote_clients below.
opts.pipe_mode(PipeMode::Message);
opts.reject_remote_clients(true);

h2: Degradation of service in h2 servers with CONTINUATION Flood

RUSTSEC-2024-0332

An attacker can send a flood of CONTINUATION frames, causing h2 to process them indefinitely. This results in an increase in CPU usage.

The Tokio task budget helps prevent this from becoming a complete denial of service, since the server can still respond to legitimate requests, albeit with increased latency.

More details at https://seanmonstar.com/blog/hyper-http2-continuation-flood/.

Patches available for 0.4.x and 0.3.x versions.

rustls: `rustls::ConnectionCommon::complete_io` could fall into an infinite loop based on network input

RUSTSEC-2024-0336

If a close_notify alert is received during a handshake, complete_io does not terminate.

Callers which do not call complete_io are not affected.

rustls-tokio and rustls-ffi do not call complete_io and are not affected.

The rustls::Stream and rustls::StreamOwned types use complete_io and are affected.
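As a rough illustration, the affected pattern looks like the following sketch (hypothetical code, not taken from rustls, hickory, or this project; the function name and server name are invented), where complete_io drives the handshake:

use std::net::TcpStream;
use std::sync::Arc;

// Hypothetical sketch of an affected caller: complete_io drives the handshake.
fn finish_handshake(config: Arc<rustls::ClientConfig>, mut sock: TcpStream) -> std::io::Result<()> {
    let name = "example.com".try_into().expect("valid DNS name");
    let mut conn = rustls::ClientConnection::new(config, name).expect("valid client config");
    while conn.is_handshaking() {
        // In affected versions, this call may never return if a close_notify
        // alert arrives before the handshake completes.
        conn.complete_io(&mut sock)?;
    }
    Ok(())
}

Callers that instead feed the connection with read_tls()/write_tls() directly and never call complete_io fall into the unaffected category described above.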

openssl: ssl::select_next_proto use after free

RUSTSEC-2025-0004

In openssl versions before 0.10.70, ssl::select_next_proto can return a slice pointing into the server argument's buffer but with a lifetime bound to the client argument. In situations where the server buffer's lifetime is shorter than the client buffer's, this can cause a use after free. This could cause the server to crash or to return arbitrary memory contents to the client.

openssl 0.10.70 fixes the signature of ssl::select_next_proto to properly constrain the output buffer's lifetime to that of both input buffers.

In standard usage of ssl::select_next_proto in the callback passed to SslContextBuilder::set_alpn_select_callback, code is only affected if the server buffer is constructed within the callback. For example:

Not vulnerable - the server buffer has a 'static lifetime:

builder.set_alpn_select_callback(|_, client_protos| {
    ssl::select_next_proto(b"\x02h2", client_protos).ok_or(AlpnError::NOACK)
});

Not vulnerable - the server buffer outlives the handshake:

let server_protos = b"\x02h2".to_vec();
builder.set_alpn_select_callback(move |_, client_protos| {
    ssl::select_next_proto(&server_protos, client_protos).ok_or(AlpnError::NOACK)
});

Vulnerable - the server buffer is freed when the callback returns:

builder.set_alpn_select_callback(|_, client_protos| {
    // server_protos is dropped when the callback returns, but the returned
    // slice may point into it.
    let server_protos = b"\x02h2".to_vec();
    ssl::select_next_proto(&server_protos, client_protos).ok_or(AlpnError::NOACK)
});

hickory-proto: Hickory DNS failure to verify self-signed RRSIG for DNSKEYs

RUSTSEC-2025-0006

Summary

The DNSSEC validation routines treat entire RRsets of DNSKEY records as trusted once they have established trust in only one of the DNSKEYs. As a result, if a zone includes a DNSKEY with a public key that matches a configured trust anchor, all keys in that zone will be trusted to authenticate other records in the zone. There is a second variant of this vulnerability involving DS records, where an authenticated DS record covering one DNSKEY leads to trust in signatures made by an unrelated DNSKEY in the same zone.

Details

verify_dnskey_rrset() will return Ok(true) if any record's public key matches a trust anchor. This results in verify_rrset() returning a Secure proof. This ultimately results in successfully verifying a response containing DNSKEY records.

verify_default_rrset() looks up DNSKEY records by calling handle.lookup(), which takes the above code path. There's a comment following this that says "DNSKEYs were already validated by the inner query in the above lookup", but this is not the case. To fully verify the whole RRset of DNSKEYs, it would be necessary to check self-signatures by the trusted key over the other keys. Later in verify_default_rrset(), verify_rrset_with_dnskey() is called multiple times with different keys and signatures, and if any call succeeds, then its Proof is returned.

Similarly, verify_dnskey_rrset() returns Ok(false) if any DNSKEY record is covered by a DS record. A comment says "If all the keys are valid, then we are secure", but this is only checking that one key is authenticated by a DS in the parent zone's delegation point. This time, after control flow returns to verify_rrset(), it will call verify_default_rrset(). The special handling for DNSKEYs in verify_default_rrset() will then call verify_rrset_with_dnskey() using each KSK DNSKEY record, and if one call succeeds, return its Proof. If there are multiple KSK DNSKEYs in the RRset, then this leads to another authentication break.

We need to either pass the authenticated DNSKEYs from the DS covering check to the RRSIG validation, or we need to perform this RRSIG validation of the DNSKEY RRset inside verify_dnskey_rrset() and cut verify_default_rrset() out of DNSKEY RRset validation entirely.
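To make the broken trust propagation concrete, the following is a deliberately simplified, hypothetical sketch (illustrative types and functions only, not hickory-proto's actual code or API). It contrasts the flawed check, in which one anchored key vouches for the entire DNSKEY RRset, with the stricter check the advisory calls for, in which the RRSIG covering the DNSKEY RRset must itself verify under an anchored key before the remaining keys are trusted:

// Hypothetical illustration; all names are invented.
struct Dnskey {
    key_tag: u16,
    public_key: Vec<u8>,
}

// Flawed logic: if any key in the RRset matches a trust anchor, the whole
// RRset is treated as trusted, so unrelated keys in the same zone can then
// authenticate other records.
fn dnskey_rrset_trusted_flawed(keys: &[Dnskey], anchors: &[Vec<u8>]) -> bool {
    keys.iter().any(|k| anchors.iter().any(|a| a == &k.public_key))
}

// Stricter logic: the RRSIG over the DNSKEY RRset must verify under a key
// that is itself anchored; only then may the other keys in the RRset be used
// to validate further records in the zone.
fn dnskey_rrset_trusted_strict(
    keys: &[Dnskey],
    anchors: &[Vec<u8>],
    rrsig_key_tag: u16,
    rrsig_verifies_under: impl Fn(&Dnskey) -> bool,
) -> bool {
    keys.iter()
        .filter(|k| k.key_tag == rrsig_key_tag)
        .filter(|k| anchors.iter().any(|a| a == &k.public_key))
        .any(|k| rrsig_verifies_under(k))
}

The same shape applies to the DS variant described above: the key covered by the authenticated DS record must be the one whose signature over the DNSKEY RRset is checked, rather than any KSK DNSKEY in the RRset.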