Dissecting OpenDPI (DNS)

A Short Introduction

The time is nigh! I know all the teasing about DPI is annoying, but this will be the last article before the release of libpeak's lightweight inspection. Thirty protocols have been prepared and polished so far. But back to the Domain Name System (DNS) for the time being. We all use it on a daily basis, but covering it in full is way beyond the scope of this article; if you want the implementation details, please have a look at RFC 1035. Let's get straight to the biscuits.

OpenDPI is Not Responding

The complete code can be found here. DNS can run on top of TCP or UDP, so both cases need to be covered:

if (packet->udp != NULL) {
	dport = ntohs(packet->udp->dest);
}
if (packet->tcp != NULL) {
	dport = ntohs(packet->tcp->dest);
}

if (dport == 53 && packet->payload_packet_len >= 12) {

The code only pulls the destination port and checks for the well-known server port 53; the length check covers the minimum DNS header size of 12 bytes. That's alright, but matching on the destination port alone means the detection can only ever see requests: in a response, port 53 is the source port. This code will not work on asymmetric DNS traffic, where all DNS responses remain undetected.
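
For reference, here is how the fixed 12-byte header from RFC 1035 (section 4.1.1) lines up with the byte offsets used in the checks below. This is a sketch of mine, not something OpenDPI defines:

struct dns_header {               /* all fields in network byte order */
        uint16_t id;              /* offset  0: transaction ID */
        uint16_t flags;           /* offset  2: QR, opcode, AA, TC, RD, RA, Z, RCODE */
        uint16_t question_count;  /* offset  4: QDCOUNT */
        uint16_t answer_count;    /* offset  6: ANCOUNT */
        uint16_t ns_count;        /* offset  8: NSCOUNT (authority records) */
        uint16_t ar_count;        /* offset 10: ARCOUNT (additional records) */
};

Every field is a 16-bit big-endian value, which is why all the reads below go through ntohs().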

if (((packet->payload[2] & 0x80) == 0 &&
    ntohs(get_u16(packet->payload, 4)) <= IPOQUE_MAX_DNS_REQUESTS &&
    ntohs(get_u16(packet->payload, 4)) != 0 &&
    ntohs(get_u16(packet->payload, 6)) == 0 &&
    ntohs(get_u16(packet->payload, 8)) == 0 &&
    ntohs(get_u16(packet->payload, 10)) <= IPOQUE_MAX_DNS_REQUESTS) ||

The first part of the if statement covers DNS over UDP. Is this a request (the most significant bit of the third payload byte, the QR flag, is zero)? The question count must be non-zero and at most 16 (IPOQUE_MAX_DNS_REQUESTS), the answer count and the authority record count must be zero, and the additional record count must be at most 16 as well. Insisting on a zero authority record count discards a few queries in my private trace collection; Wireshark happily tags these packets as DNS, so I'd say the restriction is too strict.
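
To make that concrete, here is the header of a run-of-the-mill recursive query carrying one EDNS0 OPT record, annotated with how the checks above treat each field. The values (and the array itself) are illustrative, not taken from a real capture:

static const uint8_t query_header[12] = {
        0x12, 0x34,     /* ID: arbitrary */
        0x01, 0x00,     /* flags: QR=0 (request), RD=1 -> (payload[2] & 0x80) == 0 */
        0x00, 0x01,     /* QDCOUNT = 1 -> non-zero and <= IPOQUE_MAX_DNS_REQUESTS */
        0x00, 0x00,     /* ANCOUNT = 0 -> must be zero */
        0x00, 0x00,     /* NSCOUNT = 0 -> must be zero */
        0x00, 0x01,     /* ARCOUNT = 1 (the OPT record) -> <= IPOQUE_MAX_DNS_REQUESTS */
};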

    ((ntohs(get_u16(packet->payload, 0)) == packet->payload_packet_len - 2) &&
    (packet->payload[4] & 0x80) == 0 &&
    ntohs(get_u16(packet->payload, 6)) <= IPOQUE_MAX_DNS_REQUESTS &&
    ntohs(get_u16(packet->payload, 6)) != 0 &&
    ntohs(get_u16(packet->payload, 8)) == 0 &&
    ntohs(get_u16(packet->payload, 10)) == 0 &&
    packet->payload_packet_len >= 14 &&
    ntohs(get_u16(packet->payload, 12)) <= IPOQUE_MAX_DNS_REQUESTS)) {

This second half does the same checking for DNS over TCP, where the message is prefixed with a two-byte length field (which doesn't count itself). Notice the payload size check against 14; the header fields are shifted by those two bytes. Very good and just in time! :) The only issue: the length prefix exists precisely because TCP may split a message across several segments, so demanding that the current packet carries the whole DNS message discards every case where it doesn't.
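
A more tolerant test would only give up once the payload already exceeds what the length prefix announces, which is essentially what the rewrite in the next section does. A sketch, reusing the names from the snippet above rather than actual OpenDPI code:

/* dns_len is the announced message length, excluding the prefix itself */
uint16_t dns_len = ntohs(get_u16(packet->payload, 0));

if (dns_len + sizeof(uint16_t) < packet->payload_packet_len) {
        /* more payload than the message can hold: not DNS over TCP */
        return 0;
}
/* equal means the whole message is here; greater means the rest
 * simply arrives in a later TCP segment */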

Condensed Information In As Little As 16 Bits

The absence of port matching (more on why this matters in an upcoming article) is the most obvious achievement of the code presented below. This wasn't an easy feat, as most of the DPI intelligence for DNS resides in the third and fourth byte, namely the flags; leave those checks out and the matching turns hopelessly greedy. Secondly, the matching now works asymmetrically. And, finally, the TCP and UDP code paths are merged. I can't confirm it yet, but the TCP matching still seems a bit greedy. As always, this is a work in progress! Please leave comments, questions or fixes below.

LI_DESCRIBE_APP(dns)
{
        /* TCP: padded with 2 bytes of length */
        const unsigned int padding =
            (packet->net_type == IPPROTO_TCP) * sizeof(uint16_t);
        struct dns {
                uint16_t id;
                uint16_t flags;
                uint16_t question_count;
                uint16_t answer_count;
                uint16_t ns_count;
                uint16_t ar_count;
        } __packed *ptr = (void *)&packet->app.raw[padding];
        uint16_t decoded;

        if (packet->app_len < sizeof(struct dns) + padding) {
                return (0);
        }

        if (padding && be16dec(packet->app.raw) +
            sizeof(uint16_t) < packet->app_len) {
                /* TCP: verify that length is somewhat correct */
                return (0);
        }

        /* verification pimped according to RFC 1035 */
        decoded = be16dec(&ptr->flags);
        if (decoded & 0x0070) {
                /* reserved for future use */
                return (0);
        }

        switch (decoded & 0x7800) {
        case (0 << 11): /* a standard query (QUERY) */
        case (1 << 11): /* an inverse query (IQUERY) */
        case (2 << 11): /* a server status request (STATUS) */
                break;
        default:
                return (0);
        }

        if (decoded & 0x8000) {         /* response handling */
                switch (decoded & 0x000f) {
                case 0: /* No error condition */
                case 1: /* Format error */
                case 2: /* Server failure */
                case 3: /* Name error */
                case 5: /* Refused */
                        break;
                default:
                        /* reserved for future use */
                        return (0);
                }
        } else {                        /* request handling */
                if (decoded & 0x000f) {
                        /* can't have response code */
                        return (0);
                }

                if (decoded & 0x0480) {
                        /* AA and RA bits only for response */
                        return (0);
                }

                if (!ptr->question_count) {
                        /* there should be a question */
                        return (0);
                }

                if (ptr->answer_count) {
                        /* no answers yet */
                        return (0);
                }
        }

        return (1);
}

Oh, and by the way: this also catches Multicast DNS (mDNS) without checking for IP multicast addresses or port 5353… ;)
