What IP Tunneling Actually Means
IP tunneling is the technique of encapsulating one network packet inside another. The outer packet carries routing information that the transit network understands. The inner packet carries the original data, which may use a protocol the transit network does not support, or which you want to protect from inspection. When the outer packet reaches its destination — the tunnel endpoint — the outer header is stripped, and the inner packet is delivered to its actual destination.
This mechanism is the foundation of VPNs, IPv6 transition technologies, SD-WAN overlays, and container networking. Understanding how encapsulation works at the packet level makes troubleshooting, MTU problems, and performance tuning significantly easier.
How Encapsulation Works at the Packet Level
Consider a packet that originates from 10.0.1.5 (private network A) destined for 10.0.2.10 (private network B), separated by a public internet path that knows nothing about either private range.
At the tunnel ingress router, the following happens:
- The original IP packet (with source 10.0.1.5 and destination 10.0.2.10) is treated as a payload.
- A new outer IP header is prepended. The outer source address is the public IP of the ingress router (e.g., 203.0.113.1). The outer destination is the public IP of the egress router (e.g., 198.51.100.1).
- The combined packet, outer header plus inner packet, is transmitted across the internet. Routers along the path see only the outer addresses and forward accordingly.
- At the egress router, the outer header is removed. The original inner packet appears with its private addresses intact and is forwarded to its destination on network B.
The internet never sees 10.0.1.5 or 10.0.2.10. It only sees the two public tunnel endpoint addresses. The private topology is completely hidden from the transit network.
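You can watch this encapsulation on the wire. A sketch using tcpdump, assuming a GRE tunnel and that eth0 is the transit-facing interface (both are assumptions, not from the original setup):

```shell
# GRE is IP protocol 47, so this capture shows the outer packets the internet sees:
tcpdump -ni eth0 ip proto 47
# Adding -v makes tcpdump decode the inner IP header as well, confirming that the
# private 10.0.1.5 -> 10.0.2.10 packet is riding inside the public tunnel packets:
tcpdump -vni eth0 ip proto 47
```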
Tunnel Types and Protocols
Multiple protocol stacks implement this encapsulation model, each with different feature sets and overhead characteristics:
GRE (Generic Routing Encapsulation)
GRE, defined in RFC 2784, adds a 4-byte header and encapsulates almost any network layer protocol inside IP. GRE provides no encryption and no authentication — it is a pure encapsulation mechanism. It is widely used for site-to-site connectivity on enterprise routers (Cisco, Juniper) and is often combined with IPSec to add encryption. GRE adds 24 bytes of overhead (20 bytes outer IP header + 4 bytes GRE header).
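A minimal Linux GRE tunnel matching the example addresses from earlier can be sketched with iproute2; the interface name gre1 and the 10.255.0.0/30 inner addressing are assumptions chosen for illustration:

```shell
# Create the tunnel: outer addresses go in the local/remote parameters
ip tunnel add gre1 mode gre local 203.0.113.1 remote 198.51.100.1 ttl 64
ip addr add 10.255.0.1/30 dev gre1   # inner point-to-point addressing
ip link set gre1 up
ip route add 10.0.2.0/24 dev gre1    # steer network B's traffic into the tunnel
```

The egress router mirrors this with local and remote swapped.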
IP-in-IP
IP-in-IP (RFC 2003) is the simplest tunnel type: an IPv4 packet encapsulated inside another IPv4 packet. The overhead is just the 20-byte outer IP header, with no additional framing. It cannot carry multicast or non-IPv4 traffic. Linux has native support via the ipip kernel module.
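A sketch of the equivalent IP-in-IP setup on Linux; the interface name ipip0 is an assumption:

```shell
modprobe ipip   # load the kernel module if not already present
ip tunnel add ipip0 mode ipip local 203.0.113.1 remote 198.51.100.1 ttl 64
ip link set ipip0 up
```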
6in4
A specific form of IP-in-IP that carries IPv6 packets inside IPv4 packets (protocol number 41). Used during IPv6 transition to carry IPv6 traffic across IPv4-only networks. 6in4 is the basis of many tunnel broker services that provide free IPv6 connectivity to IPv4-only hosts.
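Linux implements 6in4 with the "sit" tunnel mode. A tunnel-broker-style configuration might look like the following sketch; all addresses are documentation placeholders, not a real broker endpoint:

```shell
ip tunnel add he6 mode sit local 203.0.113.1 remote 198.51.100.99 ttl 255
ip link set he6 up
ip addr add 2001:db8:1234::2/64 dev he6   # client side of the tunnel subnet
ip route add ::/0 dev he6                 # default IPv6 route via the tunnel
```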
L2TP (Layer 2 Tunneling Protocol)
L2TP encapsulates Layer 2 frames (Ethernet or PPP) inside UDP packets. It was designed for ISP remote access deployments. L2TP itself has no encryption — it is almost always combined with IPSec (L2TP/IPSec). L2TP adds more overhead than simpler IP tunnels, but the ability to carry Layer 2 frames enables use cases like VLANs over WAN links.
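iproute2 can also create static (unmanaged) L2TPv3 tunnels directly, without a control daemon. A sketch; the tunnel and session IDs and UDP ports are arbitrary and must match mirrored values on the peer:

```shell
ip l2tp add tunnel tunnel_id 100 peer_tunnel_id 100 encap udp \
    local 203.0.113.1 remote 198.51.100.1 udp_sport 1701 udp_dport 1701
ip l2tp add session tunnel_id 100 session_id 1 peer_session_id 1
ip link set l2tpeth0 up   # each session creates an l2tpethN Ethernet interface
```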
VXLAN (Virtual Extensible LAN)
VXLAN encapsulates Ethernet frames inside UDP packets. It is the dominant overlay technology in data center virtualization and container networking (Kubernetes, Docker Swarm). VXLAN uses UDP port 4789, adds 50 bytes of overhead, and supports up to 16 million virtual network identifiers (VNIs), solving the 4096 VLAN limit of traditional 802.1Q.
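A point-to-point VXLAN sketch between two hosts; the VNI 42, interface names, and unicast remote are assumptions, and production overlays usually learn remote endpoints dynamically (EVPN or multicast) instead of configuring a single static peer:

```shell
ip link add vxlan42 type vxlan id 42 dstport 4789 \
    local 203.0.113.1 remote 198.51.100.1 dev eth0
ip link set vxlan42 up
```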
WireGuard
WireGuard is a modern, high-performance VPN protocol that uses UDP encapsulation with integrated encryption (ChaCha20-Poly1305). It is built into the Linux kernel as of version 5.6. WireGuard's codebase is approximately 4,000 lines — orders of magnitude simpler than IPSec or OpenVPN.
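A minimal WireGuard endpoint sketch using the wg and ip tools; the key paths, inner addresses, and PEER_PUBLIC_KEY placeholder are assumptions:

```shell
# Generate this endpoint's keypair
wg genkey | tee /etc/wireguard/private.key | wg pubkey > /etc/wireguard/public.key
# Create the interface and attach the peer (substitute the peer's actual public key)
ip link add wg0 type wireguard
wg set wg0 listen-port 51820 private-key /etc/wireguard/private.key \
    peer PEER_PUBLIC_KEY allowed-ips 10.0.2.0/24 endpoint 198.51.100.1:51820
ip addr add 10.255.0.1/30 dev wg0
ip link set wg0 up
```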
Comparison: Common Tunnel Protocols
| Protocol | Encryption | Overhead | Transport | Primary Use Case |
|---|---|---|---|---|
| GRE | None | ~24 bytes | IP protocol 47 | Enterprise site-to-site, multicast over WAN |
| IP-in-IP | None | ~20 bytes | IP protocol 4 | Simple IPv4 encapsulation |
| 6in4 | None | ~20 bytes | IP protocol 41 | IPv6 over IPv4 transit |
| GRE over IPSec | AES-256 (via IPSec) | ~60 bytes | ESP/UDP 4500 | Encrypted enterprise tunnels |
| L2TP/IPSec | AES-256 (via IPSec) | ~60+ bytes | UDP 1701/4500 | Remote access VPN |
| VXLAN | None (add IPSec separately) | ~50 bytes | UDP 4789 | Data center overlay, container networking |
| WireGuard | ChaCha20-Poly1305 | ~60 bytes | UDP (configurable) | Modern VPN, cloud connectivity |
MTU and Fragmentation: The Critical Consideration
Every tunnel adds overhead bytes to each packet. The standard Ethernet MTU is 1500 bytes. If your tunnel adds 60 bytes of overhead, the effective payload MTU drops to 1440 bytes. Any inner packet larger than 1440 bytes must be fragmented before encapsulation or the outer packet will exceed the link MTU and require fragmentation at the IP layer.
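The arithmetic is simple but worth making explicit. Using the overhead figures from the comparison table above:

```shell
# Effective payload MTU = link MTU minus tunnel overhead
link_mtu=1500
echo "GRE:       $((link_mtu - 24))"   # prints 1476
echo "IP-in-IP:  $((link_mtu - 20))"   # prints 1480
echo "VXLAN:     $((link_mtu - 50))"   # prints 1450
echo "WireGuard: $((link_mtu - 60))"   # prints 1440
```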
Fragmentation is expensive. It increases CPU load, complicates reassembly at the destination, and can cause path MTU issues when firewalls block ICMP "fragmentation needed" messages (ICMP type 3 code 4). This is the single most common cause of intermittent tunnel failures — applications work for small transfers but fail or perform poorly for large file transfers.
The correct fix is to configure the tunnel interface MTU to account for encapsulation overhead. For a GRE tunnel on standard Ethernet, set the tunnel interface MTU to 1476 (1500 minus 24 bytes of GRE overhead). For GRE over IPSec, set it to approximately 1400 to account for the combined overhead. Additionally, configure TCP MSS clamping with iptables so TCP does not negotiate segment sizes that produce oversized packets:
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
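The MTU itself is applied on the tunnel interface; a sketch for the GRE case, assuming the interface is named gre1:

```shell
ip link set dev gre1 mtu 1476
# Alternative to --clamp-mss-to-pmtu: clamp to an explicit value
# (1476 tunnel MTU - 20 bytes IP - 20 bytes TCP = 1436)
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1436
```

The explicit form is useful when path MTU discovery toward the far end is unreliable, since --clamp-mss-to-pmtu depends on the kernel's cached path MTU.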
Real-World Use Cases
Site-to-site VPN between offices: Two Cisco routers at different offices establish a GRE tunnel over the internet, encrypted with IPSec. Traffic between the 192.168.10.0/24 and 192.168.20.0/24 subnets flows as if the offices were on the same LAN, regardless of the public internet path between them.
IPv6 connectivity via tunnel broker: A home user on an IPv4-only ISP configures a 6in4 tunnel to Hurricane Electric's tunnel broker service. The user gets a routed /48 IPv6 prefix and native IPv6 connectivity without waiting for their ISP to deploy IPv6. The IPv6 packets travel encapsulated in IPv4 across the ISP's infrastructure.
Container overlay networking: Kubernetes uses VXLAN (via Flannel, Calico, or Cilium) to create a flat virtual network across nodes. Each pod gets an IP from the cluster's Pod CIDR. VXLAN encapsulates inter-pod traffic in UDP, allowing containers on different physical hosts to communicate as if they were on the same Layer 2 network.
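You can inspect the overlay directly on a node. With Flannel's VXLAN backend the interface is typically named flannel.1 (an assumption; Calico and Cilium use different names):

```shell
ip -d link show flannel.1     # -d shows VXLAN details: VNI, local IP, dstport
bridge fdb show dev flannel.1 # forwarding entries mapping MACs to remote node IPs
```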
SD-WAN: Software-defined WAN solutions use overlay tunnels (often VXLAN or proprietary variants) over multiple underlay transports (MPLS, broadband, LTE). The SD-WAN controller dynamically routes traffic across the best available underlay path based on measured latency, packet loss, and jitter.
Common Misconceptions
Tunneling and encryption are the same thing
They are not. Tunneling is encapsulation — wrapping one packet inside another. Encryption is a separate operation that scrambles the inner packet's contents. GRE and IP-in-IP tunnel without encrypting. IPSec and WireGuard encrypt and also use tunneling for transport. You can have tunneling without encryption (GRE), encryption without tunneling (TLS for application data), or both together (GRE over IPSec, WireGuard).
A tunnel is always slower than direct routing
Not necessarily. On modern hardware with kernel-level tunnel implementations, the per-packet overhead of encapsulation is negligible — a few microseconds of CPU time. The latency impact of the tunnel is dominated by the transit path, not the encapsulation processing. Where tunnels do cause measurable performance impact is through MTU overhead and fragmentation, which is a configuration issue rather than an inherent limitation of tunneling.
Split tunneling is a security risk by definition
Split tunneling routes only specific traffic through the VPN tunnel while direct internet traffic bypasses it. It is true that split tunneling reduces the visibility and control a security team has over non-tunneled traffic. However, forcing all traffic through a VPN gateway introduces its own latency penalty and creates a bottleneck at the VPN concentrator. Whether to use full tunneling or split tunneling is a policy decision that depends on threat model and bandwidth capacity.
Tunnel interfaces need their own IP addresses
The tunnel endpoints do need IP addresses for the outer header, but those are typically the routers' existing public interface addresses, not addresses on the tunnel interface itself. The tunnel interface can be configured as numbered or unnumbered depending on routing requirements; in some configurations the endpoint addresses are the only ones needed, and routing uses those. Unnumbered tunnel interfaces reduce address consumption in large-scale deployments.
Pro Tips
- Always configure tunnel interface MTU explicitly. Do not rely on defaults. Calculate the correct MTU based on your tunnel protocol overhead and set it on the tunnel interface configuration. Follow this with TCP MSS clamping to prevent large TCP sessions from discovering the reduced MTU through fragmentation failures.
- Use WireGuard for new deployments where you have control of both endpoints. Its simplicity, performance, and built-in encryption make it the best choice for most new tunnel deployments. Reserve GRE and IP-in-IP for environments where you need multicast support or interoperability with legacy hardware.
- Monitor tunnel endpoint reachability separately from tunnel traffic. A tunnel can appear up (both endpoints reachable) while the inner routing is broken. Test with traffic that traverses the actual tunnel path, not just pings to the tunnel endpoint addresses.
- Document every tunnel with both endpoint IPs, inner subnets, and encryption parameters. Tunnels are invisible to traceroute by default. Without documentation, debugging an outage six months later — when the engineer who built it has left — requires packet captures and reverse engineering the configuration.
- Add GRE keepalives or dead-peer detection for automated failover. Without keepalives, a tunnel endpoint failure may not be detected for minutes. Configure keepalive timers appropriate to your recovery time objective to ensure routing tables are updated promptly when a tunnel goes down.
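The "test the actual tunnel path" tip can be done with a don't-fragment ping sized to fill the tunnel MTU, which verifies both inner reachability and the MTU configuration at once. A sketch, assuming a 1476-byte GRE tunnel MTU and 10.255.0.2 as the far-end tunnel address:

```shell
# 1448 bytes of payload + 8 bytes ICMP + 20 bytes IP = 1476; -M do forbids fragmentation
ping -M do -s 1448 -c 3 10.255.0.2
```

If this fails while a small ping succeeds, the tunnel has an MTU problem, not a reachability problem.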