
wireguard: add per-peer routing table option #27939

Merged
merged 3 commits into from Aug 11, 2017

Conversation

evujumenuk

This adds a convenient per-peer option to set the routing table that associated routes are added to. This functionality is very useful for isolating interfaces from the kernel's global routing and forcing all traffic of a virtual interface (or a group of processes, via e.g. "ip rule add uidrange 10000-10009 lookup 42") through WireGuard.
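As a sketch of the intended usage (interface name, address, and keys are invented for the example; only the `table` attribute is the option added by this PR):

```nix
{
  networking.wireguard.interfaces.wg0 = {
    ips = [ "10.100.0.2/32" ];
    peers = [
      {
        publicKey = "xTIBA5rboUvnH4htodjb6e697QjLERt1NAB4mZqp8Dg=";
        allowedIPs = [ "0.0.0.0/0" ];
        # Routes for this peer's allowedIPs go into table 42
        # instead of the default "main" table.
        table = "42";
      }
    ];
  };
}
```

A rule such as "ip rule add uidrange 10000-10009 lookup 42" then steers those processes' traffic through table 42, and hence through the tunnel.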

Motivation for this change
Things done

Please check what applies. Note that these are not hard requirements but merely serve as information for reviewers.

  • Tested using sandboxing
    (nix.useSandbox on NixOS,
    or option build-use-sandbox in nix.conf
    on non-NixOS)
  • Built on platform(s)
    • NixOS
    • macOS
    • Linux
  • Tested via one or more NixOS test(s) if existing and applicable for the change (look inside nixos/tests)
  • Tested compilation of all pkgs that depend on this change using nix-shell -p nox --run "nox-review wip"
  • Tested execution of all binary files (usually in ./result/bin/)
  • Fits CONTRIBUTING.md.

@@ -240,7 +250,8 @@ in
   peers = [
     { allowedIPs = [ "192.168.20.1/32" ];
       publicKey = "xTIBA5rboUvnH4htodjb6e697QjLERt1NAB4mZqp8Dg=";
-      endpoint = "demo.wireguard.io:12913"; }
+      endpoint = "demo.wireguard.io:12913";
+      table = "42"; }
Contributor

Since most people will want to use the main table, I'd recommend keeping this out of the example.

Author

@evujumenuk evujumenuk Aug 4, 2017

Agreed! -- I wasn't entirely sure whether omitting the new setting from the example was in the maintainers' best interests.

Rectified by 6070d91.

Most users will be served well by the default "table" setting ("main").
@evujumenuk
Author

Thanks for reviewing!

Contributor

@zx2c4 zx2c4 left a comment

Just a small question about something I missed the first time through.

table = mkOption {
  default = "main";
  type = types.str;
  description = ''The kernel routing table to add this peer's associated
Contributor

Something I didn't realize the first time through. Why is this per-peer? Shouldn't this be per-interface?

I realize there's an argument to be made for granularity, but it's hard to see what use cases for it you have in mind. Wouldn't it be simpler to do this per-interface instead?

Or do you have a particular per-peer use case in mind?

Author

Interesting.

If used as described in the Conceptual Overview, WireGuard implements (something similar to) policy-based VPN. That is, a WireGuard interface abstracts over a multitude of peer associations, and each peer can be associated with many other peers. Routing decisions are made only within WireGuard code, and ingress filtering is performed automatically.

What you can also do is use WireGuard interfaces as "dumb" point-to-point links, with AllowedIPs set to 0/0 on both sides, and make all routing decisions the same way they are made with other types of interfaces. This is what one would call route-based VPN. Instead of maintaining AllowedIPs, one would need to maintain sensible static routes. Or you could run dynamic routing protocols across WireGuard links. You'd still need to do ingress filtering (BCP-84-like) yourself.

In this latter case, you are right: inserting all routes associated with a single interface into a single routing table is completely sufficient.

In the former case, however, using different tables for different routes on a single interface is useful. Suppose you configure a WireGuard interface with, say, five different peers in different networks, on one of your machines. Furthermore, let's say you run five VMs, with a TAP interface each, on the same machine. Normally, the associated qemu processes all get their packets routed using the same routing table (main). You could, however, use ip rule add iif ${TAPINTERFACE} table ${TABLE} to use a different routing table for each TAP interface. Then, each VM would be connected to a different WireGuard peer. I'd then use ip rule add iif ${WGINTERFACE} table ${WGTABLE} so I can have a separate table for the WireGuard interface, as well.
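Spelled out, the rule setup described above might look like this (interface names and table numbers are purely illustrative):

```shell
# Each VM's TAP interface gets its own routing table,
# each pointing at a different WireGuard peer's routes...
ip rule add iif tap0 table 100
ip rule add iif tap1 table 101
# ...and the WireGuard interface itself gets a separate table:
ip rule add iif wg0 table 42
```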

Personally, I'd go with many interfaces with single peers. (One reason: I'm not sure how WireGuard behaves in the presence of multiple overlapping AllowedIPs settings. I suspect routingtable.c has the answer; maybe you could enlighten us. ;) Using standard routing enables the use of ECMP.)

But WireGuard is flexible enough to also allow single interfaces with many peers.
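The "many interfaces, one peer each" model might look roughly like this in NixOS terms (peer names, keys, and table numbers are invented for the sketch; `table` is the per-peer option this PR adds):

```nix
{
  networking.wireguard.interfaces = {
    # One interface per peer; each peer's routes land
    # in that interface's dedicated table.
    wg-alice.peers = [
      { publicKey = "...";  # alice's key (placeholder)
        allowedIPs = [ "0.0.0.0/0" ];
        table = "101"; }
    ];
    wg-bob.peers = [
      { publicKey = "...";  # bob's key (placeholder)
        allowedIPs = [ "0.0.0.0/0" ];
        table = "102"; }
    ];
  };
}
```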

To be honest, there's also a case to be made for per-AllowedIPs routing tables, but:

  1. I don't readily see how to implement that elegantly without breaking the interface, and
  2. probably not too many people require this functionality.

Contributor

@zx2c4 zx2c4 Aug 6, 2017

So, given all this, would you prefer to move the table ID to the Interface-level, or keep it at the Peer-level like here?

I'm not sure how WireGuard behaves in the presence of multiple overlapping AllowedIPs settings. I suspect routingtable.c has the answer, maybe you could enlighten us.

Most specific match wins.

Author

I'd probably want to keep it at the per-peer level -- it enables a few more use cases, without significant drawbacks: If I'm only configuring a single peer on each interface, then I'll only need to set table once per interface definition anyway -- just as if it were a per-interface option.

I think the "multiple interface, single peer" model is simpler and more flexible, but other people will most probably have different requirements. For example, "ingress filtering for free" is a significant advantage for ease of use.

Most specific match wins.

So what if multiple peers have the same CIDR range configured as AllowedIPs? Will packets be forwarded to just one peer? Or all of them in a round-robin fashion? If the latter, how can one deal with TCP packet reordering?

Contributor

@zx2c4 zx2c4 Aug 7, 2017

Okay that's fine. It'll be a hassle if somebody wants all the peers on the same table and there are a lot of peers, but I guess it's just copy and paste for that. So okie dokes.

So what if multiple peers have the same CIDR range configured as AllowedIPs?

This can't happen. Watch what happens when you try:

[screenshot omitted]

Author

Right, I forgot about the case where one configures one interface with many peers and wants their routing table entries in one table that is not main.

Do you think that's going to be a common requirement? It shouldn't be too hard to also introduce a per-interface option that sets a default for the per-peer options.

This can't happen. Watch what happens when you try:

Okay, this seems to mean that for primary/primary, or even primary/secondary setups, using multiple interfaces is necessary.

I'm fine with this! Basically, you introduced a complication in WireGuard's design to make a pretty common case easier to set up. :)

Contributor

@zx2c4 zx2c4 Aug 7, 2017

Yes, if you want to do complicated things with routing and secondaries and all sorts of things, you likely want to use the existing routing utilities. The multi-peer feature helps with "server-like" setups.

For this reason maybe you do want to make that table setting per-interface?

If I add a similar config option to wg-quick, I most certainly will make it per-interface. For that reason, too, maybe it's best to keep everything uniform.

Author

After pondering this problem for a while, I have to agree. Policy routing via ip rule can never achieve its full potential if, in a single interface, different routes to the same CIDR range are disallowed anyway. So supporting per-peer routing tables is not that much of a win. Furthermore, having to specify table for each peer (if it's not main) makes the "correct" setup needlessly complicated.

Linux even gives you BCP{38,84} filtering for free, with net.ipv4.conf.{default,all}.rp_filter! Using a single interface for multiple peers now seems only marginally easier in the "server-like" case. It does save private keys and UDP ports, though.
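The rp_filter sysctls mentioned above can be set declaratively on NixOS; a sketch (not part of this PR):

```nix
{
  boot.kernel.sysctl = {
    # Strict reverse-path filtering, i.e. BCP 38/84-style
    # ingress filtering done by the kernel.
    "net.ipv4.conf.default.rp_filter" = 1;
    "net.ipv4.conf.all.rp_filter" = 1;
  };
}
```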

In any case: I've modified the branch accordingly. eaab02b has the goods.

Do the right thing, and use multiple interfaces for policy routing. For example, WireGuard interfaces do not allow multiple routes for the same CIDR range.
@evujumenuk
Author

The Travis CI build was aborted -- apparently, its log exceeded 4 MB. A brief skim reveals that e.g. bash is being built from source...

@fpletz
Member

fpletz commented Aug 11, 2017

Travis had to do a mass rebuild. Should be fine.

@fpletz fpletz merged commit 61d133c into NixOS:master Aug 11, 2017
@fpletz
Member

fpletz commented Aug 11, 2017

Thanks for the PR and to the reviewers!

@evujumenuk evujumenuk deleted the wireguard-rt_tables branch August 11, 2017 23:22