I hope to make this a “many part series”, broken up over time, but not every post… if that makes sense. What started this was that while I have access to some labs at work, they aren’t always available and they aren’t always conducive to me blowing them up. (I like to break things… then fix them, it’s how I learn.) Because of this, I bought some gear to hammer away on to test new things on RHEL 7, KVM, OpenStack, etc… but I’m not independently wealthy either, so I’m going to make do with what I have for some pieces…
For example – I have an older Viewsonic 19″ widescreen… I likely won’t use it much, as I will be hitting the lab server remotely – but it’s there. And as I realized when I first started setting up KVM, I only have a single 1GbE connection… Not my favorite way of doing things. HOWEVER – that single connection is how we got here today.
We’re going back to basics for a moment. Why? Because sometimes it’s a good reminder, it’s a good way to learn a new version of an OS (RHEL 7!!), and sometimes it’s a great way to bring new folks into the fold. I’m hoping for the latter two in this case. So, KVM networking by default depends on NAT, or Network Address Translation. Your VMs can easily communicate out, but reaching them directly from the outside is troublesome. Enter Linux bridging. This is a very straightforward way of ensuring consistent two-way traffic between your VMs and the “world”.
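For reference, that default NAT behavior comes from libvirt’s built-in network, named “default”, which you can inspect with virsh net-dumpxml default. On a stock install it looks roughly like this (addresses shown are libvirt’s shipped defaults):

```xml
<!-- libvirt's default NAT network: guests get DHCP on 192.168.122.0/24
     and traffic is masqueraded out through the host (virbr0) -->
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
```

That `<forward mode='nat'/>` line is the reason inbound connections to your guests are painful – which is exactly what the bridge below gets us around.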
Normally I would advocate that you have an interface dedicated for your hypervisor and another interface dedicated for your VM traffic. But again, we’ve accepted that we’re going to work within constraints and learn from them.. so what are our options? We could carve up our interface into VLANs and then dedicate them.. except that nothing else in my home lab talks VLANs. (Sorry, only small office routers, no Cisco..)
That’s ok, we’ll use the bridge for everything. “But Captain”, you say, “didn’t you once tell us that designating an interface as a bridge means it’s completely used up as a bridge?”
Yup. I’ve said that a couple of times. But that bridge still has an IP address, right? So for our lab purposes, why can’t we just use the bridge IP as the hypervisor access point as well? Again, I would absolutely not advocate doing it this way in production, but this is a home lab where the sole purpose is running VMs to test and learn.
So let’s do it – oh, and you might just learn a little RHEL 7 along the way:
First, configure your primary interface (RHEL 7 changes the naming convention of PCI and onboard network interfaces)… essentially just telling it that its sole purpose is to be a bridge:
[root@orionsbelt network-scripts]# cat ifcfg-enp3s0
DEVICE=enp3s0
ONBOOT=yes
BRIDGE=br0
NM_CONTROLLED=no
Next, we’ll configure the bridge itself:
[root@orionsbelt network-scripts]# cat ifcfg-br0
DEVICE=br0
ONBOOT=yes
TYPE=Bridge
BOOTPROTO=dhcp
STP=yes
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME="Bridge br0"
UUID=d2d68553-f97e-7549-7a26-b34a26f29318
ZONE=public
DELAY=0
BRIDGING_OPTS=priority=32768
PEERDNS=yes
PEERROUTES=yes
NM_CONTROLLED=no
Next, we restart the networking service, then stop and disable “NetworkManager” (since we set NM_CONTROLLED=no, the legacy network service owns these interfaces from here on – make sure it’s enabled with systemctl enable network so the config survives a reboot):
[root@orionsbelt network-scripts]# systemctl restart network
[root@orionsbelt network-scripts]# systemctl stop NetworkManager
[root@orionsbelt network-scripts]# systemctl disable NetworkManager
rm '/etc/systemd/system/multi-user.target.wants/NetworkManager.service'
rm '/etc/systemd/system/dbus-org.freedesktop.NetworkManager.service'
rm '/etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service'
Finally, we check our IP address listing:
[root@orionsbelt network-scripts]# ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP qlen 1000
    link/ether 54:be:f7:68:f3:a0 brd ff:ff:ff:ff:ff:ff
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 52:54:00:3b:c1:c1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 500
    link/ether 52:54:00:3b:c1:c1 brd ff:ff:ff:ff:ff:ff
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 54:be:f7:68:f3:a0 brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.47/24 brd 10.0.1.255 scope global dynamic br0
       valid_lft 82184sec preferred_lft 82184sec
    inet6 fe80::56be:f7ff:fe68:f3a0/64 scope link
       valid_lft forever preferred_lft forever
So now, all of my VMs pick up addresses from the same place as all of my other systems – and yes, this is a good thing in my particular environment. I can always go back and adjust the DHCP scope and hard code IP addresses if I need to, and even create additional subnets, but that would be another post…
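For completeness, the way a guest actually lands on br0 is in its interface definition. A minimal sketch of the relevant stanza in the domain XML (edit it with virsh edit on the guest, or point virt-manager’s NIC at “Bridge br0”) looks like this – the virtio model is my assumption, any supported NIC model works:

```xml
<!-- Guest NIC attached directly to the host bridge br0, so the VM
     gets DHCP from the same upstream scope as the rest of the LAN -->
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```

Once the guest boots with that interface, it shows up on your 10.0.1.0/24 network like any other machine.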
Hope this helps,