It's late night here and, unexpectedly, I'm high on motivation to do something other than work, so I'm channelling my procrastination energy into writing this blog post.
I've previously blogged a post on setting up an Active/Passive HA setup for Linux servers, so this one goes a step further, into a single server. By a step further into the server I mean having some form of High Availability on the network interfaces.
Link Aggregation, NIC Bonding, NIC Teaming, and Interface Bonding are the various names it goes by. To read some basics on it, visit this Wikipedia link.
My basic motivation for creating NIC bonding on my servers was to build a self-healing topology in which a single cable or interface failure does not impact any service at all. Since I've set up redundant power supplies, servers, switches, and routers with my own hands, I know how this adds up in my setup. Removing one cable from the server keeps the server accessible, and hence all services keep working perfectly fine.
The additional benefit I can get from NIC bonding is link "aggregation". The two 1 Gbps interfaces can combine to give me an aggregated speed of 2 Gbps. That is something I still need to test, and I'll probably post my findings on how real it is sometime, by transferring huge chunks of data.
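For when I do get around to testing it, a rough sketch with iperf (assuming it is installed on two machines on the LAN, and reusing the bond IP configured below; the "server" and "client" prompts are just placeholders) would be to run a server on one box and a few parallel streams from another. Keep in mind that active-backup (mode 1, used below) keeps only one link active at a time, so a mode like balance-rr or 802.3ad would be needed to actually see more than 1 Gbps to a single host.
server# iperf -s
client# iperf -c 192.168.15.10 -P 4 -t 30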
Let's move forward.
WARNING: I had to reboot one of my servers because an interface was already configured, so a service restart didn't work properly: the same IP remained configured on both eth0 and bond0 and caused a temporary access issue. Just to be safe, have KVM/ILOM remote access ready while doing this setup.
Creating NIC Bond interface on CentOS 6.4
[root@ASTERISK-A ~]# vim /etc/sysconfig/network-scripts/ifcfg-bond0
and insert the following fairly simple-to-understand lines:
DEVICE=bond0
IPADDR=192.168.15.10
NETMASK=255.255.255.0
GATEWAY=192.168.15.45
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
DNS1=192.168.15.45
Now remember, we need at least two NICs present on the server to be part of the bond. This could be three Gig interfaces, if you have them available, in order to achieve a 3 Gbps link.
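If you are not sure which interfaces the server actually has, a quick way to list them along with their link state is:
[root@ASTERISK-A ~]# ip -o link show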
Edit the interfaces that are going to be part of this bond.
[root@ASTERISK-A ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
[root@ASTERISK-A ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
MASTER=bond0
SLAVE=yes
Now it's time to set up some parameters for the 007-Bond interface.
[root@ASTERISK-A ~]# vim /etc/modprobe.d/bonding.conf
alias bond0 bonding
options bond0 mode=1 miimon=100 arp_interval=100 arp_ip_target=192.168.15.45,192.168.15.5,192.168.15.20
The above configuration is used by the "bonding" Linux kernel module. The options are important here:
mode=1 : Set the bonding method to Active backup.
miimon=100 : Set the MII link monitoring frequency to 100 milliseconds. This determines how often the link state of each slave is inspected for link failures.
arp_interval=100 : Set the ARP link monitoring frequency to 100 milliseconds (you can set any value, keeping your network equipment in mind). It is important for this to be present.
arp_ip_target=192.168.15.45, 192.168.15.5 : Use the 192.168.15.5 (router IP) and 192.168.15.45 addresses as ARP monitoring peers when arp_interval is > 0. This is used to determine the health of the link to the targets. Multiple IP addresses must be separated by commas. At least one IP address must be given (I usually set it to the router IP) for ARP monitoring to function. The maximum number of targets that can be specified is 16.
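Once the bond is up (after the network restart in the next step), the values the driver actually picked up can be double-checked through its sysfs entries; the exact set of files varies by kernel, but on CentOS 6 something like this works:
[root@ASTERISK-A ~]# cat /sys/class/net/bond0/bonding/mode
[root@ASTERISK-A ~]# cat /sys/class/net/bond0/bonding/miimon
[root@ASTERISK-A ~]# cat /sys/class/net/bond0/bonding/arp_ip_target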
That's all. Just restart the networking service; if you had any Ethernet interface already configured, you might need to shut that interface down and then start the network service again.
[root@ASTERISK-A ~]# /etc/init.d/network restart
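To confirm everything came up as expected, the bonding driver also reports live status (bonding mode, currently active slave, and the link state of each slave) in /proc; a quick look is something like:
[root@ASTERISK-A ~]# cat /proc/net/bonding/bond0
[root@ASTERISK-A ~]# ip addr show bond0
Pulling one cable at this point and watching the active slave switch over in that output is a nice sanity check.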
Creating NIC Bond interface on Ubuntu 12.04
On Ubuntu Server the steps are 90% the same, except that we need to install the package which provides the user-space helper for the bonding kernel module.
root@OpenSIPS-A:~# apt-get install ifenslave
We just need to edit one file here.
root@OpenSIPS-A:~# vim /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    mtu 9000
    address 192.168.15.30
    netmask 255.255.255.0
    network 192.168.15.0
    broadcast 192.168.15.255
    gateway 192.168.15.45
    dns-nameservers 192.168.15.45
    bond-miimon 100
    bond-downdelay 200
    bond-updelay 200
    bond-mode active-backup
    bond-slaves none
To make sure that the bonding kernel module is loaded on reboots, edit the file /etc/modules,
add the word "bonding" at the end, then save and exit. To load the bonding module right away, execute the following command:
root@OpenSIPS-A:~# modprobe bonding
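If you prefer doing the /etc/modules edit from the shell, and want to confirm the module actually loaded, something like this works (assuming a root shell):
root@OpenSIPS-A:~# echo bonding >> /etc/modules
root@OpenSIPS-A:~# lsmod | grep bonding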
Now restart the networking service, and the bond0 interface should be up and ready.
root@OpenSIPS-A:~# /etc/init.d/networking restart
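As on CentOS, the state of the bond (mode, active slave, per-slave link status) can be verified once networking is back up:
root@OpenSIPS-A:~# cat /proc/net/bonding/bond0
root@OpenSIPS-A:~# ip addr show bond0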
Creating NIC Bond interface on Vyatta 6.6
Vyatta is one of my favorite subjects; huge thanks to Mr. Asim Ansari, who introduced me to it back in 2010, and I've been using and loving it ever since. There is other cool stuff Vyatta is doing for me which I'll cover later on. Let's see how to create a bond interface on Vyatta.
vyatta@FW-A:~$ configure
vyatta@FW-A# set interfaces bonding bond0 address 192.168.15.45/24
vyatta@FW-A# set interfaces bonding bond0 arp-monitor interval 100
vyatta@FW-A# set interfaces bonding bond0 mode adaptive-load-balance
vyatta@FW-A# set interfaces bonding bond0 mtu 9000
vyatta@FW-A# set interfaces ethernet eth0 bond-group bond0
vyatta@FW-A# set interfaces ethernet eth1 bond-group bond0
vyatta@FW-A# commit
vyatta@FW-A# save
WARNING: Once again, make sure that eth0 and eth1 do not already have IP addresses assigned; if they do, please delete those addresses before assigning that ethX interface to the bond-group.
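Once committed, the bond and its member interfaces can be checked from operational mode (prefix the command with "run" if you are still in configuration mode); the exact syntax and output vary a bit between Vyatta releases, but something along these lines should show the status, and since Vyatta is Debian-based the kernel's /proc view is also there:
vyatta@FW-A:~$ show interfaces bonding
vyatta@FW-A:~$ cat /proc/net/bonding/bond0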
That's all for tonight. I'm sleepy now and should get some rest while you enjoy having a good time with your servers and setups.