
A machine with two NICs.

First, confirm that bonding support is compiled into the kernel.


I built it as a module (compiling it as a module makes it easy to control its parameters).
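A quick way to check this (a sketch, assuming the running kernel's sources and .config live at /usr/src/linux):

grep '^CONFIG_BONDING' /usr/src/linux/.config   # CONFIG_BONDING=m means built as a module
modinfo bonding                                  # succeeds and lists the module's parameters if it is available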

Modify /etc/conf.d/modules:

echo "modules=\"bonding\"" >> /etc/conf.d/modules


Add the module parameters.


cd /etc/modprobe.d/
touch bonding.conf

Add the following to bonding.conf:



alias bond0 bonding
options bond0 miimon=100 mode=6



Notes:

# miimon sets the link-monitoring interval in milliseconds. miimon=100 means the link between the NIC and the switch is checked every 100 ms; if it is down, traffic fails over to the other link.
# mode selects the bonding mode. There are seven modes (0-6); 0, 1, and 6 are the most common.
# mode=0: round-robin (balance-rr); all links carry traffic simultaneously (load balancing).
# mode=1: active-backup; only one NIC works at a time, and the other takes over if it fails.
# mode=2: balance-xor; provides load balancing and fault tolerance.
# mode=3: broadcast; fault tolerance only; every frame is sent out on all slave interfaces of the bond.
# mode=4: 802.3ad; IEEE 802.3ad dynamic link aggregation.
# mode=5: adaptive transmit load balancing (balance-tlb).
# mode=6: adaptive load balancing (balance-alb).

mode: I'll go with 6 here.
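Once the bond is up (see the steps below), the driver's procfs status file can confirm that the options took effect:

cat /proc/net/bonding/bond0
# expect "Bonding Mode: adaptive load balancing" and "MII Polling Interval (ms): 100"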




Install ifenslave.
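On Gentoo this is a single emerge (assuming the usual package name, net-misc/ifenslave):

emerge -av net-misc/ifenslave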



Modify /etc/conf.d/net:


####### eth0    ######### disable the old configuration
#config_eth0="dhcp"
####### eth1    ######### disable the old configuration
#config_eth1="dhcp"
#config_eth1="192.168.0.50 netmask 255.255.255.0"
#routes_eth1="default via 192.168.0.51"
####### bond0   #########
config_eth0="null"
config_eth1="null"
slaves_bond0="eth0 eth1"
config_bond0="dhcp"
depend_bond0() {
    need net.eth0 net.eth1
}

I use DHCP on my internal network; configure as needed. A static configuration looks like this:

DGtest ~ # cat /etc/conf.d/net

config_eth0="null"

config_eth1="null"

rc_net_bond0_need="net.eth0 net.eth1"

slaves_bond0="eth0 eth1"

mtu_bond0="9000"

config_bond0="10.64.128.97/29"

routes_bond0="default via 10.64.128.102"
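Note that /etc/init.d/net.bond0 must exist before it can be started. With netifrc, per-interface init scripts are just symlinks to net.lo, so if it is missing:

cd /etc/init.d
ln -s net.lo net.bond0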

Run it manually:

/etc/init.d/net.bond0 start
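To bring the bond up automatically at boot, the standard OpenRC route is to add it to the default runlevel; the local.d approach below is an alternative:

rc-update add net.bond0 default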

Starting it directly from /etc/local.d, however, fails; the error is recorded in /var/log/rc.log.



A working /etc/local.d/*.start script looks like this:


#!/bin/bash
#
/etc/init.d/net.eth0 start
/etc/init.d/net.eth1 start
/etc/init.d/net.bond0 start
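local.d only executes *.start files with the executable bit set, so remember (bond0.start is just an assumed file name here):

chmod +x /etc/local.d/bond0.start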


DGtest ~ # ip a |grep -v va

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default

   link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

   inet 127.0.0.1/8 brd 127.255.255.255 scope host lo

   inet6 ::1/128 scope host

2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master bond0 state UP group default qlen 1000

   link/ether 00:15:17:60:60:b9 brd ff:ff:ff:ff:ff:ff

3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master bond0 state UP group default qlen 1000

   link/ether 00:15:17:60:60:b8 brd ff:ff:ff:ff:ff:ff

4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default

   link/ether 00:15:17:60:60:b8 brd ff:ff:ff:ff:ff:ff

   inet 10.64.128.97/29 brd 10.64.128.103 scope global bond0

   inet6 fe80::215:17ff:fe60:60b8/64 scope link

DGtest ~ # dmesg

[ 1678.110694] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready

[ 1680.858623] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None

[ 1680.859133] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready

[20634.890619] bonding: Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

[20634.890628] bonding: In ALB mode you might experience client disconnections upon reconnection of a link if the bonding module updelay parameter (0 msec) is incompatible with the forwarding delay time of the switch

[20634.890630] bonding: MII link monitoring set to 100 ms

[20634.891338] Loading kernel module for a network device with CAP_SYS_MODULE (deprecated).  Use CAP_NET_ADMIN and alias netdev-bond0 instead.

[20634.891429] IPv6: ADDRCONF(NETDEV_UP): bond0: link is not ready

[20635.048300] IPv6: ADDRCONF(NETDEV_UP): bond0: link is not ready

[20635.052038] bonding: bond0: Adding slave eth0.

[20635.078215] e1000e 0000:05:00.0: irq 66 for MSI/MSI-X

[20635.179615] e1000e 0000:05:00.0: irq 66 for MSI/MSI-X

[20635.180670] bonding: bond0: enslaving eth0 as an active interface with a down link.

[20635.183930] bonding: bond0: Adding slave eth1.

[20635.210094] e1000e 0000:05:00.1: irq 67 for MSI/MSI-X

[20635.311611] e1000e 0000:05:00.1: irq 67 for MSI/MSI-X

[20635.312669] bonding: bond0: enslaving eth1 as an active interface with a down link.

[20635.383033] e1000e 0000:05:00.0 eth0: changing MTU from 1500 to 9000

[20635.457330] e1000e 0000:05:00.1 eth1: changing MTU from 1500 to 9000

[20637.971508] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None

[20638.048012] bonding: bond0: link status definitely up for interface eth1, 1000 Mbps full duplex.

[20638.048020] bonding: bond0: making interface eth1 the new active one.

[20638.048201] device eth1 entered promiscuous mode

[20638.048213] bonding: bond0: first active interface up!

[20638.048221] IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready

[20638.068400] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None

[20638.148014] bonding: bond0: link status definitely up for interface eth0, 1000 Mbps full duplex.

[20648.048006] device eth1 left promiscuous mode

eth0 taken down:

[21416.148014] bonding: bond0: link status definitely down for interface eth0, disabling it

[21416.148021] device eth1 entered promiscuous mode

eth0 brought back up:

[21426.148010] device eth1 left promiscuous mode

[21557.851610] e1000e 0000:05:00.0: irq 66 for MSI/MSI-X

[21557.953612] e1000e 0000:05:00.0: irq 66 for MSI/MSI-X

[21561.071397] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None

[21561.148014] bonding: bond0: link status definitely up for interface eth0, 1000 Mbps full duplex.
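The "eth0 taken down / brought back up" events above came from physically toggling the link; a handy way to watch the failover in real time is to poll the bonding status file (a sketch, assuming procps' watch is installed):

watch -n1 cat /proc/net/bonding/bond0    # the per-slave "MII Status" field flips between up and down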

Reposted from: https://blog.51cto/dg123/1344150
