Hot-Adding a New Node to a Hadoop/HBase Cluster
Environment: a four-node Hadoop and HBase cluster; the new node to be added is hd5 (192.168.137.105).
    192.168.137.101  hd1
    192.168.137.102  hd2
    192.168.137.103  hd3
    192.168.137.104  hd4

1. Set the hostname on the new node
    vi /etc/sysconfig/network
    hostname hd5
   Log out and back in for the change to take effect.
   Check the firewall status:  service iptables status
   Stop the firewall:          service iptables stop

2. On hd5, edit /etc/hosts and add
    192.168.137.105 hd5

3. Distribute the file to hd1, hd2, hd3 and hd4
    scp /etc/hosts hd1:/etc
    scp /etc/hosts hd2:/etc
    scp /etc/hosts hd3:/etc
    scp /etc/hosts hd4:/etc

4. On hd5, delete the old public/private key files in ~/.ssh and generate a new key pair
    cd ~/.ssh
    rm id_rsa
    rm id_rsa.pub
    ssh-keygen -t rsa

5. Copy the authorized_keys file from the original hd1 node to hd5, then append the new public key
    cat ~/.ssh/id_rsa.pub >> authorized_keys

6. Distribute the updated file to the other nodes
    scp ~/.ssh/authorized_keys hd1:/home/hadoop/.ssh
    scp ~/.ssh/authorized_keys hd2:/home/hadoop/.ssh
    scp ~/.ssh/authorized_keys hd3:/home/hadoop/.ssh
    scp ~/.ssh/authorized_keys hd4:/home/hadoop/.ssh

7. From every node, perform a first SSH login to hd5 (a loopback login from hd5 itself is also a good idea); a consolidated sketch of steps 4-7 follows the procedure
    on hd1:  ssh hd5 date
    on hd2:  ssh hd5 date
    on hd3:  ssh hd5 date
    on hd4:  ssh hd5 date
    on hd5:  ssh hd5 date

8. Copy the Hadoop and HBase installation files from one of the existing nodes to the new node, then update the configuration. On hd5, edit Hadoop's slaves file and add hd5
    vim /home/hadoop/hadoop/etc/hadoop/slaves
   Then distribute it to the other nodes (the destination is the same etc/hadoop directory)
    scp /home/hadoop/hadoop/etc/hadoop/slaves hd1:/home/hadoop/hadoop/etc/hadoop
    scp /home/hadoop/hadoop/etc/hadoop/slaves hd2:/home/hadoop/hadoop/etc/hadoop
    scp /home/hadoop/hadoop/etc/hadoop/slaves hd3:/home/hadoop/hadoop/etc/hadoop
    scp /home/hadoop/hadoop/etc/hadoop/slaves hd4:/home/hadoop/hadoop/etc/hadoop

9. Start the DataNode on hd5
    ./hadoop-daemon.sh start datanode

10. On hd5, run start-balancer.sh to rebalance the HDFS blocks (see the verification notes after the procedure)
    start-balancer.sh

11. If HBase is also running on the cluster, the HBase RegionServer has to be deployed on the new node as well. Edit the regionservers file and add hd5
    vim /home/hadoop/hbase/conf/regionservers
   Then copy the regionservers file to hd1, hd2, hd3 and hd4
    scp regionservers hd1:/home/hadoop/hbase/conf
    scp regionservers hd2:/home/hadoop/hbase/conf
    scp regionservers hd3:/home/hadoop/hbase/conf
    scp regionservers hd4:/home/hadoop/hbase/conf

12. Start the HBase RegionServer on hd5
    hbase-daemon.sh start regionserver

13. Start the HBase shell on hd1 and hd5 and confirm the cluster state with the status command (examples after the procedure).
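A note on steps 4-7: the manual copying of authorized_keys can also be replaced with ssh-copy-id. The lines below are only a sketch, not part of the original procedure; they assume a hadoop user on every node, that password login is still possible for the first copy, and that the hostnames are the ones added to /etc/hosts above.

    # on hd5, as the hadoop user: regenerate the key pair and push the public key to every node
    ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
    for host in hd1 hd2 hd3 hd4 hd5; do
        ssh-copy-id hadoop@$host
    done
    # on each existing node (hd1..hd4): push its key to hd5 so the reverse direction also works,
    # then do the first login so the host key gets recorded, as in step 7
    ssh-copy-id hadoop@hd5
    ssh hd5 date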
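To verify steps 9 and 10, the standard HDFS admin report can be checked from any node; the grep is only an illustration and assumes the report lists the new node by its hostname. The balancer also accepts an optional threshold (percentage of disk-usage deviation; the default is 10).

    hdfs dfsadmin -report              # lists all live datanodes; hd5 should appear among them
    hdfs dfsadmin -report | grep hd5   # quick check for the new node only
    start-balancer.sh -threshold 5     # a smaller threshold moves more blocks than the default 10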
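For step 13, the status command in the HBase shell has several forms; the more detailed variants list each region server, so hd5 should show up once its RegionServer has joined.

    hbase shell
    status              # summary: number of servers, dead servers, average load
    status 'simple'     # one line per region server
    status 'detailed'   # full per-regionserver and per-region information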