Hadoop Distributed Environment High-Availability Configuration
Date: 2019-11-26    Source: OSCHINA
Background
An earlier article covered the distributed Hadoop setup, but it did not address high availability. This time we use ZooKeeper to configure Hadoop for high availability.
1. Environment Preparation
1) Set static IP addresses
2) Set the hostnames and the hostname-to-IP mappings
3) Disable the firewall
4) Set up passwordless SSH login
5) Create the hadoop user and group
6) Update the package sources, install the JDK, and configure environment variables (a command sketch for these steps follows this list)
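A minimal sketch of the preparation steps, assuming three CentOS 7 hosts named node1/node2/node3, hypothetical IPs 192.168.1.101-103, and the JDK 8u221 tarball (to match the JAVA_HOME used later); adjust for your own distribution and addresses:

# run on every node as root
hostnamectl set-hostname node1              # node2 / node3 on the other hosts
cat >> /etc/hosts <<EOF
192.168.1.101 node1
192.168.1.102 node2
192.168.1.103 node3
EOF
systemctl stop firewalld && systemctl disable firewalld
groupadd hadoop && useradd -g hadoop hadoop
tar -zxvf jdk-8u221-linux-x64.tar.gz -C /usr/local/   # yields /usr/local/jdk1.8.0_221

# run on every node as the hadoop user: passwordless SSH to all nodes
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id hadoop@node1
ssh-copy-id hadoop@node2
ssh-copy-id hadoop@node3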
2. Server Planning
Node1              Node2              Node3
NameNode           NameNode
JournalNode        JournalNode        JournalNode
DataNode           DataNode           DataNode
ZK                 ZK                 ZK
ResourceManager                       ResourceManager
NodeManager        NodeManager        NodeManager

3. Configure the ZooKeeper Cluster
See my earlier article on installing and configuring ZooKeeper.
4. Install Hadoop
1) Official download address: http://hadoop.apache.org/
2) Extract hadoop-2.7.2 to /usr/local/hadoop2.7
3) Change the owner and group of hadoop2.7 to hadoop:
   chown -R hadoop:hadoop /usr/local/hadoop2.7
4) Configure HADOOP_HOME:
   vim /etc/profile
   #HADOOP_HOME
   export HADOOP_HOME=/usr/local/hadoop2.7
   export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
   export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
   export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
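Steps 2)-4) as runnable commands, a sketch assuming the hadoop-2.7.2.tar.gz tarball is already in the current directory:

tar -zxvf hadoop-2.7.2.tar.gz -C /usr/local/
mv /usr/local/hadoop-2.7.2 /usr/local/hadoop2.7
chown -R hadoop:hadoop /usr/local/hadoop2.7
# append the export lines above to /etc/profile, then reload and verify
source /etc/profile
hadoop version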
5. Configure Hadoop Cluster High Availability
5.1 Configure the HDFS Cluster
hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_221
hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- Logical name of the fully distributed cluster (nameservice) -->
    <property>
        <name>dfs.nameservices</name>
        <value>hadoopCluster</value>
    </property>
    <!-- The NameNodes in the cluster -->
    <property>
        <name>dfs.ha.namenodes.hadoopCluster</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC address of nn1 -->
    <property>
        <name>dfs.namenode.rpc-address.hadoopCluster.nn1</name>
        <value>node1:9000</value>
    </property>
    <!-- RPC address of nn2 -->
    <property>
        <name>dfs.namenode.rpc-address.hadoopCluster.nn2</name>
        <value>node2:9000</value>
    </property>
    <!-- HTTP address of nn1 -->
    <property>
        <name>dfs.namenode.http-address.hadoopCluster.nn1</name>
        <value>node1:50070</value>
    </property>
    <!-- HTTP address of nn2 -->
    <property>
        <name>dfs.namenode.http-address.hadoopCluster.nn2</name>
        <value>node2:50070</value>
    </property>
    <!-- Where the NameNode shared edit log is stored on the JournalNodes -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://node1:8485;node2:8485;node3:8485/hadoopCluster</value>
    </property>
    <!-- Fencing method, so that only one NameNode serves clients at a time -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <!-- sshfence requires passwordless SSH -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <!-- JournalNode storage directory -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data_disk/hadoop/jn</value>
    </property>
    <!-- Disable permission checking -->
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <!-- Proxy provider the client uses to find the active NameNode and fail over automatically -->
    <property>
        <name>dfs.client.failover.proxy.provider.hadoopCluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///data_disk/hadoop/name</value>
        <description>To protect the metadata, multiple different directories are usually configured</description>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///data_disk/hadoop/data</value>
        <description>DataNode data storage directory</description>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
        <description>Number of HDFS block replicas; the default is 3</description>
    </property>
</configuration>
core-site.xml
<configuration>
    <!-- Default file system: the HDFS nameservice defined above -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoopCluster</value>
    </property>
    <!-- Directory for files Hadoop generates at runtime -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:///data_disk/hadoop/tmp</value>
    </property>
</configuration>
Start the Hadoop cluster:
(1) On each JournalNode node, start the journalnode service:
    sbin/hadoop-daemon.sh start journalnode
(2) On [nn1], format the NameNode and start it:
    bin/hdfs namenode -format
    sbin/hadoop-daemon.sh start namenode
(3) On [nn2], synchronize nn1's metadata:
    bin/hdfs namenode -bootstrapStandby
(4) Start [nn2]:
    sbin/hadoop-daemon.sh start namenode
(5) On [nn1], start all DataNodes:
    sbin/hadoop-daemons.sh start datanode
(6) Switch [nn1] to Active:
    bin/hdfs haadmin -transitionToActive nn1
(7) Check whether it is Active:
    bin/hdfs haadmin -getServiceState nn1
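As a quick sanity check (a sketch; the exact list depends on the node and on whether ZooKeeper is already running), jps on each node should show the daemons planned in section 2:

# run on each node as the hadoop user
jps
# on node1/node2 expect NameNode, JournalNode, DataNode (plus QuorumPeerMain if ZooKeeper is up)
# on node3 expect JournalNode, DataNode (plus QuorumPeerMain)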
Open a browser and check the NameNode status.
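With the http addresses configured above, the NameNode web UIs are at http://node1:50070 and http://node2:50070; after the manual transition, nn1 should report "active" and nn2 "standby".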


5.2 Configure HDFS Automatic Failover
Add to hdfs-site.xml:
<property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
</property>
Add to core-site.xml:
<property>
    <name>ha.zookeeper.quorum</name>
    <value>node1:2181,node2:2181,node3:2181</value>
</property>
5.2.1 Startup
(1) Stop all HDFS services:
    sbin/stop-dfs.sh
(2) Start the ZooKeeper cluster:
    bin/zkServer.sh start
(3) Initialize the HA state in ZooKeeper:
    bin/hdfs zkfc -formatZK
(4) Start the HDFS services:
    sbin/start-dfs.sh
(5) On each NameNode node, start the DFSZKFailoverController; the NameNode on whichever machine starts it first becomes the Active NameNode:
    sbin/hadoop-daemon.sh start zkfc
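To confirm that the HA state was written to ZooKeeper, you can look at the znode created for the nameservice (a sketch; run zkCli.sh from the ZooKeeper installation directory):

bin/zkCli.sh -server node1:2181
# inside the ZooKeeper shell:
ls /hadoop-ha/hadoopCluster
# should list the lock/breadcrumb nodes maintained by the failover controllers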
5.2.2 Verification
(1) Kill the Active NameNode process:
    kill -9 <namenode-pid>
(2) Disconnect the Active NameNode machine from the network:
    service network stop
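A sketch of the kill test, assuming nn1 on node1 is currently active:

# on node1, find and kill the NameNode process
jps | grep NameNode
kill -9 <namenode-pid>                      # use the pid printed by jps
# then confirm that nn2 has taken over
bin/hdfs haadmin -getServiceState nn2       # should print "active"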
If nn2 does not become active after nn1 is killed, possible causes are:
(1) Passwordless SSH login is not configured correctly.
(2) The fuser program cannot be found, so fencing cannot be carried out; see the referenced blog post and the fix sketched below.
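A likely fix for cause (2), assuming CentOS/RHEL nodes: sshfence invokes fuser, which ships in the psmisc package, so install it on every NameNode host:

yum install -y psmisc
# on Debian/Ubuntu: apt-get install -y psmisc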
5.3 YARN HA Configuration
yarn-site.xml
<?xml version="1.0"?>
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- Declare the two ResourceManagers -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>cluster-yarn1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>node1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>node3</value>
    </property>
    <!-- ZooKeeper cluster address -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>node1:2181,node2:2181,node3:2181</value>
    </property>
    <!-- Enable automatic recovery -->
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
    <!-- Store the ResourceManager state in the ZooKeeper cluster -->
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
</configuration>
5.3.1 Start HDFS
(1) On each JournalNode node, start the journalnode service:
    sbin/hadoop-daemon.sh start journalnode
(2) On [nn1], format the NameNode and start it:
    bin/hdfs namenode -format
    sbin/hadoop-daemon.sh start namenode
(3) On [nn2], synchronize nn1's metadata:
    bin/hdfs namenode -bootstrapStandby
(4) Start [nn2]:
    sbin/hadoop-daemon.sh start namenode
(5) Start all DataNodes:
    sbin/hadoop-daemons.sh start datanode
(6) Switch [nn1] to Active:
    bin/hdfs haadmin -transitionToActive nn1
5.3.2 Start YARN
(1) On node1, run:
    sbin/start-yarn.sh
(2) On node3, run:
    sbin/yarn-daemon.sh start resourcemanager
(3) Check the service state:
    bin/yarn rmadmin -getServiceState rm1
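To verify ResourceManager failover as well, a sketch that mirrors the HDFS test (rm1/rm2 are the ids declared in yarn-site.xml):

bin/yarn rmadmin -getServiceState rm1       # should print "active"
bin/yarn rmadmin -getServiceState rm2       # should print "standby"
# kill the active ResourceManager on node1; rm2 should then become active
jps | grep ResourceManager
kill -9 <resourcemanager-pid>
bin/yarn rmadmin -getServiceState rm2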
