I. Passwordless SSH login between Linux servers
1. Suppose the three servers have the IPs 10.9.1.101, 10.9.1.102 and 10.9.1.103. Edit the hosts file on all three machines (vi /etc/hosts) and add:

10.9.1.101 node101
10.9.1.102 node102
10.9.1.103 node103

2. On 101, generate a public/private key pair:

ssh-keygen -t rsa

- When prompted for the storage location, press Enter to accept the default (under the home directory, in ~/.ssh).
- When prompted for a passphrase and its confirmation, press Enter to leave it empty, so that ssh access needs no password.
3. Copy the id_rsa.pub generated on 101 to the same location on 102. Since I work as the hadoop user on 101, the key files end up under /home/hadoop/.ssh, so use the hadoop user on 102 as well: copy /home/hadoop/.ssh/id_rsa.pub from 101 into /home/hadoop on 102.

4. Create the .ssh directory on 102:
- Check whether /home/hadoop on 102 already contains a .ssh folder (ls does not show it; try cd .ssh directly). If it does not exist, create it with permission 700 (mkdir -m 700 .ssh).
- Copy id_rsa.pub into the authorized_keys file inside .ssh (cp id_rsa.pub .ssh/authorized_keys)
- Set the permission of authorized_keys to 644 (chmod 644 .ssh/authorized_keys)
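Steps 3 and 4 can be sketched as a small script. This is only a sketch: it uses a temporary directory in place of /home/hadoop and a placeholder key string so it can run safely anywhere; on a real node you would operate on the hadoop user's home directory with the actual id_rsa.pub.

```shell
# Sketch of step 4, using a temp dir instead of /home/hadoop (assumes GNU
# coreutils). PUBKEY is a placeholder standing in for the real id_rsa.pub.
HOME_DIR=$(mktemp -d)
PUBKEY='ssh-rsa AAAAB3placeholder hadoop@node101'

mkdir -m 700 "$HOME_DIR/.ssh"                                # .ssh must be 700
printf '%s\n' "$PUBKEY" >> "$HOME_DIR/.ssh/authorized_keys"  # add the key
chmod 644 "$HOME_DIR/.ssh/authorized_keys"                   # must not be group-writable

stat -c '%a' "$HOME_DIR/.ssh"                  # prints 700
stat -c '%a' "$HOME_DIR/.ssh/authorized_keys"  # prints 644
```

The permissions matter: sshd refuses keys in an authorized_keys file (or .ssh directory) that is writable by group or others, which is a common cause of "it still asks for a password".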
5. Test ssh access from 101 (no password prompt should appear): ssh node102
6. Deploying across more machines. The steps above only set up passwordless login from 101 to 102. To also allow passwordless login from 102 to 103, repeat step 2 on 102, then append the contents of the generated id_rsa.pub to the end of the authorized_keys file on 103.

II. Hadoop cluster setup (HDFS)

1. Download the Hadoop binary distribution (note: the -src tarball contains only source code, not the runnable binaries):

wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-3.0.0/hadoop-3.0.0.tar.gz

2. Extract it:

tar zxvf hadoop-3.0.0.tar.gz -C /home/hadoop/

3. Configure the Hadoop environment variables: vi /etc/profile (on all three machines) and add the following:

#Hadoop 3.0
export HADOOP_PREFIX=/home/hadoop/hadoop-3.0.0
export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export HADOOP_INSTALL=$HADOOP_PREFIX
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_PREFIX/lib/native
export HADOOP_LIBEXEC_DIR=$HADOOP_PREFIX/libexec
export JAVA_LIBRARY_PATH=$HADOOP_PREFIX/lib/native:$JAVA_LIBRARY_PATH
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop

Then reload the profile: source /etc/profile

4. Edit the configuration files. vi /etc/hosts (on all three machines) and add:

10.9.1.101 node101
10.9.1.102 node102
10.9.1.103 node103

vi /home/hadoop/hadoop-3.0.0/etc/hadoop/core-site.xml (on all three machines):

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://node101:9000</value>
<description>The HDFS URI: filesystem://namenode-host:port</description>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/hadoop</value>
<description>Base directory for Hadoop's temporary files on the local filesystem</description>
</property>
<property>
<name>fs.checkpoint.period</name>
<value>3600</value>
<description>Maximum interval, in seconds, between checkpoints of the edit log</description>
</property>
</configuration>

vi /home/hadoop/hadoop-3.0.0/etc/hadoop/hdfs-site.xml (on all three machines):

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
<description>Number of replicas; the default is 3, and it should not exceed the number of datanodes</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hadoop/hadoop-3.0.0/hdfs/name</value>
<description>Where the namenode stores the HDFS namespace metadata on the local filesystem</description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hadoop/hadoop-3.0.0/hdfs/data</value>
<description>Where the datanode physically stores HDFS block data on the local filesystem</description>
</property>
</configuration>
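As a quick sanity check on the XML above, a property value can be pulled out with sed. The sketch below parses a minimal copy of core-site.xml written to a temp file, so it runs anywhere; on a live cluster the authoritative way is `bin/hdfs getconf -confKey fs.defaultFS`.

```shell
# Sketch: extract fs.defaultFS from a core-site.xml-style file with sed
# (assumes the one-tag-per-line layout used in the files above).
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node101:9000</value>
  </property>
</configuration>
EOF
# Find the fs.defaultFS <name> line, advance to the next line, strip the tags
sed -n '/<name>fs.defaultFS<\/name>/{n;s/.*<value>\(.*\)<\/value>.*/\1/p;}' "$CONF"
# prints hdfs://node101:9000
```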
vi /home/hadoop/hadoop-3.0.0/etc/hadoop/hadoop-env.sh (on all three machines), and set JAVA_HOME (around line 54):

export JAVA_HOME=/usr/java/jdk1.8.0_11

vi /home/hadoop/hadoop-3.0.0/etc/hadoop/workers (on the namenode machine; in Hadoop 3.x this file is named workers):

node101
node102
node103

Note: node101, node102 and node103 are the hostnames assigned to the three servers.

5. Format the namenode:

/home/hadoop/hadoop-3.0.0/bin/hdfs namenode -format

6. Start HDFS:

/home/hadoop/hadoop-3.0.0/sbin/start-dfs.sh

7. Check that the HDFS cluster is up with jps. On the namenode machine you should see both the NameNode and DataNode processes; on the pure datanode machines you will only see the DataNode process. In my setup the namenode runs on node101 and datanodes run on 101 through 103.
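The jps check in step 7 is easy to script. The sketch below runs against a captured sample of jps output (the PIDs are made up) rather than calling jps itself, just to show the pattern; on a real node replace the sample with the actual output of `jps`.

```shell
# Sketch: verify expected daemons appear in a jps listing. JPS_OUT is a
# fabricated sample; on a live node use: JPS_OUT=$(jps)
JPS_OUT='12001 NameNode
12045 DataNode
12113 SecondaryNameNode
13000 Jps'

for daemon in NameNode DataNode; do
  # grep -w matches whole words, so "SecondaryNameNode" does not count as "NameNode"
  if printf '%s\n' "$JPS_OUT" | grep -qw "$daemon"; then
    echo "$daemon: running"
  else
    echo "$daemon: MISSING - check the logs/ directory under the Hadoop root"
  fi
done
```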
Note: if startup fails, check the error logs in the logs directory under the Hadoop installation root. (Since I only need HDFS for file storage, this is all the configuration for now; MapReduce and the other features need additional configuration, which I will have to write up another time.)