黑马程序员技术交流社区 (Itheima Programmer Technical Community)

Title: [Shanghai Campus] [Hadoop Cluster] Pitfalls hit while building the cluster: the Hadoop part

Author: 不二晨    Time: 2018-12-17 10:20
Title: [Shanghai Campus] [Hadoop Cluster] Pitfalls hit while building the cluster: the Hadoop part
This post was last edited by 不二晨 at 2018-12-18 17:57

The previous article explained that the first step in setting up the cluster is enabling SSH public-key authentication; only with it enabled can you log in with a key. After generating the keys, however, the key files still need the right permissions: the generated authorized_keys file often does not have the ownership/permissions that sshd expects, so public-key authentication cannot read it, and the file's ownership and permissions have to be fixed.
       With passwordless SSH between the nodes in place, that is only step one; step two is installing and configuring Hadoop and its runtime environment. Hadoop has its own set of startup problems, most of them caused by the cluster configuration. The rest of this post walks through the main failure scenarios and how to resolve them.
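For reference, the authorized_keys fix mentioned above usually comes down to ownership and permissions. A minimal sketch, assuming the daemons run as a user named hadoop and stock OpenSSH settings (sshd with StrictModes refuses keys kept in group- or world-writable locations):

chown hadoop:hadoop ~/.ssh/authorized_keys   # the file must belong to the login user
chmod 700 ~/.ssh                             # sshd rejects a group/world-writable .ssh directory
chmod 600 ~/.ssh/authorized_keys             # likewise for the key file itself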

The Hadoop cluster used for this setup looks like this:
    Master node:
        OS: CentOS Linux release 7.3.1611 (Core)
        Hostname: hadoop-master
        IP: 192.168.1.130
        Hadoop: hadoop 2.8.4
        Java: 1.7.0
        SSH: OpenSSH_7.4p1, OpenSSL 1.0.2k-fips
    Slave node:
        OS: Ubuntu 15.04
        Hostname: hadoop-slave1
        IP: 192.168.1.128
        Hadoop: hadoop 2.8.4
        Java: 1.7.0
        SSH: OpenSSH_6.7p1 Ubuntu-5ubuntu1, OpenSSL 1.0.1f

Hadoop official download: http://hadoop.apache.org/
Hadoop official documentation: http://hadoop.apache.org/docs/r2 ... n/ClusterSetup.html

1. Supporting environment for the cluster

      The machines and the Java installation needed for a Hadoop cluster are not covered again here; this section focuses on the main problems hit during setup and the points that need attention.
     During setup you need to configure each node's hostname and the matching hosts entries. In the Hadoop configuration files it is best to use hostnames rather than IPs, because an IP change easily breaks the cluster. So the first thing to do is change the hostnames.

1.1 Changing the hostname and hosts

    1.1.1 Modifying the master

       Change the master's hostname to hadoop-master and the slave's to hadoop-slave1, and configure the corresponding IPs. On the master the configuration looks like this:

-bash-4.2$ sudo vim /etc/hostname
[sudo] password for hadoop:

hadoop-master

-bash-4.2$ vim /etc/hosts

127.0.0.1   localhost
#localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost
#localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.128 hadoop-slave1
192.168.1.130 hadoop-master
       Note: the hosts file on the master must map the hostnames to LAN IPs, and the localhost.localdomain entries should be commented out. The change only takes effect after a reboot. After rebooting, also make sure the master's firewall is off, otherwise starting Hadoop produces errors like: org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop-master/192.168.1.130:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=100, sleepTime=10000 MILLISECONDS). This means the master's Hadoop did not start properly, or port 9000 is not reachable from outside. Check the Hadoop logs to narrow it down; if startup itself reports no errors, the usual cause is that the address behind port 9000 resolved to 127.0.0.1 instead of a LAN-reachable address.
       Note: if the settings above are correct and the slave can reach port 9000 on the master yet the error persists, suspect a formatting problem: stop Hadoop and re-run the NameNode format.
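A minimal troubleshooting sketch for that retry error, assuming CentOS 7 with firewalld on the master (these commands are not from the original post):

# on hadoop-master: is the firewall up, and is the NameNode listening on a LAN-reachable address?
systemctl status firewalld
sudo systemctl stop firewalld && sudo systemctl disable firewalld   # or open the ports instead, see section 2.3
ss -lntp | grep 9000    # should show 192.168.1.130:9000 or 0.0.0.0:9000, not 127.0.0.1:9000

# on hadoop-slave1: can the NameNode RPC port be reached at all?
nc -zv hadoop-master 9000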

   1.1.2 Modifying the slave

      On the slave the configuration looks like this:

root@hadoop-slave1:/home/hadoop# sudo vim /etc/hostname

hadoop-slave1
root@hadoop-slave1:/home/hadoop# sudo vim /etc/hosts

127.0.0.1   localhost hadoop-slave1 #localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost hadoop-slave1  #localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.130 hadoop-master
#127.0.1.1  localhost.localdomain
        Note: be sure to comment out the 127.0.1.1 entry. According to what I found online, it reportedly creates a loopback on the local machine that prevents Hadoop from reaching the master.
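A quick sanity check (not in the original post) that the hostname and hosts changes took effect on both nodes:

# on hadoop-slave1: both should resolve hadoop-master to its LAN address 192.168.1.130
getent hosts hadoop-master
ping -c 3 hadoop-master
# on hadoop-master: hadoop-slave1 should resolve to the slave's LAN address as well
getent hosts hadoop-slave1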

2. Master and slave Hadoop configuration

       The configuration above is required before Hadoop can run; sharpening the axe does not slow down the woodcutting, and Hadoop only behaves once the environment is in order. You also need to install the JDK and set up its environment variables. After that comes installing Hadoop itself.
      Installing Hadoop itself is very simple: download a release tarball and add the Hadoop environment variables. The latest releases are at: https://hadoop.apache.org/releases.html
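Downloading and unpacking might look like the sketch below; the mirror URL and the /Library/hadoop/hadoop284 install path are assumptions that match the environment variables in the next section, so adjust to taste:

wget https://archive.apache.org/dist/hadoop/common/hadoop-2.8.4/hadoop-2.8.4.tar.gz
sudo mkdir -p /Library/hadoop
sudo tar -xzf hadoop-2.8.4.tar.gz -C /Library/hadoop
sudo mv /Library/hadoop/hadoop-2.8.4 /Library/hadoop/hadoop284
sudo chown -R hadoop:hadoop /Library/hadoop/hadoop284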

2.1 Hadoop environment variables

       After unpacking, the Hadoop environment needs to be configured; one possible set of environment variables is shown below.

export HADOOP_HOME=/Library/hadoop/hadoop284
export PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_ROOT_LOGGER=DEBUG,console   # prints DEBUG logs for every hadoop/hdfs command; handy while troubleshooting, remove once the cluster is stable
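Assuming the exports above live in ~/.bashrc (or /etc/profile), reload the shell environment and confirm the PATH is picked up:

source ~/.bashrc    # or: source /etc/profile, depending on where the exports were added
hadoop version      # should report Hadoop 2.8.4 if the variables are correct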
2.2 Hadoop cluster configuration

       The basic environment for running the Hadoop cluster is now complete. Once the master is set up, copy the Hadoop installation to the slave and give it the same environment variables as the master (see the sketch below). The following is one working cluster configuration, offered as a reference.
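One way to do the copy is scp (or rsync) from the master; the etc/hadoop/slaves entry is what lets start-dfs.sh/start-yarn.sh launch the DataNode and NodeManager on the slave. A sketch, assuming the same paths and a hadoop user on both machines:

# on hadoop-master
echo hadoop-slave1 > /Library/hadoop/hadoop284/etc/hadoop/slaves
ssh hadoop@hadoop-slave1 'sudo mkdir -p /Library/hadoop && sudo chown hadoop:hadoop /Library/hadoop'
scp -r /Library/hadoop/hadoop284 hadoop@hadoop-slave1:/Library/hadoop/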

2.2.1 core-site.xml

<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://hadoop-master:9000/</value>
        </property>
        <property>
                <name>dfs.permissions</name>
                <value>false</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/var/log/hadoop/tmp</value>
                <description>A base for other temporary directories</description>
        </property>
</configuration>
     The hadoop.tmp.dir path in this configuration must be created by hand, i.e. /var/log/hadoop has to exist so that startup does not run into trouble (a sketch for creating it follows the table). The options are explained below; the read/write buffer size can also be set here:

Parameter | Value | Notes
fs.defaultFS | NameNode URI | hdfs://host:port/
io.file.buffer.size | 131072 | Size of read/write buffer used in SequenceFiles.
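Creating the hadoop.tmp.dir location, on both nodes, is just (assuming the daemons run as the hadoop user):

sudo mkdir -p /var/log/hadoop/tmp
sudo chown -R hadoop:hadoop /var/log/hadoop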
2.2.2 hdfs-site.xml

<configuration>
        <property>
                <name>dfs.replication</name>
                <value>1</value>
        </property>
        <property>
                <name>dfs.hosts.exclude</name>
                <value>/Library/hadoop/hadoop284/etc/hadoop/hdfs_exclude.txt</value>
                <description>DFS exclude</description>
        </property>
        <property>
                <name>dfs.data.dir</name>
                <value>/Library/hadoop/hadoop284/hdfs/data</value>
                <final>true</final>
        </property>
        <property>
                <name>dfs.name.dir</name>
                <value>/Library/hadoop/hadoop284/hdfs/name</value>
                <final>true</final>
        </property>
        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>hadoop-master:9001</value>
        </property>
        <property>
                <name>dfs.webhdfs.enabled</name>
                <value>true</value>
        </property>
        <property>        
                <name>dfs.permissions</name>
                <value>false</value>
        </property>
</configuration>
    The paths behind dfs.name.dir and dfs.data.dir must be created in advance (see the sketch after the table). Explanation of the options, plus other settings that can be configured:

Parameter | Value | Notes
dfs.namenode.name.dir | Path on the local filesystem where the NameNode stores the namespace and transactions logs persistently. | If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.
dfs.hosts / dfs.hosts.exclude | List of permitted/excluded DataNodes. | If necessary, use these files to control the list of allowable datanodes.
dfs.blocksize | 268435456 | HDFS blocksize of 256MB for large file-systems.
dfs.namenode.handler.count | 100 | More NameNode server threads to handle RPCs from large number of DataNodes.
dfs.datanode.data.dir | Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks. | If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices.
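Likewise, the dfs.name.dir and dfs.data.dir paths and the exclude file referenced above should exist before the first format/start; a sketch (the name dir matters on the master, the data dir on the slave, though creating both everywhere does no harm):

mkdir -p /Library/hadoop/hadoop284/hdfs/name /Library/hadoop/hadoop284/hdfs/data
touch /Library/hadoop/hadoop284/etc/hadoop/hdfs_exclude.txt    # an empty exclude list is fine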
2.2.3 yarn-site.xml

<configuration>
        <property>
                <name>yarn.resourcemanager.address</name>
                <value>hadoop-master:18040</value>
        </property>
        <property>
                <name>yarn.resourcemanager.scheduler.address</name>
                <value>hadoop-master:18030</value>
        </property>
        <property>
                <name>yarn.resourcemanager.webapp.address</name>
                <value>hadoop-master:18088</value>
        </property>
        <property>
                <name>yarn.resourcemanager.resource-tracker.address</name>
                <value>hadoop-master:18025</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <property>
                <name>yarn.nodemanager.auxservices.mapreduce.shuffle.class</name>
                <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
</configuration>
Explanation of the options, plus other settings that can be configured:

Parameter | Value | Notes
yarn.acl.enable | true / false | Enable ACLs? Defaults to false.
yarn.admin.acl | Admin ACL | ACL to set admins on the cluster. ACLs are of for comma-separated-users space comma-separated-groups. Defaults to special value of * which means anyone. Special value of just space means no one has access.
yarn.log-aggregation-enable | false | Configuration to enable or disable log aggregation
yarn.resourcemanager.address | ResourceManager host:port for clients to submit jobs. | host:port If set, overrides the hostname set in yarn.resourcemanager.hostname.
yarn.resourcemanager.scheduler.address | ResourceManager host:port for ApplicationMasters to talk to Scheduler to obtain resources. | host:port If set, overrides the hostname set in yarn.resourcemanager.hostname.
yarn.resourcemanager.resource-tracker.address | ResourceManager host:port for NodeManagers. | host:port If set, overrides the hostname set in yarn.resourcemanager.hostname.
yarn.resourcemanager.admin.address | ResourceManager host:port for administrative commands. | host:port If set, overrides the hostname set in yarn.resourcemanager.hostname.
yarn.resourcemanager.webapp.address | ResourceManager web-ui host:port. | host:port If set, overrides the hostname set in yarn.resourcemanager.hostname.
yarn.resourcemanager.hostname | ResourceManager host. | host Single hostname that can be set in place of setting all yarn.resourcemanager*address resources. Results in default ports for ResourceManager components.
yarn.resourcemanager.scheduler.class | ResourceManager Scheduler class. | CapacityScheduler (recommended), FairScheduler (also recommended), or FifoScheduler
yarn.scheduler.minimum-allocation-mb | Minimum limit of memory to allocate to each container request at the Resource Manager. | In MBs
yarn.scheduler.maximum-allocation-mb | Maximum limit of memory to allocate to each container request at the Resource Manager. | In MBs
yarn.resourcemanager.nodes.include-path / yarn.resourcemanager.nodes.exclude-path | List of permitted/excluded NodeManagers. | If necessary, use these files to control the list of allowable NodeManagers.
yarn.nodemanager.resource.memory-mb | Resource i.e. available physical memory, in MB, for given NodeManager | Defines total available resources on the NodeManager to be made available to running containers
yarn.nodemanager.vmem-pmem-ratio | Maximum ratio by which virtual memory usage of tasks may exceed physical memory | The virtual memory usage of each task may exceed its physical memory limit by this ratio. The total amount of virtual memory used by tasks on the NodeManager may exceed its physical memory usage by this ratio.
yarn.nodemanager.local-dirs | Comma-separated list of paths on the local filesystem where intermediate data is written. | Multiple paths help spread disk i/o.
yarn.nodemanager.log-dirs | Comma-separated list of paths on the local filesystem where logs are written. | Multiple paths help spread disk i/o.
yarn.nodemanager.log.retain-seconds | 10800 | Default time (in seconds) to retain log files on the NodeManager. Only applicable if log-aggregation is disabled.
yarn.nodemanager.remote-app-log-dir | /logs | HDFS directory where the application logs are moved on application completion. Need to set appropriate permissions. Only applicable if log-aggregation is enabled.
yarn.nodemanager.remote-app-log-dir-suffix | logs | Suffix appended to the remote log dir. Logs will be aggregated to ${yarn.nodemanager.remote-app-log-dir}/${user}/${thisParam} Only applicable if log-aggregation is enabled.
yarn.nodemanager.aux-services | mapreduce_shuffle | Shuffle service that needs to be set for Map Reduce applications.
yarn.log-aggregation.retain-seconds | -1 | How long to keep aggregation logs before deleting them. -1 disables. Be careful, set this too small and you will spam the name node.
yarn.log-aggregation.retain-check-interval-seconds | -1 | Time between checks for aggregated log retention. If set to 0 or a negative value then the value is computed as one-tenth of the aggregated log retention time. Be careful, set this too small and you will spam the name node.
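Once the daemons are running (section 2.3 below), a quick check that the ResourceManager web address configured above is reachable (port 18088 per this yarn-site.xml):

curl -sI http://hadoop-master:18088/ | head -n 1    # any HTTP response line means the RM web UI is up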
2.2.4 mapred-site.xml

<configuration>
        <property>
                <name>mapred.job.tracker</name>
                <value>hadoop-master:9001</value>
        </property>
</configuration>
Explanation of the options, plus other settings that can be configured. (Note: mapred.job.tracker is the old Hadoop 1.x JobTracker property; the official table below instead sets mapreduce.framework.name to yarn so that MapReduce jobs run on YARN.)

Parameter | Value | Notes
mapreduce.framework.name | yarn | Execution framework set to Hadoop YARN.
mapreduce.map.memory.mb | 1536 | Larger resource limit for maps.
mapreduce.map.java.opts | -Xmx1024M | Larger heap-size for child jvms of maps.
mapreduce.reduce.memory.mb | 3072 | Larger resource limit for reduces.
mapreduce.reduce.java.opts | -Xmx2560M | Larger heap-size for child jvms of reduces.
mapreduce.task.io.sort.mb | 512 | Higher memory-limit while sorting data for efficiency.
mapreduce.task.io.sort.factor | 100 | More streams merged at once while sorting files.
mapreduce.reduce.shuffle.parallelcopies | 50 | Higher number of parallel copies run by reduces to fetch outputs from very large number of maps.
mapreduce.jobhistory.address | MapReduce JobHistory Server host:port | Default port is 10020.
mapreduce.jobhistory.webapp.address | MapReduce JobHistory Server Web UI host:port | Default port is 19888.
mapreduce.jobhistory.intermediate-done-dir | /mr-history/tmp | Directory where history files are written by MapReduce jobs.
mapreduce.jobhistory.done-dir | /mr-history/done | Directory where history files are managed by the MR JobHistory Server.
2.3 Starting and verifying Hadoop

With all of the configuration above in place, Hadoop is essentially ready. The last step is to format HDFS, after which the cluster can be started:

-bash-4.2$ hadoop namenode -format
-bash-4.2$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hadoop-master]
hadoop-master: starting namenode, logging to /Library/hadoop/hadoop284/logs/hadoop-hadoop-namenode-hadoop-master.out
hadoop-slave1: starting datanode, logging to /Library/hadoop/hadoop284/logs/hadoop-hadoop-datanode-hadoop-slave1.out
Starting secondary namenodes [hadoop-master]
hadoop-master: starting secondarynamenode, logging to /Library/hadoop/hadoop284/logs/hadoop-hadoop-secondarynamenode-hadoop-master.out
starting yarn daemons
starting resourcemanager, logging to /Library/hadoop/hadoop284/logs/yarn-hadoop-resourcemanager-hadoop-master.out
hadoop-slave1: starting nodemanager, logging to /Library/hadoop/hadoop284/logs/yarn-hadoop-nodemanager-hadoop-slave1.out
        Note: when starting the cluster, make sure the master's firewall is off, or that the ports used for master/slave communication (port 9000, for example) are open to the outside; otherwise the slaves cannot reach the master. This follows from how Hadoop works; the details of Hadoop's runtime behaviour will be discussed another time.
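If you would rather keep firewalld running on the master than disable it, opening the ports used by this configuration is an alternative (a sketch for CentOS 7; the list follows this post's settings and is not exhaustive). Running jps on each node is also a quick way to see which daemons came up:

# on hadoop-master (CentOS 7 / firewalld)
sudo firewall-cmd --permanent --add-port=9000/tcp                                                                    # fs.defaultFS (NameNode RPC)
sudo firewall-cmd --permanent --add-port=18025/tcp --add-port=18030/tcp --add-port=18040/tcp --add-port=18088/tcp   # YARN ResourceManager ports from yarn-site.xml
sudo firewall-cmd --reload

# on each node: expect NameNode, SecondaryNameNode and ResourceManager on the master,
# DataNode and NodeManager on the slave
jps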

        After a successful start, the cluster can be verified with the following example:

-bash-4.2$ hadoop jar /Library/hadoop/hadoop284/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.4.jar pi 10 10

#If the cluster is healthy, the job prints the final result; if not, it throws an exception. Alternatively, check the state of the master and slaves with the command below (the DEBUG lines in its output come from the HADOOP_ROOT_LOGGER=DEBUG,console setting above)

-bash-4.2$ hdfs dfsadmin -report
18/12/09 12:21:20 DEBUG util.Shell: setsid exited with exit code 0
18/12/09 12:21:21 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(value=[Rate of successful kerberos logins and latency (milliseconds)], about=, valueName=Time, type=DEFAULT, always=false, sampleName=Ops)
18/12/09 12:21:21 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(value=[Rate of failed kerberos logins and latency (milliseconds)], about=, valueName=Time, type=DEFAULT, always=false, sampleName=Ops)
18/12/09 12:21:21 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(value=[GetGroups], about=, valueName=Time, type=DEFAULT, always=false, sampleName=Ops)
18/12/09 12:21:21 DEBUG lib.MutableMetricsFactory: field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(value=[Renewal failures since startup], about=, valueName=Time, type=DEFAULT, always=false, sampleName=Ops)
18/12/09 12:21:21 DEBUG lib.MutableMetricsFactory: field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(value=[Renewal failures since last successful login], about=, valueName=Time, type=DEFAULT, always=false, sampleName=Ops)
18/12/09 12:21:21 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
18/12/09 12:21:21 DEBUG util.KerberosName: Kerberos krb5 configuration not found, setting default realm to empty
18/12/09 12:21:21 DEBUG security.Groups:  Creating new Groups object
18/12/09 12:21:21 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
18/12/09 12:21:21 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
18/12/09 12:21:21 DEBUG security.JniBasedUnixGroupsMapping: Using JniBasedUnixGroupsMapping for Group resolution
18/12/09 12:21:21 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
18/12/09 12:21:21 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
18/12/09 12:21:21 DEBUG core.Tracer: sampler.classes = ; loaded no samplers
18/12/09 12:21:21 DEBUG core.Tracer: span.receiver.classes = ; loaded no span receivers
18/12/09 12:21:21 DEBUG security.UserGroupInformation: hadoop login
18/12/09 12:21:21 DEBUG security.UserGroupInformation: hadoop login commit
18/12/09 12:21:21 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: hadoop
18/12/09 12:21:21 DEBUG security.UserGroupInformation: Using user: "UnixPrincipal: hadoop" with name hadoop
18/12/09 12:21:21 DEBUG security.UserGroupInformation: User entry: "hadoop"
18/12/09 12:21:21 DEBUG security.UserGroupInformation: Assuming keytab is managed externally since logged in from subject.
18/12/09 12:21:21 DEBUG security.UserGroupInformation: UGI loginUser:hadoop (auth:SIMPLE)
18/12/09 12:21:21 DEBUG core.Tracer: sampler.classes = ; loaded no samplers
18/12/09 12:21:21 DEBUG core.Tracer: span.receiver.classes = ; loaded no span receivers
18/12/09 12:21:22 DEBUG impl.DfsClientConf: dfs.client.use.legacy.blockreader.local = false
18/12/09 12:21:22 DEBUG impl.DfsClientConf: dfs.client.read.shortcircuit = false
18/12/09 12:21:22 DEBUG impl.DfsClientConf: dfs.client.domain.socket.data.traffic = false
18/12/09 12:21:22 DEBUG impl.DfsClientConf: dfs.domain.socket.path =
18/12/09 12:21:22 DEBUG hdfs.DFSClient: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
18/12/09 12:21:22 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
18/12/09 12:21:22 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcProtobufRequest, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@55cc2562
18/12/09 12:21:22 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@7ee77dc0
18/12/09 12:21:23 DEBUG unix.DomainSocketWatcher: org.apache.hadoop.net.unix.DomainSocketWatcher$2@732e9042: starting with interruptCheckPeriodMs = 60000
18/12/09 12:21:23 DEBUG util.PerformanceAdvisory: Both short-circuit local reads and UNIX domain socket are disabled.
18/12/09 12:21:23 DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
18/12/09 12:21:23 DEBUG ipc.Client: The ping interval is 60000 ms.
18/12/09 12:21:23 DEBUG ipc.Client: Connecting to hadoop-master/192.168.1.130:9000
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop: starting, having connections 1
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop sending #0 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFsStats
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop got value #0
18/12/09 12:21:23 DEBUG ipc.ProtobufRpcEngine: Call: getFsStats took 91ms
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop sending #1 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFsStats
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop got value #1
18/12/09 12:21:23 DEBUG ipc.ProtobufRpcEngine: Call: getFsStats took 3ms
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop sending #2 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFsStats
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop got value #2
18/12/09 12:21:23 DEBUG ipc.ProtobufRpcEngine: Call: getFsStats took 1ms
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop sending #3 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFsStats
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop got value #3
18/12/09 12:21:23 DEBUG ipc.ProtobufRpcEngine: Call: getFsStats took 3ms
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop sending #4 org.apache.hadoop.hdfs.protocol.ClientProtocol.setSafeMode
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop got value #4
18/12/09 12:21:23 DEBUG ipc.ProtobufRpcEngine: Call: setSafeMode took 9ms
Configured Capacity: 19945680896 (18.58 GB)
Present Capacity: 10668183552 (9.94 GB)
DFS Remaining: 10668146688 (9.94 GB)
DFS Used: 36864 (36 KB)
DFS Used%: 0.00%
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop sending #5 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFsStats
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop got value #5
18/12/09 12:21:23 DEBUG ipc.ProtobufRpcEngine: Call: getFsStats took 2ms
Under replicated blocks: 0
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop sending #6 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFsStats
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop got value #6
18/12/09 12:21:23 DEBUG ipc.ProtobufRpcEngine: Call: getFsStats took 1ms
Blocks with corrupt replicas: 0
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop sending #7 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFsStats
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop got value #7
18/12/09 12:21:23 DEBUG ipc.ProtobufRpcEngine: Call: getFsStats took 1ms
Missing blocks: 0
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop sending #8 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFsStats
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop got value #8
18/12/09 12:21:23 DEBUG ipc.ProtobufRpcEngine: Call: getFsStats took 1ms
Missing blocks (with replication factor 1): 0
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop sending #9 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFsStats
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop got value #9
18/12/09 12:21:23 DEBUG ipc.ProtobufRpcEngine: Call: getFsStats took 2ms
Pending deletion blocks: 0

-------------------------------------------------
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop sending #10 org.apache.hadoop.hdfs.protocol.ClientProtocol.getDatanodeReport
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop got value #10
18/12/09 12:21:23 DEBUG ipc.ProtobufRpcEngine: Call: getDatanodeReport took 9ms
Live datanodes (1):

Name: 192.168.1.128:50010 (hadoop-slave1)
Hostname: hadoop-slave1
Decommission Status : Normal
Configured Capacity: 19945680896 (18.58 GB)
DFS Used: 36864 (36 KB)
Non DFS Used: 8240717824 (7.67 GB)
DFS Remaining: 10668146688 (9.94 GB)
DFS Used%: 0.00%
DFS Remaining%: 53.49%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Dec 09 12:21:20 CST 2018


18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop sending #11 org.apache.hadoop.hdfs.protocol.ClientProtocol.getDatanodeReport
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop got value #11
18/12/09 12:21:23 DEBUG ipc.ProtobufRpcEngine: Call: getDatanodeReport took 2ms
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop sending #12 org.apache.hadoop.hdfs.protocol.ClientProtocol.getDatanodeReport
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop got value #12
18/12/09 12:21:23 DEBUG ipc.ProtobufRpcEngine: Call: getDatanodeReport took 2ms
18/12/09 12:21:23 DEBUG tools.DFSAdmin: Exception encountered:
18/12/09 12:21:23 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@7ee77dc0
18/12/09 12:21:23 DEBUG ipc.Client: removing client from cache: org.apache.hadoop.ipc.Client@7ee77dc0
18/12/09 12:21:23 DEBUG ipc.Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@7ee77dc0
18/12/09 12:21:23 DEBUG ipc.Client: Stopping client
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop: closed
18/12/09 12:21:23 DEBUG ipc.Client: IPC Client (1338614742) connection to hadoop-master/192.168.1.130:9000 from hadoop: stopped, remaining connections 0
18/12/09 12:21:23 DEBUG util.ShutdownHookManager: ShutdownHookManger complete shutdown.
If this runs successfully, the cluster is up and working. If you run into problems, feel free to leave a comment and discuss; Hadoop's runtime internals will be covered in later posts.
---------------------
[Repost] Shared for learning only; will be removed on request.
Author: zhang_xinxiu
Original: https://blog.csdn.net/zhang_xinxiu/article/details/84894692



Author: 不二晨    Time: 2018-12-18 17:57
Nice
Author: 梦缠绕的时候    Time: 2018-12-20 16:43




