
Installing and Configuring Kerberos for a Hadoop Cluster (Part 2): Enabling Kerberos Authentication on the Hadoop Cluster

Contents
Preface
I. Set up the SASL certificate
II. Modify the cluster configuration files
  1. Settings to add for HDFS
  2. Settings to add for YARN
  3. Settings to add for Hive
  4. Settings to add for HBase
III. Common Kerberos commands
IV. Quick test
V. Troubleshooting
1、Caused by: java.io.IOException: Failed on local exception: java.io.IOException: Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.; Host Details : local host is: "v0106-c0a8003a.4eb06.c.local/192.168.0.58"; destination host is: "v0106-c0a80019.4eb06.c.local":8020;
2、bash: keytools: command not found
3、java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "v0106-c0a8003a.4eb06.c.local/192.168.0.58"; destination host is: "v0106-c0a80019.4eb06.c.local":8020;
4、retry.RetryInvocationHandler: Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over /192.168.0.92:8020 after 4 fail over attempts.
5、Caused by: GSSException: No valid credentials provided (Mechanism level: Ticket expired (32) - PROCESS_TGS)。Caused by: KrbException: Identifier doesn't match expected value (906)。Failed on local exception: java.io.IOException: Couldn't setup connection for hdfs/v0106-c0a8005c.4eb06.c.local@HADOOP.COM to /192.168.0.25:8485; Host Details : local host is: "v0106-c0a8005c.4eb06.c.local/192.168.0.92"; destination host is: "v0106-c0a80019.4eb06.c.local":8485;
6、java.net.ConnectException: Call From xxx to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused
7、WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unable to trigger a roll of the active NN
8、Incompatible clusterID for journal Storage Directory /export/data/hadoop/journal/yinheforyanghang: NameNode has clusterId 'CID-7c4fa72e-afa0-4334-bafd-68af133d4ffe' but storage has clusterId 'CID-52b9fb86-24b4-43a3-96d3-faa584e96d69'
9、There appears to be a gap in the edit log. We expected txid 1, but got txid 2941500.
10、Caused by: ExitCodeException exitCode=24: File /home/hadoop/core/hadoop-2.7.6/etc/hadoop/container-executor.cfg must be owned by root, but is owned by 1000
11、ulimit -a for user hdfs
12、Caused by: MetaException(message:Version information not found in metastore. )
13、Caused by: java.lang.IllegalStateException: Cannot skip to less than the current value (=492685), where newValue=16385。
14、This node has namespaceId '279103593 and clusterId 'xxx' but the requesting node expected '279103593' and 'xxx'。
Preface

The previous post covered installing the master and slave KDCs; this one describes how to integrate Kerberos with the Hadoop cluster. Assumed layout:
Hadoop configuration directory: /export/common/hadoop/conf/
Hadoop installation directory: /export/hadoop/
 
I. Set up the SASL certificate

With Kerberos authentication enabled, the web UIs are accessed over HTTPS, so a certificate is needed. On any one server, for example v0106-c0a8000e.4e462.c.local, run:
   $ openssl req -newkey rsa:2048 -keyout rsa_private.key -x509 -days 365 -out cert.crt -subj /C=CN/ST=BJ/L=BJ/O=test/OU=dev/CN=jd.com/emailAddress=test@126.com
    Copy the generated cert.crt and rsa_private.key into the Hadoop configuration directory on every server:
   $ scp cert.crt rsa_private.key 192.168.0.25:/export/common/hadoop/conf/   (repeat for every node)
    
   Run the following on every server; enter the password 123456 when prompted. (Every node needs a Java environment for keytool to work.)
   $ cd /export/common/hadoop/conf/
   $ keytool -keystore keystore -alias localhost -validity 9999 -genkey -keyalg RSA -keysize 2048 -dname "CN=v0106-c0a8000e.4e462.c.local, OU=dev, O=test, L=BJ, ST=BJ, C=CN"
   $ keytool -keystore truststore -alias CARoot -import -file cert.crt;
   $ keytool -certreq -alias localhost -keystore keystore -file cert;
   $ openssl x509 -req -CA cert.crt -CAkey rsa_private.key -in cert -out cert_signed -days 9999 -CAcreateserial;
   $ keytool -keystore keystore -alias CARoot -import -file cert.crt ;
   $ keytool -keystore keystore -alias localhost -import -file cert_signed ;
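To avoid repeating the keytool steps by hand on every machine, the whole procedure can be scripted. The sketch below is only an illustration: it assumes a hypothetical hosts.txt listing one node FQDN per line, passwordless SSH for the current user, keytool/openssl present on every node, and it uses each host's own name as the certificate CN (the original post used a single fixed CN).

   #!/usr/bin/env bash
   # Illustrative helper only: push the CA cert/key to every node and build its keystore/truststore there.
   # Assumes: hosts.txt (one FQDN per line), passwordless SSH, keytool and openssl on each node.
   set -euo pipefail

   CONF=/export/common/hadoop/conf
   PASS=123456   # same keystore/truststore password as used in ssl-server.xml / ssl-client.xml below

   for host in $(cat hosts.txt); do
     scp cert.crt rsa_private.key "${host}:${CONF}/"
     ssh "${host}" "cd ${CONF} && \
       keytool -keystore keystore -alias localhost -validity 9999 -genkey -keyalg RSA -keysize 2048 \
         -storepass ${PASS} -keypass ${PASS} \
         -dname \"CN=${host}, OU=dev, O=test, L=BJ, ST=BJ, C=CN\" && \
       keytool -keystore truststore -alias CARoot -import -file cert.crt -storepass ${PASS} -noprompt && \
       keytool -certreq -alias localhost -keystore keystore -file cert -storepass ${PASS} -keypass ${PASS} && \
       openssl x509 -req -CA cert.crt -CAkey rsa_private.key -in cert -out cert_signed -days 9999 -CAcreateserial && \
       keytool -keystore keystore -alias CARoot -import -file cert.crt -storepass ${PASS} -noprompt && \
       keytool -keystore keystore -alias localhost -import -file cert_signed -storepass ${PASS} -noprompt"
   done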
II. Modify the cluster configuration files

1. Settings to add for HDFS

core-site.xml
<property><name>hadoop.security.authorization</name><value>true</value></property>
<property><name>hadoop.security.authentication</name><value>kerberos</value></property>
hdfs-site.xml
<property><name>dfs.namenode.keytab.file</name><value>/export/common/kerberos5/hdfs.keytab</value></property>
<property><name>dfs.namenode.kerberos.principal</name><value>hdfs/_HOST@HADOOP.COM</value></property>
<property><name>dfs.namenode.kerberos.internal.spnego.principal</name><value>HTTP/_HOST@HADOOP.COM</value></property>
<property><name>dfs.web.authentication.kerberos.principal</name><value>HTTP/_HOST@HADOOP.COM</value></property>
<property><name>dfs.web.authentication.kerberos.keytab</name><value>/export/common/kerberos5/hdfs.keytab</value></property>
<property><name>dfs.datanode.keytab.file</name><value>/export/common/kerberos5/hdfs.keytab</value></property>
<property><name>dfs.datanode.kerberos.principal</name><value>hdfs/_HOST@HADOOP.COM</value></property>
<property><name>dfs.http.policy</name><value>HTTPS_ONLY</value></property>
<property><name>dfs.data.transfer.protection</name><value>integrity</value></property>
<property><name>dfs.datanode.data.dir.perm</name><value>700</value></property>
<property><name>dfs.journalnode.keytab.file</name><value>/export/common/kerberos5/hdfs.keytab</value></property>
<property><name>dfs.journalnode.kerberos.principal</name><value>hdfs/_HOST@HADOOP.COM</value></property>
<property><name>dfs.journalnode.kerberos.internal.spnego.principal</name><value>HTTP/_HOST@HADOOP.COM</value></property>
hadoop-env.sh
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=${JAVA_HOME}/lib -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=192.168.0.49:88"
ssl-server.xml (place it in the Hadoop configuration directory /export/common/hadoop/conf and chown it to hdfs:hadoop)
         
<property><name>ssl.server.truststore.location</name><value>/export/common/hadoop/conf/truststore</value><description>Truststore to be used by NN and DN. Must be specified.</description></property>
<property><name>ssl.server.truststore.password</name><value>123456</value><description>Optional. Default value is "".</description></property>
<property><name>ssl.server.truststore.type</name><value>jks</value><description>Optional. The keystore file format, default value is "jks".</description></property>
<property><name>ssl.server.truststore.reload.interval</name><value>10000</value><description>Truststore reload check interval, in milliseconds. Default value is 10000 (10 seconds).</description></property>
<property><name>ssl.server.keystore.location</name><value>/export/common/hadoop/conf/keystore</value><description>Keystore to be used by NN and DN. Must be specified.</description></property>
<property><name>ssl.server.keystore.password</name><value>123456</value><description>Must be specified.</description></property>
<property><name>ssl.server.keystore.keypassword</name><value>123456</value><description>Must be specified.</description></property>
<property><name>ssl.server.keystore.type</name><value>jks</value><description>Optional. The keystore file format, default value is "jks".</description></property>
<property><name>ssl.server.exclude.cipher.list</name><value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,SSL_RSA_WITH_RC4_128_MD5</value><description>Optional. The weak security cipher suites that you want excluded from SSL communication.</description></property>
ssl-client.xml (place it in the Hadoop configuration directory /export/common/hadoop/conf and chown it to hdfs:hadoop)
<property><name>ssl.client.truststore.location</name><value>/export/common/hadoop/conf/truststore</value><description>Truststore to be used by clients like distcp. Must be specified.</description></property>
<property><name>ssl.client.truststore.password</name><value>123456</value><description>Optional. Default value is "".</description></property>
<property><name>ssl.client.truststore.type</name><value>jks</value><description>Optional. The keystore file format, default value is "jks".</description></property>
<property><name>ssl.client.truststore.reload.interval</name><value>10000</value><description>Truststore reload check interval, in milliseconds. Default value is 10000 (10 seconds).</description></property>
<property><name>ssl.client.keystore.location</name><value>/export/common/hadoop/conf/keystore</value><description>Keystore to be used by clients like distcp. Must be specified.</description></property>
<property><name>ssl.client.keystore.password</name><value>123456</value><description>Optional. Default value is "".</description></property>
<property><name>ssl.client.keystore.keypassword</name><value>123456</value><description>Optional. Default value is "".</description></property>
<property><name>ssl.client.keystore.type</name><value>jks</value><description>Optional. The keystore file format, default value is "jks".</description></property>
2. Settings to add for YARN

yarn-site.xml
<property><name>yarn.web-proxy.principal</name><value>HTTP/_HOST@HADOOP.COM</value></property>
<property><name>yarn.web-proxy.keytab</name><value>/export/common/kerberos5/hdfs.keytab</value></property>
<property><name>yarn.resourcemanager.principal</name><value>hdfs/_HOST@HADOOP.COM</value></property>
<property><name>yarn.resourcemanager.keytab</name><value>/export/common/kerberos5/hdfs.keytab</value></property>
<property><name>yarn.nodemanager.principal</name><value>hdfs/_HOST@HADOOP.COM</value></property>
<property><name>yarn.nodemanager.keytab</name><value>/export/common/kerberos5/hdfs.keytab</value></property>
<property><name>yarn.nodemanager.container-executor.class</name><value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value></property>
<property><name>yarn.nodemanager.linux-container-executor.group</name><value>hdfs</value></property>
<property><name>yarn.timeline-service.http-authentication.type</name><value>kerberos</value><description>Defines authentication used for the timeline server HTTP endpoint. Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME#</description></property>
<property><name>yarn.timeline-service.principal</name><value>hdfs/_HOST@HADOOP.COM</value></property>
<property><name>yarn.timeline-service.keytab</name><value>/export/common/kerberos5/hdfs.keytab</value></property>
<property><name>yarn.timeline-service.http-authentication.kerberos.principal</name><value>HTTP/_HOST@HADOOP.COM</value></property>
<property><name>yarn.timeline-service.http-authentication.kerberos.keytab</name><value>/export/common/kerberos5/hdfs.keytab</value></property>
<property><name>yarn.nodemanager.container-localizer.java.opts</name><value>-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=192.168.0.49:88</value></property>
<property><name>yarn.nodemanager.health-checker.script.opts</name><value>-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=192.168.0.49:88</value></property>
mapred-site.xml
<property><name>mapreduce.map.java.opts</name><value>-Xmx1638M -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=192.168.0.49:88</value></property>
<property><name>mapreduce.reduce.java.opts</name><value>-Xmx3276M -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=192.168.0.49:88</value></property>
<property><name>mapreduce.jobhistory.keytab</name><value>/export/common/kerberos5/hdfs.keytab</value></property>
<property><name>mapreduce.jobhistory.principal</name><value>hdfs/_HOST@HADOOP.COM</value></property>
<property><name>mapreduce.jobhistory.webapp.spnego-keytab-file</name><value>/export/common/kerberos5/hdfs.keytab</value></property>
<property><name>mapreduce.jobhistory.webapp.spnego-principal</name><value>HTTP/_HOST@HADOOP.COM</value></property>
<property><name>mapred.child.java.opts</name><value>-Xmx1024m -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=192.168.0.49:88</value></property>
<property><name>yarn.app.mapreduce.am.command-opts</name><value>-Xmx3276m -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=192.168.0.49:88</value></property>
 
3. Settings to add for Hive

hive-site.xml
<property><name>hive.server2.authentication</name><value>KERBEROS</value></property>
<property><name>hive.server2.authentication.kerberos.principal</name><value>hdfs/_HOST@HADOOP.COM</value></property>
<property><name>hive.server2.authentication.kerberos.keytab</name><value>/export/common/kerberos5/hdfs.keytab</value></property>
<property><name>hive.metastore.sasl.enabled</name><value>true</value></property>
<property><name>hive.metastore.kerberos.keytab.file</name><value>/export/common/kerberos5/hdfs.keytab</value></property>
<property><name>hive.metastore.kerberos.principal</name><value>hdfs/_HOST@HADOOP.COM</value></property>
4. Settings to add for HBase

hbase-site.xml
<property><name>hbase.security.authentication</name><value>kerberos</value></property>
<property><name>hbase.rpc.engine</name><value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value></property>
<property><name>hbase.coprocessor.region.classes</name><value>org.apache.hadoop.hbase.security.token.TokenProvider</value></property>
<property><name>hbase.master.kerberos.principal</name><value>hdfs/_HOST@HADOOP.COM</value></property>
<property><name>hbase.master.keytab.file</name><value>/export/common/kerberos5/hdfs.keytab</value></property>
<property><name>hbase.regionserver.kerberos.principal</name><value>hdfs/_HOST@HADOOP.COM</value></property>
<property><name>hbase.regionserver.keytab.file</name><value>/export/common/kerberos5/hdfs.keytab</value></property>
<property><name>hbase.thrift.keytab.file</name><value>/export/common/kerberos5/hdfs.keytab</value></property>
<property><name>hbase.thrift.kerberos.principal</name><value>hdfs/_HOST@HADOOP.COM</value></property>
<property><name>hbase.rest.keytab.file</name><value>/export/common/kerberos5/hdfs.keytab</value></property>
<property><name>hbase.rest.kerberos.principal</name><value>hdfs/_HOST@HADOOP.COM</value></property>
<property><name>hbase.rest.authentication.type</name><value>kerberos</value></property>
<property><name>hbase.rest.authentication.kerberos.principal</name><value>HTTP/_HOST@HADOOP.COM</value></property>
<property><name>hbase.rest.authentication.kerberos.keytab</name><value>/export/common/kerberos5/hdfs.keytab</value></property>

III. Common Kerberos commands

Destroy the current ticket cache: kdestroy
Open the kadmin console on the master KDC: kadmin.local
List the Kerberos tickets currently held: klist
Obtain a ticket from a keytab: kinit -kt /export/common/kerberos5/kadm5.keytab admin/admin@HADOOP.COM
Inspect the contents of a keytab: klist -k -e /export/common/kerberos5/hdfs.keytab
Export principals into a keytab file: kadmin.local -q "xst -k /export/common/kerberos5/hdfs.keytab admin/admin@HADOOP.COM"
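The hdfs.keytab referenced throughout this post is assumed to already contain the hdfs/<fqdn> and HTTP/<fqdn> principals for every node (created when the KDC was set up in part 1). If a principal is missing, it can be created and exported roughly as follows (a sketch using standard kadmin commands; the hostname is just an example):
   $ kadmin.local -q "addprinc -randkey hdfs/v0106-c0a80019.4eb06.c.local@HADOOP.COM"
   $ kadmin.local -q "addprinc -randkey HTTP/v0106-c0a80019.4eb06.c.local@HADOOP.COM"
   $ kadmin.local -q "xst -k /export/common/kerberos5/hdfs.keytab hdfs/v0106-c0a80019.4eb06.c.local@HADOOP.COM HTTP/v0106-c0a80019.4eb06.c.local@HADOOP.COM"
   $ klist -k -e /export/common/kerberos5/hdfs.keytab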
IV. Quick test

Test HDFS: switch to the hdfs user and run hdfs dfs -ls /; it should now require authentication. Then run kinit -kt /export/common/kerberos5/hdfs.keytab hdfs/`hostname | awk '{print tolower($0)}'` and list again. If results come back, the HDFS integration with Kerberos is working.
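The same check as a copy-pasteable sequence (a sketch only; it assumes the keytab path used throughout this post and a principal of the form hdfs/<fqdn>@HADOOP.COM):
   $ su - hdfs
   $ kdestroy                  # drop any cached ticket first
   $ hdfs dfs -ls /            # expected to fail with a GSS/SASL "No valid credentials" error
   $ kinit -kt /export/common/kerberos5/hdfs.keytab hdfs/`hostname | awk '{print tolower($0)}'`
   $ klist                     # confirm the ticket was obtained
   $ hdfs dfs -ls /            # should now list the root directory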
V. Troubleshooting

1、Caused by: java.io.IOException: Failed on local exception: java.io.IOException: Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.; Host Details : local host is: "v0106-c0a8003a.4eb06.c.local/192.168.0.58"; destination host is: "v0106-c0a80019.4eb06.c.local":8020;

Analysis: after configuring one Hadoop node, switching to the hdfs user and running a hive command produced the error above. The first suspicion was a key mismatch between the target machine and the requesting machine, so every Hadoop node was made a KDC client, but running hive on a client still gave the same error. Further research showed the real cause: the client code performs a Kerberos login while HDFS itself had not actually enabled Kerberos, so the HDFS configuration had to be checked to confirm it had taken effect.
Resolution: the configuration took effect after restarting the cluster.
2、bash: keytools: command not found

Resolution: the keytool command is not being found (note the error shows a mistyped keytools); set JAVA_HOME and make sure $JAVA_HOME/bin is on the PATH.
3、java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "v0106-c0a8003a.4eb06.c.local/192.168.0.58"; destination host is: "v0106-c0a80019.4eb06.c.local":8020;

Analysis: after restarting the cluster, running hdfs dfs -ls / as the hdfs user produced the error above. The first assumption was a missing ticket, so kinit -kt /export/common/kerberos5/hdfs.keytab hdfs/v0106-c0a8003a.4eb06.c.local@HADOOP.COM was run, but the same error came back, so further debugging was needed.
Debugging: first run: export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true -Dlog.enable-console=true -Dsun.security.krb5.debug=true ${HADOOP_OPTS}"
export HADOOP_ROOT_LOGGER=DEBUG,console
Then run an HDFS operation such as hadoop dfs -ls /; the Kerberos debug output appears on the console. In this case the debug output still did not reveal anything useful.
Resolution 1: kdc.conf had master_key_type and supported_enctypes left at the default aes256-cts. Because of export restrictions, Java 8 needs extra policy files to use AES-256: download the JCE Unlimited Strength Jurisdiction Policy Files for JDK 8 from https://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html and copy local_policy.jar and US_export_policy.jar over the ones in $JAVA_HOME/jre/lib/security.
Resolution 2: replacing the Java policy files on every node is tedious. The simpler option is to avoid AES-256 in the first place: when installing the KDC, edit the default kdc.conf and remove aes256-cts from master_key_type and supported_enctypes.
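For illustration, a kdc.conf realm section with aes256-cts removed might look roughly like this (an example only, not taken from the original post; paths and the exact enctype list depend on your KDC installation):
   [realms]
    HADOOP.COM = {
     acl_file = /var/kerberos/krb5kdc/kadm5.acl
     admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
     master_key_type = aes128-cts
     supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal
    }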
4、retry.RetryInvocationHandler: Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over /192.168.0.92:8020 after 4 fail over attempts.

Description: after resolving problem 3, running hdfs dfs -ls / again produced the error above.
Analysis: hdfs haadmin -getServiceState nn1 (and nn2) showed both NameNodes in standby state even though ZKFC was running. (The nn1/nn2 names come from the hdfs-site.xml configuration.)
Resolution: 1) If ZKFC is not running, starting it is enough: cd /export/hadoop/sbin && ./hadoop-daemon.sh start zkfc
2) Manually switch the active/standby roles: hdfs haadmin -transitionToActive nn1 --forcemanual (to switch a NameNode back to standby: hdfs haadmin -transitionToStandby nn2)
5、Caused by: GSSException: No valid credentials provided (Mechanism level: Ticket expired (32) - PROCESS_TGS)。Caused by: KrbException: Identifier doesn't match expected value (906)。Failed on local exception: java.io.IOException: Couldn't setup connection for hdfs/v0106-c0a8005c.4eb06.c.local@HADOOP.COM to /192.168.0.25:8485; Host Details : local host is: "v0106-c0a8005c.4eb06.c.local/192.168.0.92"; destination host is: "v0106-c0a80019.4eb06.c.local":8485;

Analysis: 1) Check default_tgs_enctypes, default_tkt_enctypes and permitted_enctypes in /etc/krb5.conf and remove arcfour-hmac-md5 from them, because kdc.conf has no matching enctype.
2) Check HDFS safe mode with hdfs dfsadmin -safemode get: the cluster was in safe mode. hdfs dfsadmin -safemode leave took it out, but it went back into safe mode after the NameNode was restarted.
Resolution: in the end the krb5 packages were upgraded:
1) Remove the old version: yum remove -y libkadm5-1.15.1-18.el7.x86_64; rpm -e --nodeps krb5-libs-1.15.1-18.el7.x86_64
(Note: when yum cannot remove a package, fall back to rpm with --nodeps; rpm -qa | grep krb5 or yum list installed | grep krb5 shows which krb5 packages are installed.)
2) Install the new version:
rpm -ivh krb5-libs-1.15.1-37.el7_6.x86_64.rpm
rpm -ivh libkadm5-1.15.1-37.el7_6.x86_64.rpm
rpm -ivh krb5-server-1.15.1-37.el7_6.x86_64.rpm   (not needed on client-only nodes)
rpm -ivh krb5-workstation-1.15.1-37.el7_6.x86_64.rpm
6、java.net.ConnectException: Call From xxx to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused

Description: after resolving problem 5, starting the NameNode produced the error above.
Resolution: reformat the NameNode with hadoop namenode -format, then restart both NameNodes. They still both came up in standby state, with the error log described in problem 7 below.
7、WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unable to trigger a roll of the active NN

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category JOURNAL is not supported in state standby
Description: after resolving problem 6, the error above appeared. Forcing an active NameNode with hdfs haadmin -transitionToActive --forcemanual nn1 made the NameNode process die with: Incompatible namespaceID for journal Storage Directory /export/data/hadoop/journal/yinheforyanghang: NameNode has nsId 420017674 but storage has nsId 1693249738
Resolution: edit /export/data/hadoop/namenode/current/VERSION and change 420017674 to 1693249738. After a restart, both NameNodes were still in standby state.
8、Incompatible clusterID for journal Storage Directory /export/data/hadoop/journal/yinheforyanghang: NameNode has clusterId 'CID-7c4fa72e-afa0-4334-bafd-68af133d4ffe' but storage has clusterId 'CID-52b9fb86-24b4-43a3-96d3-faa584e96d69'

Description: after resolving problem 7 the NameNodes were restarted and still came up standby; forcing active again with hdfs haadmin -transitionToActive --forcemanual nn1 killed the NameNode process with the error above.
Resolution: edit /export/data/hadoop/namenode/current/VERSION and change the clusterID to the value reported for storage.
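For reference, the NameNode VERSION file edited in problems 7 and 8 has roughly this shape (the values below are only illustrative, pieced together from the error messages above; change only the field the error complains about):
   namespaceID=1693249738
   clusterID=CID-52b9fb86-24b4-43a3-96d3-faa584e96d69
   cTime=0
   storageType=NAME_NODE
   blockpoolID=BP-xxxxxxxxxx-192.168.0.92-xxxxxxxxxxxxx
   layoutVersion=-63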
9、There appears to be a gap in the edit log. We expected txid 1, but got txid 2941500.

Description: after resolving problem 8, nn1 became active, but the nn2 process kept dying and its log showed the error above. The error means the NameNode metadata on that node is corrupted; it has to be recovered before the NameNode can start.
Resolution: run cd $HADOOP_HOME/bin && hadoop namenode -recover and answer c to every prompt to recover the metadata, then restart the NameNode.
10、Caused by: ExitCodeException exitCode=24: File /home/hadoop/core/hadoop-2.7.6/etc/hadoop/container-executor.cfg must be owned by root, but is owned by 1000

Description: this error appeared when starting the NodeManager.
Resolution: 1) On Hadoop 2.7.6 this happens because LinuxContainerExecutor launches containers through the container-executor binary, and for security reasons the configuration file container-executor.cfg and every directory on its path must be owned by root. Run chown root on each directory level (in this environment the path is /export/server/hadoop-2.7.6/etc/hadoop/container-executor.cfg).
2) After that, another problem appeared:
Caused by: ExitCodeException exitCode=22: Invalid permissions on container-executor binary.
This one really is a permission problem; the permissions of $HADOOP_HOME/bin/container-executor also need to be fixed:
chown root:hadoop $HADOOP_HOME/bin/container-executor
chmod 6050 $HADOOP_HOME/bin/container-executor
Starting the NodeManager again then produced the next problem:
3) Caused by: ExitCodeException exitCode=24: Can't get configured value for yarn.nodemanager.linux-container-executor.group
Edit /export/server/hadoop-2.7.6/etc/hadoop/container-executor.cfg:
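A container-executor.cfg that satisfies this check typically looks like the sketch below (illustrative values, not copied from the original post): the group must match yarn.nodemanager.linux-container-executor.group in yarn-site.xml (hdfs in this setup), and min.user.id may need lowering so the UID that runs the jobs (1000 in the error above) is allowed.
   yarn.nodemanager.linux-container-executor.group=hdfs
   banned.users=bin
   min.user.id=500
   allowed.system.users=hdfs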

Then start the NodeManager again: cd /export/hadoop/sbin && ./yarn-daemon.sh start nodemanager — this time it starts successfully.
11、ulimit -a for user hdfs

Analysis: the DataNode failed to restart. The .out log (/export/server/hadoop_home/logs/hadoop-hdfs-datanode-v0106-c0a80049.4eb06.c.local.out) only showed "ulimit -a for user hdfs", while the corresponding .log file (hadoop-hdfs-datanode-v0106-c0a80049.4eb06.c.local.log) showed "Address already in use;", meaning port 50070 was already taken.
Resolution: netstat -anlp | grep 50070 showed:
tcp 0 0 192.168.0.73:50070 192.168.0.73:50070 ESTABLISHED 2931329/python2
Run kill -9 2931329, then restart the DataNode and it comes up normally.
12、Caused by: MetaException(message:Version information not found in metastore. )

Description: starting the metastore with nohup hive --service metastore & produced this error, even though hive-site.xml already had hive.metastore.schema.verification=false.
Resolution: su - hdfs
vim ~/.bashrc and add the following environment variables:
export HIVE_CONF_DIR=/export/common/hive/conf
export HIVE_HOME=/export/hive
export PATH=".:$HIVE_HOME/bin:$HIVE_HOME/sbin:$PATH"
13、Caused by: java.lang.IllegalStateException: Cannot skip to less than the current value (=492685), where newValue=16385。

Description: after running for a while the standby NameNode died, reporting the error above at startup. Formatting that NameNode let it start again, but then every DataNode died with: java.io.IOException: Incompatible clusterIDs in /home/hduser/mydata/hdfs/datanode: namenode clusterID = CID-8e09ff25-80fb-4834-878b-f23b3deb62d0; datanode clusterID = CID-cd85e59a-ed4a-4516-b2ef-67e213cfa2a1. Editing /export/data/hadoop/namenode/current/VERSION and setting clusterID to the DataNode's value fixed that, but starting the NameNode again then reported: got premature end-of-file at txid 0; expected file to go up to 4, which was resolved by running hdfs namenode -bootstrapStandby.
14、This node has namespaceId '279103593 and clusterId 'xxx' but the requesting node expected '279103593' and 'xxx'。

Resolution: after fixing problem 13 the error above appeared; edit the JournalNode's VERSION file under its journal storage directory (/export/data/hadoop/journal/<nameservice>/current/VERSION, the directory named in problems 7 and 8) and change its clusterId to match the NameNode's.
 
 
 

Source: https://blog.csdn.net/changlina_1989/article/details/112058879