Hadoop Cluster Installation (Three CentOS VMs)
2023-04-03
1. VM Environment Preparation

OS       Hostname   IP            Hadoop version   JDK version
centos9  hadoop01   10.211.55.4   3.3.2            1.8.0_322
centos9  hadoop02   10.211.55.7   3.3.2            1.8.0_322
centos9  hadoop03   10.211.55.6   3.3.2            1.8.0_322
2. Configure hosts on All Three VMs
Add the following entries to /etc/hosts on each VM:
10.211.55.4 hadoop01
10.211.55.7 hadoop02
10.211.55.6 hadoop03
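The three entries can be appended with a small idempotent loop. This is a sketch that defaults to a demo file under /tmp (an assumption, for safe testing); set HOSTS_FILE=/etc/hosts and run as root to apply it for real:

```shell
# Append each cluster entry to a hosts file, skipping lines already present.
# HOSTS_FILE defaults to a demo path; point it at /etc/hosts (as root) for real use.
HOSTS_FILE="${HOSTS_FILE:-/tmp/hadoop_hosts.demo}"
for entry in "10.211.55.4 hadoop01" "10.211.55.7 hadoop02" "10.211.55.6 hadoop03"; do
    # -x matches the whole line, -F treats the entry as a fixed string
    grep -qxF "$entry" "$HOSTS_FILE" 2>/dev/null || echo "$entry" >> "$HOSTS_FILE"
done
```

Re-running the loop leaves the file unchanged, so it is safe to run on all three VMs.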
3. Create the hadoop User
groupadd hadoop
useradd -d /home/hadoop -g hadoop -s /bin/bash -m hadoop
visudo
#add this line below the root entry:
hadoop ALL=(ALL) ALL
#set the hostname on the first VM
hostnamectl set-hostname hadoop01
#reboot
reboot
#verify the change
hostname

#set the hostname on the second VM
hostnamectl set-hostname hadoop02
#reboot
reboot
#verify the change
hostname

#set the hostname on the third VM
hostnamectl set-hostname hadoop03
#reboot
reboot
#verify the change
hostname
Disable the Firewall
#stop the firewall
systemctl stop firewalld
#check its status
systemctl status firewalld
#disable it at boot
systemctl disable firewalld
Passwordless SSH Login
#switch to the hadoop user
su hadoop
#first check whether a key pair already exists; regenerating would invalidate
#keys that have already been distributed, so just view the existing public key
cat ~/.ssh/id_rsa.pub
#if no key exists, generate a public/private key pair on this machine
ssh-keygen -t rsa
#copy the public key to the remote hosts; do this on all three VMs so the
#hadoop account can ssh directly between all three servers
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@<ip-address>
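Distributing the key is three near-identical commands, so a loop helps. This is a hedged sketch: DRY_RUN defaults to 1 (my assumption, for safe illustration) so it only prints the commands; set DRY_RUN=0 to actually copy the key:

```shell
# Print (or, with DRY_RUN=0, run) ssh-copy-id for each node in the cluster.
HOSTS="hadoop01 hadoop02 hadoop03"
DRY_RUN="${DRY_RUN:-1}"   # set DRY_RUN=0 to actually copy the key
for h in $HOSTS; do
    cmd="ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@$h"
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "$cmd"    # dry run: show what would be executed
    else
        $cmd           # real run: prompts for the hadoop password on each host
    fi
done
```

Run it once on each VM so every node ends up holding every other node's key.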
4. Cluster Planning
Note 1: do not deploy the NameNode and the SecondaryNameNode on the same server.
Note 2: the ResourceManager is also memory-hungry; do not deploy it on the same server as the NameNode or SecondaryNameNode.
        hadoop01              hadoop02                      hadoop03
HDFS    DataNode, NameNode    DataNode                      DataNode, SecondaryNameNode
YARN    NodeManager           NodeManager, ResourceManager  NodeManager
5. Installation
#download and extract
mkdir -p /opt/module /opt/software
cd /opt/software
wget https://dlcdn.apache.org/hadoop/common/hadoop-3.3.2/hadoop-3.3.2.tar.gz
tar -xzvf hadoop-3.3.2.tar.gz -C /opt/module
cd /opt/module
mv hadoop-3.3.2 hadoop
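An alternative to renaming the extracted directory is to keep it versioned and point a stable symlink at it, which turns a future upgrade into a one-line switch. This sketch uses a demo directory under /tmp (an assumption); on the real cluster MODULE_DIR would be /opt/module:

```shell
# Keep the versioned directory and expose it via a stable "hadoop" symlink.
MODULE_DIR="${MODULE_DIR:-/tmp/module.demo}"   # demo path; use /opt/module on the cluster
mkdir -p "$MODULE_DIR/hadoop-3.3.2"
# -n replaces an existing symlink instead of descending into it
ln -sfn "$MODULE_DIR/hadoop-3.3.2" "$MODULE_DIR/hadoop"
```

With this layout the configs can still reference /opt/module/hadoop exactly as written below; upgrading later only repoints the link.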
Edit core-site.xml with the following contents:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/module/hadoop/data/tmp</value>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop03:50090</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/module/hadoop/dfs/name</value>
    <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
  </property>
  <!--
  <property>
    <name>dfs.hosts</name>
    <value>hadoop01</value>
    <description>If necessary, use these files to control the list of allowable datanodes.</description>
  </property>
  -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/module/hadoop/dfs/data</value>
    <description>Comma-separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
  </property>
</configuration>
yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop02</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
workers
hadoop01
hadoop02
hadoop03
hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export HADOOP_HOME=/opt/module/hadoop
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
export HDFS_NAMENODE_USER=hadoop
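Because hadoop-env.sh is sourced as a shell script, HADOOP_HOME must be defined before any line that expands ${HADOOP_HOME}. The sketch below writes the exports in dependency order to a temporary file (an assumption for illustration, not the real config path) and sources it to show the expansion:

```shell
# Write the exports in dependency order to a temp file and source it.
ENV_FILE="$(mktemp)"
cat > "$ENV_FILE" <<'EOF'
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export HADOOP_HOME=/opt/module/hadoop
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
export HDFS_NAMENODE_USER=hadoop
EOF
. "$ENV_FILE"
# The derived variables now expand from HADOOP_HOME
echo "HADOOP_CONF_DIR=$HADOOP_CONF_DIR"
```
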