Question about the parameters in the hdfs-dptst-example.cfg config file #27
勇幸: The Hadoop config files are generated according to the base_port setting; the RPC port corresponds to base_port.
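As a rough illustration of the mapping described above, here is a hypothetical sketch. The function name is invented, and the http offset of +1 is an assumption; only "RPC port = base_port" comes from the thread, so treat this as a sketch rather than Minos's actual port-assignment code.

```python
# Hypothetical sketch: derive a job's service ports from its base_port.
# "rpc = base_port" is stated in this thread; "http = base_port + 1" is
# an assumed convention for illustration only.
def derive_ports(base_port):
    return {"rpc": base_port, "http": base_port + 1}

# For the [journalnode] section below, base_port=12100 would give
# rpc=12100 and (under the assumed convention) http=12101.
journalnode_ports = derive_ports(12100)
```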
xiaoguanyu: But I have already deployed hadoop2 in my cluster. Is the Hadoop config file generated from the base_port setting the hadoop2/etc/hadoop/hdfs-site.xml file?
xiaoguanyu: How does the minos client work? If I just run ./deploy install zookeeper dptst and ./deploy install hdfs dptst-example, will it automatically install a zk and hdfs even though zk and hdfs are already deployed in the cluster?
勇幸: Was your hadoop2 deployed by minos? The Hadoop config files generated by minos are placed under the job's run path on the cluster machines, e.g. /home/work/app/hdfs/dptst-example/journalnode/hdfs-site.xml
xiaoguanyu: My hadoop2 was deployed by myself. I misunderstood; I thought running install would automatically install Hadoop. I changed the … in client/supervisor_client.py
勇幸: That is a connection error. Can you access 10.38.11.59:9001? Before deploying, you need to set up Tank first, and supervisord must be deployed on all production machines.
xiaoguanyu: Hello, does Tank also need to be deployed on every production machine that runs supervisord?
Zesheng Wu: Only one Tank instance is needed; supervisord must be deployed on every machine. See the architecture diagram in readme.md for details.
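To verify that supervisord is actually reachable on each machine, one could query its XML-RPC interface (supervisord exposes it at /RPC2 on its HTTP port; port 9001 comes from this thread). This is a minimal sketch, not part of Minos; the function name and host list are illustrative.

```python
# Minimal reachability check for supervisord's XML-RPC interface.
# Port 9001 is taken from this thread; hosts are examples from it.
from xmlrpc.client import ServerProxy


def supervisor_state(host, port=9001):
    """Return supervisord's state name, or "UNREACHABLE" on connection failure."""
    try:
        proxy = ServerProxy("http://%s:%d/RPC2" % (host, port))
        # supervisor.getState() is part of supervisord's standard XML-RPC API.
        return proxy.supervisor.getState()["statename"]
    except OSError:
        return "UNREACHABLE"


for host in ("10.38.11.59", "10.38.11.8"):
    print(host, supervisor_state(host))
```

A healthy instance reports the state name "RUNNING"; a machine where supervisord is down or unreachable shows "UNREACHABLE".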
xiaoguanyu: I deployed Tank and supervisord only on 10.38.11.59, and only supervisord on 10.38.11.8; the two machines share the Tank on .59. The supervisor status page for 10.38.11.8 is shown in Figure 2. What do the two failure entries mean? Figure 2: 10.38.11.8:9001
Zesheng Wu: I can't see the image.
xiaoguanyu: (resending the same question as above) Please see the image in the attachment, thanks.
File path: minos/config/conf/hdfs/hdfs-dptst-example.cfg
[journalnode]
base_port=12100
host.0=10.38.11.59
host.1=10.38.11.134
host.2=10.38.11.135
[namenode]
base_port=12200
host.0=10.38.11.59
host.1=10.38.11.134
[zkfc]
base_port=12300
[datanode]
base_port=12400
host.0=10.38.11.134
host.1=10.38.11.135
Should the base_port values in these parameters match the RPC port numbers of the journalnode, namenode, etc. in the Hadoop config files, or can they be chosen arbitrarily?