I'm trying to get whirr-scm working on EC2 with these templates:

whirr.instance-templates=1 scmserver,1 cdhclient,3 scmnode

During the launch, I see this:

SCM Admin Console available at http://ec2-184-73-104-3.compute-1.amazonaws.com:7180
Nodes in cluster (copy into text area when setting up the cluster):
10.38.17.106
10.80.117.34
10.195.38.207
Hue UI will be available at http://ec2-184-73-104-3.compute-1.amazonaws.com:8088

Hue is not available at that URL, and it isn't even installed on that node. What actually happens is that Hue gets installed on all the scmnodes, and one of those nodes seems to get picked as the host for Hue. Once you get around the *.pyc problem, Hue does start up on 10.195.38.207:8088, but that private IP isn't reachable from outside, so I can't browse to the UI.

When I try to delete Hue from SCM and then create another Hue service, my only options are to install it on one of those three scmnodes, when what I really want to do is select the scmserver node or the CDH client machine, since those have public IPs and DNS names.

Should I just install Hue manually on the scmserver node, or is there a workaround?
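For completeness, there's nothing unusual in the recipe beyond that template line; it's roughly the standard Whirr EC2 settings (the cluster name, hardware id, and key paths below are placeholders, not my real values):

whirr.cluster-name=scm-test
whirr.instance-templates=1 scmserver,1 cdhclient,3 scmnode
whirr.provider=aws-ec2
whirr.identity=${env:AWS_ACCESS_KEY_ID}
whirr.credential=${env:AWS_SECRET_ACCESS_KEY}
whirr.hardware-id=m1.large
whirr.private-key-file=${sys:user.home}/.ssh/id_rsa
whirr.public-key-file=${sys:user.home}/.ssh/id_rsa.pub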
(As you've noticed by now, more SCM developers hang out on scm-users@ than on whirr-scm's GitHub, so you might get better answers on that mailing list.)
Why don't you install the scm-agent on your scmserver node? Then SCM will let you add Hue on that machine, and you'll be off to the races. You might end up adding a Hue safety valve to get Hue's web server to bind to 0.0.0.0 (or the public IP). I think http_host is the config option, but that's from memory.
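If it helps, the safety valve snippet would look something like this (again from memory, so double-check the section and option names against your hue.ini before relying on them):

[desktop]
# listen on all interfaces instead of the node's private address
http_host=0.0.0.0
http_port=8088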
You can also configure the agent to pretend that its address is the public one rather than the private one. I haven't experimented enough on EC2 to know which approach is better, but it's likely that you have to choose all-private or all-public. Hue, fortunately, can bind to 0.0.0.0, so that should hopefully work for you.
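That would be an override in the agent's /etc/cloudera-scm-agent/config.ini on each node; I believe the relevant knob looks roughly like the following, but treat the exact name as a guess and check the comments in that file (the hostname shown is just a placeholder):

[General]
# report this node's public DNS name to the SCM server instead of its private address
listening_hostname=ec2-xx-xx-xxx-xx.compute-1.amazonaws.com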