OpenStack Nova enable/disable service #6996

Closed
37 changes: 32 additions & 5 deletions app/models/ems_refresh/save_inventory.rb
@@ -246,12 +246,39 @@ def save_networks_inventory(hardware, hashes, mode = :refresh)
def save_system_services_inventory(parent, hashes, mode = :refresh)
return if hashes.nil?

#######
# tripleo specific
#######

# if the parent is an OpenStack Cloud manager
# and the parent has an Infra provider
if parent.kind_of?(ManageIQ::Providers::Openstack::CloudManager)
Contributor: For the pluggable providers, we are trying not to add any more specific providers into the generic code. So there should not be a condition about OpenStack; can it be rewritten without that?

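One hedged sketch of how that could look, assuming the generic code asks the parent for an optional hook instead of testing its class (`system_services_hosts` is an illustrative name, not an existing API):

```ruby
# Sketch only: a provider-agnostic hook. Managers that want per-host grouping
# (e.g. an OpenStack CloudManager backed by a TripleO infra provider) would
# implement the hypothetical system_services_hosts; everyone else falls through.
def save_system_services_inventory(parent, hashes, mode = :refresh)
  return if hashes.nil?

  hosts = parent.respond_to?(:system_services_hosts) ? parent.system_services_hosts : nil

  if hosts
    hosts.each do |host|
      # match parsed hashes to the host by hypervisor hostname, then swap in the record
      hashes_for_host = hashes.select { |h| h[:host] == host.hypervisor_hostname }
                              .map    { |h| h.merge(:host => host) }
      save_inventory_multi(host.system_services, hashes_for_host, [], [:name])
    end
  else
    deletes = (mode == :scan ? :use_association : nil)
    save_inventory_multi(parent.system_services, hashes, deletes, [:typename, :name])
  end
end
```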
infra_ems = parent.provider \
&& parent.provider.kind_of?(ManageIQ::Providers::Openstack::Provider) \
&& parent.provider.infra_ems
if infra_ems
# for each host
infra_ems.hosts.map do |host|
# select hashes with that hostname
hashes_for_host = hashes.select do |hash|
hash[:host] == host.hypervisor_hostname
# and put host instead of hostname there
end.map do |hash|
hash[:host] = host
hash
end
# save system_services for one host
save_inventory_multi(host.system_services, hashes_for_host, [], [:name])
Contributor: Hm, it's bad that it's so hard to access the one table from multiple places. Maybe we should bring in a new table, cloud_services, which would optionally belong to host and system_service? What do you think? It could clean up the code a bit.

Author: Solved by using a new table for the CloudService approach; see #7996.

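For reference, a rough sketch of what the suggested cloud_services table could look like, with optional links to both a host and a system service (column names here are guesses; the real model landed in #7996):

```ruby
# Hypothetical schema sketch only; see #7996 for the actual CloudService work.
class CreateCloudServices < ActiveRecord::Migration
  def change
    create_table :cloud_services do |t|
      t.string  :ems_ref
      t.string  :source             # e.g. "compute"
      t.string  :executable_name    # e.g. "nova-compute"
      t.string  :hostname
      t.string  :status             # enabled/disabled as reported by the Nova API
      t.integer :ems_id
      t.integer :host_id            # optional link to the infra host
      t.integer :system_service_id  # optional link to the systemd system service
      t.timestamps
    end
  end
end

class CloudService < ActiveRecord::Base
  belongs_to :ext_management_system, :foreign_key => :ems_id
  belongs_to :host            # optional
  belongs_to :system_service  # optional
end
```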
end
end
else
deletes = case mode
when :refresh then nil
when :scan then :use_association
end

save_inventory_multi(parent.system_services, hashes, deletes, [:typename, :name])
end
end

def save_guest_applications_inventory(parent, hashes)
3 changes: 2 additions & 1 deletion app/models/ems_refresh/save_inventory_cloud.rb
@@ -59,7 +59,8 @@ def save_ems_cloud_inventory(ems, hashes, target = nil)
:cloud_resource_quotas,
:cloud_object_store_containers,
:cloud_object_store_objects,
:resource_groups,
:system_services,
]

# Save and link other subsections
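Adding :system_services to child_keys is what routes the parsed hashes into save_system_services_inventory above. Roughly, the generic helper dispatches each child key to a matching save_<key>_inventory method; a simplified approximation (not the exact EmsRefresh code):

```ruby
# Approximate dispatch: for every child key present in the parsed hashes,
# call the corresponding save_<key>_inventory method on the same module.
def save_child_inventory(obj, hashes, child_keys)
  child_keys.each do |key|
    send("save_#{key}_inventory", obj, hashes[key]) if hashes.key?(key)
  end
end
```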
@@ -64,6 +64,7 @@ def ems_inv_to_hashes
get_volumes
get_snapshots
get_object_store
get_services

$fog_log.info("#{log_header}...Complete")

@@ -456,5 +457,63 @@ def clean_up_extra_flavor_keys
def add_instance_disk(disks, size, location, name)
super(disks, size, location, name, "openstack")
end

def get_services
Member: @Ladas, I thought you were already picking up the services off the hosts and displaying them in the UI. Is this different?

Contributor: @blomquisg Yeah, the API call in get_services returns only a subset of the services, together with the OpenStack enabled/disabled info.

# TODO(pblaho): use handled_list(:services) after fog with https://github.com/fog/fog/pull/3838 is released
# services = @compute_service.handled_list(:services)
services = @compute_service.services
process_collection(services, :system_services) { |service| parse_service(service) }
end
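process_collection is the parser's usual collect-and-index helper; paraphrased (not copied from the actual helper), it stores each [uid, hash] pair returned by the block into @data and @data_index:

```ruby
# Paraphrase of the helper used above: the block maps a raw Fog object to
# [uid, hash]; the hash is appended to @data[key] and indexed by uid.
def process_collection(collection, key)
  @data[key] ||= []
  collection.each do |item|
    uid, new_result = yield(item)
    @data_index.store_path(key, uid, new_result)
    @data[key] << new_result
  end
end
```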

def parse_service(service)
# <Fog::Compute::OpenStack::Service
Contributor: Delete these large comments, please.

# id=1,
# binary="nova-scheduler",
# host="overcloud-controller-0.localdomain",
# state="up",
# status="enabled",
# updated_at="2016-02-10T14:32:32.000000",
# zone="internal",
# disabled_reason=nil
# >

# <SystemService:0x0056064ec2e7f0
# id: 58,
# name: "openstack-nova-compute",
# svc_type: nil,
# typename: "linux_systemd",
# start: nil,
# image_path: nil,
# display_name: nil,
# depend_on_service: nil,
# depend_on_group: nil,
# object_name: nil,
# description: "OpenStack Nova Compute Server",
# vm_or_template_id: nil,
# enable_run_levels: nil,
# disable_run_levels: nil,
# host_id: 3,
# running: true,
# dependencies: {},
# systemd_load: "loaded",
# systemd_active: "active",
# systemd_sub: "running",
# host_service_group_id: 1,
# scheduling_status: nil>

uid = service.id

new_result = {
# TODO(pblaho): solve the issue with openstack- prefix
# maybe stop storing that prefix altogether
# the prefix is only used on RH systems with systemd for OpenStack services
:name => "openstack-#{service.binary}",
# hostname without domain[s]
:host => service.host.split('.').first,
:scheduling_status => service.status,
}

return uid, new_result
end
end
end
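To illustrate the mapping, a small informal example of what parse_service produces for a stubbed Fog service (not taken from the PR's specs; `parser` stands in for a RefreshParser instance):

```ruby
# FakeService only mimics the Fog::Compute::OpenStack::Service fields used above.
FakeService = Struct.new(:id, :binary, :host, :status)

svc = FakeService.new(1, "nova-scheduler", "overcloud-controller-0.localdomain", "enabled")
uid, hash = parser.send(:parse_service, svc)  # send in case the method ends up private

# uid  => 1
# hash => {
#   :name              => "openstack-nova-scheduler",
#   :host              => "overcloud-controller-0",
#   :scheduling_status => "enabled"
# }
```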
@@ -19,4 +19,33 @@ def unset_node_maintenance
def external_get_node_maintenance
ironic_fog_node.maintenance
end

def nova_system_service
# we need to be sure that the host has the compute service
system_services.find_by(:name => 'openstack-nova-compute')
end

def nova_fog_service
# TODO: check if the host is part of an OpenStack Infra provider
# the host's cluster needs a cloud assigned
cloud = ems_cluster.cloud
# the host's hypervisor hostname is used to select the matching service in OpenStack
host_name = hypervisor_hostname
fog_services = cloud.openstack_handle.compute_service.services
fog_services.find { |s| s.host =~ /#{host_name}/ && s.binary == 'nova-compute' }
end

def nova_fog_enable_service
nova_fog_service.enable
end

def nova_fog_disable_service
nova_fog_service.disable
end

def nova_service_refresh_scheduling_status
new_status = nova_fog_service.status
# look the record up once so the assignment is not lost before save
service = nova_system_service
service.scheduling_status = new_status if %w(enabled disabled).include?(new_status)
service.save
end
end
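A hedged end-to-end sketch of how these host methods might be used together once a host is connected to its overcloud (the host name is illustrative):

```ruby
# Assumes `host` is an InfraManager host whose cluster has a cloud assigned,
# as nova_fog_service above requires.
host = ManageIQ::Providers::Openstack::InfraManager::Host
         .find_by(:name => "overcloud-novacompute-0 (NovaCompute)")

host.nova_fog_disable_service                # stop Nova scheduling onto this node
host.nova_service_refresh_scheduling_status  # pull the status back into SystemService
host.nova_system_service.scheduling_status   # => "disabled"

host.nova_fog_enable_service                 # re-enable scheduling
host.nova_service_refresh_scheduling_status
host.nova_system_service.scheduling_status   # => "enabled"
```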
@@ -0,0 +1,5 @@
class AddSchedulingStatusToSystemServices < ActiveRecord::Migration
def change
add_column :system_services, :scheduling_status, :string
end
end
Member: @petrblaho Can you please pull this migration out into a separate PR so that it can be merged independently of the rest of the changes in this PR? We are approaching the deadline for schema changes for the darga branch, and I don't want the rest of the requested fixes in this PR to hold that up.

@@ -1,72 +1,128 @@
describe ManageIQ::Providers::Openstack::InfraManager::Refresher do
before(:each) do
_guid, _server, zone = EvmSpecHelper.create_guid_miq_server_zone
@ems = FactoryGirl.create(:ems_openstack_infra, :zone => zone, :hostname => "192.0.2.1",
_guid, _server, @zone = EvmSpecHelper.create_guid_miq_server_zone
@ems = FactoryGirl.create(:ems_openstack_infra, :zone => @zone, :hostname => "192.0.2.1",
:ipaddress => "192.0.2.1", :port => 5000, :api_version => 'v2',
:security_protocol => 'no-ssl')
@ems.update_authentication(
:default => {:userid => "admin", :password => "c51a4689e1df2153987f8a42f04185430d462186"})
:default => {:userid => "admin", :password => "2fc1310c997dfeafaf3920115306a086943c63db"})
end

it "will perform a full refresh" do
2.times do # Run twice to verify that a second run with existing data does not change anything
@ems.reload
# Caching OpenStack info between runs causes the tests to fail with:
# VCR::Errors::UnusedHTTPInteractionError
# Reset the cache so HTTP interactions are the same between runs.
@ems.reset_openstack_handle
context "without overcloud" do
it "will perform a full refresh" do
2.times do # Run twice to verify that a second run with existing data does not change anything
@ems.reload
# Caching OpenStack info between runs causes the tests to fail with:
# VCR::Errors::UnusedHTTPInteractionError
# Reset the cache so HTTP interactions are the same between runs.
@ems.reset_openstack_handle

# We need VCR to match requests differently here because fog adds a dynamic
# query param to avoid HTTP caching - ignore_awful_caching##########
# https://github.com/fog/fog/blob/master/lib/fog/openstack/compute.rb#L308
VCR.use_cassette("#{described_class.name.underscore}_rhos_juno",
:match_requests_on => [:method, :host, :path]) do
EmsRefresh.refresh(@ems)
EmsRefresh.refresh(@ems.network_manager)
end
@ems.reload

assert_table_counts_without_overcloud
assert_ems
assert_specific_host
assert_specific_public_template
end
end

it "will verify maintenance mode" do
# We need VCR to match requests differently here because fog adds a dynamic
# query param to avoid HTTP caching - ignore_awful_caching##########
# https://github.com/fog/fog/blob/master/lib/fog/openstack/compute.rb#L308
VCR.use_cassette("#{described_class.name.underscore}_rhos_juno", :match_requests_on => [:method, :host, :path]) do
VCR.use_cassette("#{described_class.name.underscore}_rhos_juno_maintenance",
:match_requests_on => [:method, :host, :path]) do
@ems.reload
@ems.reset_openstack_handle
EmsRefresh.refresh(@ems)
EmsRefresh.refresh(@ems.network_manager)
end
@ems.reload

@host = ManageIQ::Providers::Openstack::InfraManager::Host.all.detect { |x| x.name.include?('(NovaCompute)') }

expect(@host.maintenance).to eq(false)
expect(@host.maintenance_reason).to eq(nil)

assert_table_counts
assert_ems
assert_specific_host
assert_specific_public_template
@host.set_node_maintenance
EmsRefresh.refresh(@ems)
@ems.reload
@host.reload
expect(@host.maintenance).to eq(true)
expect(@host.maintenance_reason).to eq("CFscaledown")

@host.unset_node_maintenance
EmsRefresh.refresh(@ems)
@ems.reload
@host.reload
expect(@host.maintenance).to eq(false)
expect(@host.maintenance_reason).to eq(nil)
end
end
end

it "will verify maintenance mode" do
# We need VCR to match requests differently here because fog adds a dynamic
# query param to avoid HTTP caching - ignore_awful_caching##########
# https://github.com/fog/fog/blob/master/lib/fog/openstack/compute.rb#L308
VCR.use_cassette("#{described_class.name.underscore}_rhos_juno_maintenance",
:match_requests_on => [:method, :host, :path]) do
@ems.reload
@ems.reset_openstack_handle
EmsRefresh.refresh(@ems)
EmsRefresh.refresh(@ems.network_manager)
@ems.reload

@host = ManageIQ::Providers::Openstack::InfraManager::Host.all.detect { |x| x.name.include?('(NovaCompute)') }

expect(@host.maintenance).to eq(false)
expect(@host.maintenance_reason).to eq(nil)

@host.set_node_maintenance
EmsRefresh.refresh(@ems)
@ems.reload
@host.reload
expect(@host.maintenance).to eq(true)
expect(@host.maintenance_reason).to eq("CFscaledown")

@host.unset_node_maintenance
EmsRefresh.refresh(@ems)
@ems.reload
@host.reload
expect(@host.maintenance).to eq(false)
expect(@host.maintenance_reason).to eq(nil)
context "with overcloud" do
before(:each) do
@provider = FactoryGirl.create(:provider_openstack, :name => "undercloud")
@cloud = FactoryGirl.create(:ems_openstack, :zone => @zone, :hostname => "172.16.23.10",
:ipaddress => "172.16.23.10", :port => 5000, :api_version => 'v2',
:security_protocol => 'no-ssl', :provider => @provider)
@ems.provider = @provider
@cloud.update_authentication(
:default => {:userid => "admin", :password => "6220ebad3efea28fc31da81911ffa99e077bc437"})
end

it "will perform a full refresh" do
2.times do # Run twice to verify that a second run with existing data does not change anything
@ems.reload
@cloud.reload
# Caching OpenStack info between runs causes the tests to fail with:
# VCR::Errors::UnusedHTTPInteractionError
# Reset the cache so HTTP interactions are the same between runs.
@ems.reset_openstack_handle
@cloud.reset_openstack_handle

# We need VCR to match requests differently here because fog adds a dynamic
# query param to avoid HTTP caching - ignore_awful_caching##########
# https://github.com/fog/fog/blob/master/lib/fog/openstack/compute.rb#L308
VCR.use_cassette("#{described_class.name.underscore}_rhos_juno_tripleo",
:match_requests_on => [:method, :host, :path]) do
EmsRefresh.refresh(@ems)
EmsRefresh.refresh(@ems.network_manager)
EmsRefresh.refresh(@cloud)
end
@ems.reload
@cloud.reload

assert_table_counts_with_overcloud
assert_ems
# assert_specific_host
assert_specific_public_template
end
end
end

def assert_table_counts

def assert_table_counts_without_overcloud
expect(ExtManagementSystem.count).to eq 2
expect(Vm.count).to eq 0
assert_table_counts
end

def assert_table_counts_with_overcloud
expect(ExtManagementSystem.count).to eq 4
expect(Vm.count).to eq 7
assert_table_counts
end

def assert_table_counts
expect(EmsCluster.count).to be > 0
expect(Host.count).to be > 0
expect(OrchestrationStack.count).to be > 0
@@ -82,7 +138,6 @@ def assert_table_counts
expect(Hardware.count).to be > 0
expect(Disk.count).to be > 0
expect(ResourcePool.count).to eq 0
expect(Vm.count).to eq 0
expect(CustomAttribute.count).to eq 0
expect(CustomizationSpec.count).to eq 0
# expect(GuestDevice.count).to eq > 0
@@ -143,7 +198,8 @@ def assert_specific_host
)

expect(@host.private_networks.count).to be > 0
expect(@host.private_networks.first).to be_kind_of(ManageIQ::Providers::Openstack::NetworkManager::CloudNetwork::Private)
expect(@host.private_networks.first).to be_kind_of(
ManageIQ::Providers::Openstack::NetworkManager::CloudNetwork::Private)
expect(@host.network_ports.count).to be > 0
expect(@host.network_ports.first).to be_kind_of(ManageIQ::Providers::Openstack::NetworkManager::NetworkPort)
