Fix queueing of historical metrics collection #14695
Changes from 2 commits
@@ -46,20 +46,28 @@ def perf_capture_queue(interval_name, options = {})
log_target = "#{self.class.name} name: [#{name}], id: [#{id}]"

# Determine the items to queue up
# cb is the task used to group cluster realtime metrics
cb = nil
items = []
if interval_name == 'historical'
  start_time = Metric::Capture.historical_start_time if start_time.nil?
  end_time = Time.now.utc if end_time.nil?
  end_time ||= 1.day.from_now.utc.beginning_of_day # Ensure no more than one historical collection is queued up in the same day
  items = split_capture_intervals(interval_name, start_time, end_time)
else
  start_time = last_perf_capture_on unless start_time
  end_time = Time.now.utc unless end_time
  # if last_perf_capture_on is earlier than 4.hours.ago.beginning_of_day,
  # then create *one* realtime capture for start_time = 4.hours.ago.beginning_of_day (no end_time)
  # and create historical captures for each day from last_perf_capture_on until 4.hours.ago.beginning_of_day
  realtime_cut_off = 4.hours.ago.utc.beginning_of_day
  if last_perf_capture_on && last_perf_capture_on < realtime_cut_off
    items = split_capture_intervals("historical", last_perf_capture_on, realtime_cut_off)
  end
  # push realtime item to the front of the array
  items.unshift([interval_name, realtime_cut_off])
Review comment: I think we only want to do this when the time is outside the realtime_cut_off, that is, I think something like:

realtime_cut_off = 4.hours.ago.utc.beginning_of_day
if last_perf_capture_on && last_perf_capture_on < realtime_cut_off
  items = [[interval_name, realtime_cut_off]] +
          split_capture_intervals("historical", last_perf_capture_on, realtime_cut_off)
else
  items = [interval_name]
end

Review comment: In the case of VMware this will be fine, I think, since there is a maximum amount of history we can get from the provider. So, without a start date, I think it will try to get the maximum amount of data available. But, I'm not positive. Edit: And that leads us back to the "we should fix this in …"
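The split_capture_intervals helper discussed above is not shown in this hunk. As a rough sketch of the behavior the diff relies on (the helper name comes from the PR, but this body and signature are assumptions, written in plain Ruby without ActiveSupport), it breaks a [start_time, end_time) span into one [interval_name, chunk_start, chunk_end] work item per day:

```ruby
DAY = 24 * 60 * 60 # seconds; stand-in for ActiveSupport's 1.day

# Assumed sketch of split_capture_intervals: one work item per day-sized chunk,
# returned newest first so the most recent gap is collected before older history.
def split_capture_intervals(interval_name, start_time, end_time)
  items = []
  chunk_start = start_time
  while chunk_start < end_time
    chunk_end = [chunk_start + DAY, end_time].min
    items << [interval_name, chunk_start, chunk_end]
    chunk_start = chunk_end
  end
  items.reverse
end

# Three days of backlog become three one-day "historical" work items
items = split_capture_intervals("historical", Time.utc(2017, 4, 1), Time.utc(2017, 4, 4))
items.length   # => 3
items.first[1] # => 2017-04-03 00:00:00 UTC (newest chunk queued first)
```

Under this reading, queueing each day separately is what lets a single stalled day be retried without re-collecting the whole range.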
cb = {:class_name => self.class.name, :instance_id => id, :method_name => :perf_capture_callback, :args => [[task_id]]} if task_id
end

# Determine what items we should be queuing up
items = [interval_name]
items = split_capture_intervals(interval_name, start_time, end_time) if start_time

# Queue up the actual items
queue_item = {
  :class_name => self.class.name,
@@ -78,15 +86,17 @@ def perf_capture_queue(interval_name, options = {})
if msg.nil?
  qi[:priority] = priority
  qi.delete(:state)
  qi[:miq_callback] = cb if cb
  if cb and item_interval == "realtime"
Review comment: Prefer …
    qi[:miq_callback] = cb
  end
  qi
elsif msg.state == "ready" && (task_id || MiqQueue.higher_priority?(priority, msg.priority))
  qi[:priority] = priority
  # rerun the job (either with new task or higher priority)
  qi.delete(:state)
  if task_id
    existing_tasks = (((msg.miq_callback || {})[:args] || []).first) || []
    qi[:miq_callback] = cb.merge(:args => [existing_tasks + [task_id]])
    qi[:miq_callback] = cb.merge(:args => [existing_tasks + [task_id]]) if item_interval == "realtime"
Review comment: If we can get rid of the check on upgrading priority, then it will be easier to go over to … Real-time will potentially still need this.
  end
  qi
else
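The existing_tasks line above folds the new task_id into the already-queued message's callback, so one capture run can notify every task waiting on it instead of queueing a duplicate. A minimal sketch of just that merge step (merge_callback and the plain hashes here are illustrative stand-ins, not part of the PR):

```ruby
# Hypothetical helper mirroring the merge above: pull the accumulated task-id
# list out of the queued message's callback and append the new task_id.
def merge_callback(existing_callback, new_callback, task_id)
  existing_tasks = (((existing_callback || {})[:args] || []).first) || []
  new_callback.merge(:args => [existing_tasks + [task_id]])
end

cb           = {:method_name => :perf_capture_callback, :args => [[42]]}
msg_callback = {:args => [[7, 9]]} # callback already on the queued message

merged = merge_callback(msg_callback, cb, 42)
merged[:args] # => [[7, 9, 42]] : one capture run now notifies tasks 7, 9 and 42
```

The nested nil-guards matter because a pre-existing message may have no callback at all, in which case the merge degrades to a fresh single-task callback.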
@@ -177,6 +187,8 @@ def perf_capture(interval_name, start_time = nil, end_time = nil)
if start_range.nil?
  _log.info "#{log_header} Skipping processing for #{log_target} as no metrics were captured."
  # Set the last capture on to end_time to prevent forever queueing up the same collection range
  update_attribute(:last_perf_capture_on, end_time) if interval_name == 'realtime'
Review comment: This needs to be …
else
  if expected_start_range && start_range > expected_start_range
    _log.warn "#{log_header} For #{log_target}, expected to get data as of [#{expected_start_range}], but got data as of [#{start_range}]."
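The update_attribute guard above exists so that an empty capture still advances the last_perf_capture_on bookmark. A toy simulation of why that matters (capture_range and the dates are illustrative, not from the PR):

```ruby
# If a capture returns no data and last_perf_capture_on is left untouched, the
# next perf_capture_queue run rebuilds the exact same range and loops on it
# forever. Advancing the bookmark to end_time moves the window forward.
def capture_range(last_capture_on, now)
  [last_capture_on, now]
end

last_capture_on = Time.utc(2017, 4, 1)
first_run = capture_range(last_capture_on, Time.utc(2017, 4, 2))

# No metrics came back: still advance the bookmark to the end of the attempt.
last_capture_on = first_run.last
second_run = capture_range(last_capture_on, Time.utc(2017, 4, 3))

second_run.first == first_run.last # => true : the empty range is not retried
```

The interval_name == 'realtime' condition restricts this to the realtime path; as the inline comment above notes, the reviewer questions exactly that guard.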
Review comment: +1