XML document element count exceeds configured maximum 500000 #65

Closed
pypb opened this issue Nov 16, 2018 · 34 comments

pypb commented Nov 16, 2018

When collecting metrics from a fairly large vCenter (5000+ VMs), vsphere-graphite fails with the error:

Error: ServerFaultCode: XML document element count exceeds configured maximum 500000

Complete log:

2018/11/16 13:31:11 Version information: - - b926f35 (clean)
2018/11/16 13:31:11 Starting daemon: vsphere-graphite
2018/11/16 13:31:11 Initializing vCenter vcenter.host.domain
2018/11/16 13:31:11 connecting to vcenter: vcenter.host.domain
2018/11/16 13:31:11 disconnecting from vcenter: vcenter.host.domain
2018/11/16 13:31:11 Intializing graphite backend
2018/11/16 13:31:11 Retrieving metrics
2018/11/16 13:31:11 Setting up query inventory of vcenter:  vcenter.host.domain
2018/11/16 13:31:11 connecting to vcenter: vcenter.host.domain
2018/11/16 13:31:15 Issuing 5223 queries to vcenter vcenter requesting 161298 metrics.
2018/11/16 13:31:16 Could not request perfs from vcenter: vcenter
2018/11/16 13:31:16 Error:  ServerFaultCode: XML document element count exceeds configured maximum 500000

while parsing property "counterId" of static type int

while parsing serialized DataObject of type vim.PerformanceManager.MetricId
at line 2, column 11873447

while parsing property "metricId" of static type ArrayOfPerfMetricId

while parsing serialized DataObject of type vim.PerformanceManager.QuerySpec
at line 2, column 11871573

while parsing call information for method QueryPerf
at line 2, column 66

while parsing SOAP body
at line 2, column 60

while parsing SOAP envelope
at line 2, column 0

while parsing HTTP request for method queryStats
on object of type vim.PerformanceManager
at line 1, column 0
2018/11/16 13:31:16 disconnecting from vcenter: vcenter.host.domain
2018/11/16 13:31:21 Sent 0 logs to backend
2018/11/16 13:31:21 Memory usage : sys=65.5M alloc=9.3M
cblomart (Owner) commented

Looks like this is a limitation on the vCenter side.

I would think a solution is to limit the number of managed objects that metrics are collected from.


pypb commented Nov 19, 2018

Could you elaborate?

I've tried reducing the collected metrics to the bare minimum we want to measure; unfortunately, that is not enough to make the XML reply sufficiently small. And since our number of VMs grows steadily, it would only be a matter of time until we hit the limit again anyway.

Would it be possible to divide the queries into batches?

cblomart (Owner) commented

Right @pypb, that is what I had in mind: limiting the number of managed objects (VMs/hosts) per request by batching them (e.g. per 4000).
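
For illustration, a minimal sketch of that kind of fixed-size batching, assuming the queries are govmomi types.PerfQuerySpec values; the helper name and batch size are illustrative, not the project's actual code:

    package example

    import "github.com/vmware/govmomi/vim25/types"

    // splitQueries cuts the performance query specs into batches of at most
    // batchSize entries (e.g. 4000), so each QueryPerf call stays small enough
    // for vCenter's element limit. Illustrative helper only.
    func splitQueries(queries []types.PerfQuerySpec, batchSize int) [][]types.PerfQuerySpec {
        var batches [][]types.PerfQuerySpec
        for start := 0; start < len(queries); start += batchSize {
            end := start + batchSize
            if end > len(queries) {
                end = len(queries)
            }
            batches = append(batches, queries[start:end])
        }
        return batches
    }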


MnrGreg commented Dec 18, 2018

@pypb what does your vCenter topology look like? I've managed to work around this by appending only the desired vSphere datacenters and ignoring the rest:

for _, child := range dcs {
    datacenters = append(datacenters, child.Reference())
}

Similarly, I'm only appending points where the cluster string matches one of my desired clusters.

vmhost, cluster := cache.FindHostAndCluster(vcName, pem.Entity.Value)

This also helped improve sampling performance and reduce long-term storage requirements.

@cblomart, perhaps a 'filters' section could be added to vsphere-graphite.json for datacenters and clusters?
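
For illustration, a rough sketch of such a datacenter allow-list in Go, assuming the datacenters come from govmomi's finder as in the snippet above; the function name and the configuration handling are hypothetical, not part of vsphere-graphite:

    package example

    import (
        "github.com/vmware/govmomi/object"
        "github.com/vmware/govmomi/vim25/types"
    )

    // filterDatacenters keeps only the datacenters whose name appears in the
    // allow-list, mirroring the manual patch above. If the allow-list is empty,
    // everything is kept. Hypothetical helper, not the project's code.
    func filterDatacenters(dcs []*object.Datacenter, allowed map[string]bool) []types.ManagedObjectReference {
        var datacenters []types.ManagedObjectReference
        for _, child := range dcs {
            if len(allowed) > 0 && !allowed[child.Name()] {
                continue // skip datacenters that were not explicitly requested
            }
            datacenters = append(datacenters, child.Reference())
        }
        return datacenters
    }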

cblomart (Owner) commented

I wanted to think about how to overcome the limit itself.
Filtering would effectively be a workaround. I will take time to look at it.
In my view, the optimal way to filter would be via vCenter rights: that way the objects are never known to the vsphere-graphite daemon, so no filtering is needed.


pypb commented Dec 22, 2018

@MnrGreg We do have multiple datacenters in vCenter; however, the largest one has by far the most VMs. Filtering out all the others would most likely not make a difference.

@cblomart Interesting idea; we have a cluster or two I could filter out using permissions. I'll give it a try.

cblomart (Owner) commented

I worked on a branch to overcome this limit.

I tested it successfully using this docker image:
# docker run -d -v /etc/vsphere-graphite.json:/etc/vsphere-graphite.json cblomart/vsphere-graphite:50ab88b

@MnrGreg and certainly @pypb, could you confirm that it works?

One detail is that the number of returned metrics needs to be estimated, and it changes depending on whether or not instances are requested. To account for that I introduced an estimation ratio of 1.5 times the number of requested metrics.

We will need to validate whether this is sufficient.
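
A minimal sketch of that estimate, assuming the 500000-element limit and a configurable instance ratio; the constant and function names are illustrative, and the element-limit branch may compute this differently:

    package example

    import "math"

    // maxElementCount is the vCenter-side limit this issue keeps hitting.
    const maxElementCount = 500000

    // neededBatches estimates how many QueryPerf batches are required so that
    // the expected number of returned elements (requested metrics times the
    // instance ratio, here 1.5) stays under the limit.
    func neededBatches(requestedMetrics int, instanceRatio float64) int {
        expected := float64(requestedMetrics) * instanceRatio
        n := int(math.Ceil(expected / maxElementCount))
        if n < 1 {
            n = 1
        }
        return n
    }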


cblomart commented Jan 8, 2019

@MnrGreg, @pypb any news on this?


pypb commented Jan 8, 2019

@MnrGreg, @pypb any news on this?

Sorry, I've been on leave over the holidays and just got back yesterday. I hope I get a chance to try it out later this week. I'll let you know!


pypb commented Jan 10, 2019

@cblomart Unfortunately, it's still the same error:

2019/01/10 14:31:40 Version information: - - 50ab88b (clean)
../..
2019/01/10 14:31:41 Error:  ServerFaultCode: XML document element count exceeds configured maximum 500000

while parsing property "counterId" of static type int

while parsing serialized DataObject of type vim.PerformanceManager.MetricId
at line 2, column 11822256

while parsing property "metricId" of static type ArrayOfPerfMetricId

while parsing serialized DataObject of type vim.PerformanceManager.QuerySpec
at line 2, column 11820392

while parsing call information for method QueryPerf
at line 2, column 66

while parsing SOAP body
at line 2, column 60

while parsing SOAP envelope
at line 2, column 0

while parsing HTTP request for method queryStats
on object of type vim.PerformanceManager
at line 1, column 0

cblomart (Owner) commented

Can you provide more extended logs?
I would like to see how many metrics it expects back and whether it effectively splits the requests.
Depending on what I see, I may need to change the code or add logging.


pypb commented Jan 11, 2019

Certainly, here's the full session log:

2019/01/11 07:48:00 Version information: - - 50ab88b (clean)
2019/01/11 07:48:00 Starting daemon: vsphere-graphite
2019/01/11 07:48:00 Initializing vCenter vcenter.domain
2019/01/11 07:48:00 connecting to vcenter: vcenter.domain
2019/01/11 07:48:00 disconnecting from vcenter: vcenter.domain
2019/01/11 07:48:00 Intializing graphite backend
2019/01/11 07:48:00 Retrieving metrics
2019/01/11 07:48:00 Setting up query inventory of vcenter:  vcenter.domain
2019/01/11 07:48:00 connecting to vcenter: vcenter.domain
2019/01/11 07:48:01 Queries generated:
2019/01/11 07:48:01 5550 queries to vcenter vcenter
2019/01/11 07:48:01 171435 total metricIds from vcenter vcenter
2019/01/11 07:48:01 257153 total counter from vcenter vcenter (accounting for 1.5 instances ratio)
2019/01/11 07:48:01 Could not request perfs from vcenter: vcenter
2019/01/11 07:48:01 Error:  ServerFaultCode: XML document element count exceeds configured maximum 500000

while parsing property "counterId" of static type int

while parsing serialized DataObject of type vim.PerformanceManager.MetricId
at line 2, column 11822256

while parsing property "metricId" of static type ArrayOfPerfMetricId

while parsing serialized DataObject of type vim.PerformanceManager.QuerySpec
at line 2, column 11820392

while parsing call information for method QueryPerf
at line 2, column 66

while parsing SOAP body
at line 2, column 60

while parsing SOAP envelope
at line 2, column 0

while parsing HTTP request for method queryStats
on object of type vim.PerformanceManager
at line 1, column 0
2019/01/11 07:48:10 Sent 0 logs to backend
2019/01/11 07:48:10 Memory usage : sys=68.7M alloc=1.9M
^C2019/01/11 07:48:13 Got signal: interrupt
2019/01/11 07:48:13 Disconnecting from graphite

cblomart (Owner) commented

2019/01/11 07:48:01 5550 queries to vcenter vcenter
2019/01/11 07:48:01 171435 total metricIds from vcenter vcenter
2019/01/11 07:48:01 257153 total counter from vcenter vcenter (accounting for 1.5 instances ratio)

My interpretation of the issue is that vCenter returns more metrics because points may be requested for each instance (vCPU, NIC, disk, ...), and I introduced a ratio to take that into account. Based on that estimate, it splits the requests into the minimum number of threads needed to respect the 500000-element limit.

So here there are around 5550 objects in vCenter (VMs & hosts) and 171435 different metrics requested.
With the current ratio (1.5) it expects 257153 points back but receives more than 500000.

Looks like the ratio is at least 500000/171435 ≈ 3.
I will adapt it in the code and add some logging to show how much each thread returned.

cblomart (Owner) commented

A new docker image of the element-limit branch is available at "cblomart/vsphere-graphite:9c1e612"


pypb commented Jan 11, 2019

A new docker image of the element-limit branch is available at "cblomart/vsphere-graphite:9c1e612"

Now it's able to fetch metrics, but the output to Graphite contains metrics for only 2003 VMs and no hosts. The vCenter contains 5400 VMs, of which 4127 are currently powered on, plus 123 hosts.

2019/01/11 11:20:44 Version information: - - 9c1e612 (clean)
2019/01/11 11:20:44 Starting daemon: vsphere-graphite
2019/01/11 11:20:44 Initializing vCenter vcenter.domain
2019/01/11 11:20:44 connecting to vcenter: vcenter.domain
2019/01/11 11:20:45 disconnecting from vcenter: vcenter.domain
2019/01/11 11:20:45 Intializing graphite backend
2019/01/11 11:20:45 Retrieving metrics
2019/01/11 11:20:45 Setting up query inventory of vcenter:  vcenter.domain
2019/01/11 11:20:45 connecting to vcenter: vcenter.domain
2019/01/11 11:20:45 Queries generated:
2019/01/11 11:20:45 5549 queries to vcenter vcenter
2019/01/11 11:20:45 171404 total metricIds from vcenter vcenter
2019/01/11 11:20:45 514212 total counter from vcenter vcenter (accounting for 3 instances ratio)
2019/01/11 11:20:45 2775 threads generated to execute queries
2019/01/11 11:20:48 Thread 1 returned 2005 metrics
2019/01/11 11:20:48 VM vm-20952 has no host.
2019/01/11 11:20:48 VM vm-24471 has no host.
(repeated for 4010 VMs)
2019/01/11 11:20:48 Sent 10000 logs to backend
2019/01/11 11:20:48 Sent 10000 logs to backend
2019/01/11 11:20:48 Sent 10000 logs to backend
2019/01/11 11:20:48 Sent 10000 logs to backend
2019/01/11 11:20:48 Sent 10000 logs to backend
2019/01/11 11:20:48 Sent 10000 logs to backend
2019/01/11 11:20:48 Sent 10000 logs to backend
2019/01/11 11:20:48 Sent 10000 logs to backend
2019/01/11 11:20:48 Sent 10000 logs to backend
2019/01/11 11:20:48 Sent 10000 logs to backend
2019/01/11 11:20:48 Sent 10000 logs to backend
2019/01/11 11:20:48 Sent 10000 logs to backend
2019/01/11 11:20:48 Sent 10000 logs to backend
2019/01/11 11:20:48 Sent 10000 logs to backend
2019/01/11 11:20:48 Sent 10000 logs to backend
2019/01/11 11:20:48 Sent 10000 logs to backend
2019/01/11 11:20:49 Sent 10000 logs to backend
2019/01/11 11:20:49 Sent 10000 logs to backend
2019/01/11 11:20:49 Sent 10000 logs to backend
2019/01/11 11:20:49 Sent 10000 logs to backend
2019/01/11 11:20:49 Sent 10000 logs to backend
2019/01/11 11:20:54 Sent 7594 logs to backend
2019/01/11 11:20:54 Memory usage : sys=137.2M alloc=5.8M
2019/01/11 11:21:02 Got signal: interrupt
2019/01/11 11:21:02 Disconnecting from graphite

cblomart (Owner) commented

The first thing I want to check is why it appears to split into 2775 threads when two should be sufficient.
Concerning the lines with "Thread x returned y metrics", do you have more than one of them?

cblomart (Owner) commented

OK, the thread count is most probably a wrong variable being used... I corrected it... it still doesn't explain why you don't get everything coming back.


pypb commented Jan 11, 2019

Concerning the lines with "Thread x returned y metrics", do you have multiple of them?

No, just the one.

cblomart (Owner) commented

If only one thread comes to an end... this might explain why you don't have everything... the waitgroup should ensure every thread has finished before continuing...
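
For context, a generic sketch of the sync.WaitGroup pattern being described here: register every worker before launching it, and wait for all of them before moving on. The helper and its parameters are illustrative, not the project's code; the query callback stands in for the actual QueryPerf call.

    package example

    import (
        "log"
        "sync"
    )

    // runBatches runs one goroutine per batch and blocks until every one of
    // them has finished, so no thread's results are dropped.
    func runBatches(batches [][]string, query func(id int, batch []string) int) {
        var wg sync.WaitGroup
        for i, batch := range batches {
            wg.Add(1) // register the worker before launching it
            go func(id int, b []string) {
                defer wg.Done() // always mark the worker as finished
                n := query(id, b)
                log.Printf("Thread %d returned %d metrics", id+1, n)
            }(i, batch)
        }
        wg.Wait() // continue only after every worker has called Done
    }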

cblomart (Owner) commented

OK... I thought I had the sync groups right but apparently something was missing...
Can we see how the latest build works? (cblomart/vsphere-graphite:bda3df4)


cblomart commented Jan 12, 2019

I had to review a few things and make a new build:
cblomart/vsphere-graphite:142f905


pypb commented Jan 21, 2019

Sorry for the delay. I've tried build 142f905 and it seems to give about the same result: metrics for 2030 VMs returned and no hosts.

2019/01/21 15:09:10 Version information: NA - 142f905@element-limit (clean)
2019/01/21 15:09:10 Starting daemon: vsphere-graphite
2019/01/21 15:09:10 Initializing vCenter vcenter.domain
2019/01/21 15:09:10 connecting to vcenter: vcenter.domain
2019/01/21 15:09:10 disconnecting from vcenter: vcenter.domain
2019/01/21 15:09:10 Intializing graphite backend
2019/01/21 15:09:10 Retrieving metrics
2019/01/21 15:09:10 Setting up query inventory of vcenter:  vcenter.domain
2019/01/21 15:09:10 connecting to vcenter: vcenter.domain
2019/01/21 15:09:11 Queries generated:
2019/01/21 15:09:11 5584 queries to vcenter vcenter
2019/01/21 15:09:11 172474 total metricIds from vcenter vcenter
2019/01/21 15:09:11 517422 total counter from vcenter vcenter (accounting for 3 instances ratio)
2019/01/21 15:09:11 2 threads generated to execute queries
2019/01/21 15:09:11 Thread 1 requests 2792 metrics
2019/01/21 15:09:11 Thread 2 requests 2792 metrics
2019/01/21 15:09:11 Thread 2 requesting 2792 metrics
2019/01/21 15:09:11 Thread 1 requesting 2792 metrics
2019/01/21 15:09:13 Thread 1 returned 2030 metrics
2019/01/21 15:09:13 VM vm-24169 has no host.
2019/01/21 15:09:13 VM vm-24021 has no host.
2019/01/21 15:09:13 VM vm-20929 has no host.
2019/01/21 15:09:13 VM vm-24486 has no host.
2019/01/21 15:09:13 VM vm-20980 has no host.
2019/01/21 15:09:13 VM vm-23563 has no host.
2019/01/21 15:09:13 VM vm-22645 has no host.
2019/01/21 15:09:13 VM vm-22773 has no host.
2019/01/21 15:09:13 VM vm-24009 has no host.
2019/01/21 15:09:13 VM vm-5855 has no host.
2019/01/21 15:09:13 VM vm-20658 has no host.
2019/01/21 15:09:13 VM vm-17597 has no host.
2019/01/21 15:09:13 VM vm-23552 has no host.
2019/01/21 15:09:13 Thread 2 returned 2030 metrics
2019/01/21 15:09:13 VM vm-17276 has no host.
(lots more of these...)
2019/01/21 15:09:13 disconnecting from vcenter: vcenter.domain
2019/01/21 15:09:13 Sent 10000 logs to backend
2019/01/21 15:09:13 Sent 10000 logs to backend
2019/01/21 15:09:13 Sent 10000 logs to backend
2019/01/21 15:09:13 Sent 10000 logs to backend
2019/01/21 15:09:14 Sent 10000 logs to backend
2019/01/21 15:09:14 Sent 10000 logs to backend
2019/01/21 15:09:14 Sent 10000 logs to backend
2019/01/21 15:09:14 Sent 10000 logs to backend
2019/01/21 15:09:14 Sent 10000 logs to backend
2019/01/21 15:09:14 Sent 10000 logs to backend
2019/01/21 15:09:14 Sent 10000 logs to backend
2019/01/21 15:09:14 Sent 10000 logs to backend
2019/01/21 15:09:14 Sent 10000 logs to backend
2019/01/21 15:09:14 Sent 10000 logs to backend
2019/01/21 15:09:14 Sent 10000 logs to backend
2019/01/21 15:09:14 Sent 10000 logs to backend
2019/01/21 15:09:14 Sent 10000 logs to backend
2019/01/21 15:09:14 Sent 10000 logs to backend
2019/01/21 15:09:14 Sent 10000 logs to backend
2019/01/21 15:09:14 Sent 10000 logs to backend
2019/01/21 15:09:14 Sent 10000 logs to backend
2019/01/21 15:09:14 Sent 10000 logs to backend
2019/01/21 15:09:14 Sent 10000 logs to backend
2019/01/21 15:09:14 Sent 10000 logs to backend
2019/01/21 15:09:14 Sent 10000 logs to backend
2019/01/21 15:09:19 Sent 1000 logs to backend
2019/01/21 15:09:19 Memory usage : sys=137.2M alloc=5.8M
2019/01/21 15:09:42 Got signal: interrupt
2019/01/21 15:09:42 Disconnecting from graphite

cblomart (Owner) commented

Thanks... at least it confirms that it splits the requests among two threads... I will look into it further.

cblomart (Owner) commented

Just for our common understanding, the "VM vm-* has no host" message simply states that the link between a VM and its host cannot be found.
With Graphite this link cannot be stored anyway.

cblomart (Owner) commented

Looks like, between the two threads, metrics for 4060 objects were collected. Does this match your expectation?
In the latest code (I still need to bump the CI/CD on it as it had an issue) I tried to give more clarity on which properties are collected, and removed the warning about "vm has no host" since in the Graphite case it has no impact (Graphite doesn't store the host).

If I understand you correctly, while collection happens, some VMs are missing and no metrics for hosts are collected.


cblomart commented Feb 5, 2019

I guess that despite all the changes made to the code in the element-limit branch you are still in this situation?


pypb commented Feb 7, 2019

Sorry for the delay. If I remember correctly, with build 142f905 the output to Graphite did not contain everything. But I should verify; I'll get back to you.


pypb commented Feb 25, 2019

OK, I have finally had time to double-check my results.

I've redone the tests with build 142f905. Our vCenter now has 6095 VMs, most of them powered on. In the Graphite output I get metrics for 2579 VMs and 0 hosts. vsphere-graphite logs "Thread 1 returned 2005 metrics" and "Thread 2 returned 2005 metrics".

cblomart (Owner) commented

Thanks again for taking the time on this @pypb.

There has been work done on other Graphite-related issues (#78, #77, #79) where not all metrics were returned.
I brought the element-limit branch up to date with those corrections.

I also added more logging to show the number of objects requested and returned by each thread (VMs and hosts).

This should be present in 063c6f2


pypb commented Feb 25, 2019

OK, I've run a test with build 063c6f2. It looks like both threads have the same working set: they work on the exact same number of objects, which is also equal to the total number of VMs I see in the Graphite output (2710 VMs, 0 hosts). In fact, now that I look at the output, all metrics are doubled for each VM.

Here's the log:

2019/02/25 11:06:09 Version information: NA - 063c6f2@element-limit (clean)
2019/02/25 11:06:09 Starting daemon: vsphere-graphite
2019/02/25 11:06:09 main: requested properties 
2019/02/25 11:06:09 vcenter vcenter: initializing
2019/02/25 11:06:09 vcenter vcenter: connecting
2019/02/25 11:06:09 vcenter vcenter: disconnecting
2019/02/25 11:06:09 backend graphite: intializing
2019/02/25 11:06:09 main: properties filtered to  (no metadata in backend)
2019/02/25 11:06:09 Retrieving metrics
2019/02/25 11:06:09 vcenter vcenter: setting up query inventory
2019/02/25 11:06:09 vcenter vcenter: connecting
2019/02/25 11:06:10 vcenter vcenter: skipped 825 objects because they are either not connected or not powered on
2019/02/25 11:06:10 vcenter vcenter: 5421 objects (5287 vm and 134 hosts)
2019/02/25 11:06:10 vcenter vcenter: queries generated
2019/02/25 11:06:10 vcenter vcenter: 5421 queries
2019/02/25 11:06:10 vcenter vcenter: 167381 total metricIds
2019/02/25 11:06:10 vcenter vcenter: 502143 total counter (accounting for 3 instances ratio)
2019/02/25 11:06:10 2 threads generated to execute queries
2019/02/25 11:06:10 Thread 1 requests 2711 metrics
2019/02/25 11:06:10 Thread 2 requests 2710 metrics
2019/02/25 11:06:10 vcenter vcenter thread 2: requesting 2710 metrics
2019/02/25 11:06:10 vcenter vcenter thread 1: requesting 2710 metrics
2019/02/25 11:06:14 vcenter vcenter thread 2: retuned 2710 metrics
2019/02/25 11:06:14 vcenter vcenter thread 1: retuned 2710 metrics
2019/02/25 11:06:14 vcenter vcenter: disconnecting
2019/02/25 11:06:14 vcenter vcenter thread 2: 2710 objects (2710 vm and 0 hosts)
2019/02/25 11:06:14 vcenter vcenter thread 1: 2710 objects (2710 vm and 0 hosts)
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:14 sent 10000 logs to backend
2019/02/25 11:06:19 sent last 7996 logs to backend
2019/02/25 11:06:19 Memory usage : sys=204.8M alloc=12.4M
2019/02/25 11:06:35 Got signal: interrupt
2019/02/25 11:06:35 Disconnecting from graphite
Daemon was interrupted by system signal

cblomart (Owner) commented

I have added some logging and reviewed "pointer" usage in the definition of batches.
This should be available in cb34cea
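
For context, a generic illustration of the kind of "pointer" pitfall that can make every worker see the same batch (the symptom reported above), assuming the batches were collected through the loop variable; this is not the project's actual code:

    package main

    import "fmt"

    type batch struct{ queries []int }

    func main() {
        batches := []batch{{queries: []int{1, 2}}, {queries: []int{3, 4}}}

        // Buggy (before Go 1.22): &b aliases the single loop variable, so every
        // pointer ends up referring to the last batch, i.e. the "same working
        // set" symptom seen in the logs above.
        var wrong []*batch
        for _, b := range batches {
            wrong = append(wrong, &b)
        }
        fmt.Println(*wrong[0], *wrong[1]) // with Go < 1.22: {[3 4]} {[3 4]}

        // Fixed: point at the slice elements themselves (or copy by value).
        var right []*batch
        for i := range batches {
            right = append(right, &batches[i])
        }
        fmt.Println(*right[0], *right[1]) // {[1 2]} {[3 4]}
    }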


pypb commented Mar 5, 2019

Now we're talking! I'm getting metrics for 130 hosts and 5344 VMs, which matches the number of connected hosts and powered-on VMs.

2019/03/05 11:19:25 Version information: NA - cb34cea@element-limit (clean)
2019/03/05 11:19:25 Starting daemon: vsphere-graphite
2019/03/05 11:19:25 main: requested properties 
2019/03/05 11:19:25 vcenter vcenter: initializing
2019/03/05 11:19:25 vcenter vcenter: connecting
2019/03/05 11:19:25 vcenter vcenter: disconnecting
2019/03/05 11:19:25 backend graphite: intializing
2019/03/05 11:19:25 main: properties filtered to  (no metadata in backend)
2019/03/05 11:19:25 Retrieving metrics
2019/03/05 11:19:25 vcenter vcenter: setting up query inventory
2019/03/05 11:19:25 vcenter vcenter: connecting
2019/03/05 11:19:26 vcenter vcenter: skipped 658 objects because they are either not connected or not powered on
2019/03/05 11:19:26 vcenter vcenter: 5474 objects (5344 vm and 130 hosts)
2019/03/05 11:19:26 vcenter vcenter: queries generated
2019/03/05 11:19:26 vcenter vcenter: 5474 queries
2019/03/05 11:19:26 vcenter vcenter: 169044 total metricIds
2019/03/05 11:19:26 vcenter vcenter: 507132 total counter (accounting for 3 instances ratio)
2019/03/05 11:19:26 vcenter vcenter: created batch 0 from queries 1 - 2737
2019/03/05 11:19:26 vcenter vcenter: created batch 1 from queries 2738 - 5474
2019/03/05 11:19:26 vcenter vcenter: 2 threads generated to execute queries
2019/03/05 11:19:26 vcname vcenter: thread 1 requests 2737 metrics
2019/03/05 11:19:26 vcname vcenter: thread 2 requests 2737 metrics
2019/03/05 11:19:26 vcenter vcenter thread 2: requesting 2737 metrics
2019/03/05 11:19:26 vcenter vcenter thread 1: requesting 2737 metrics
2019/03/05 11:19:30 vcenter vcenter thread 2: retuned 2737 metrics
2019/03/05 11:19:30 vcenter vcenter thread 2: 2737 objects (2737 vm and 0 hosts)
2019/03/05 11:19:30 sent 10000 logs to backend
2019/03/05 11:19:30 sent 10000 logs to backend
2019/03/05 11:19:30 sent 10000 logs to backend
2019/03/05 11:19:30 sent 10000 logs to backend
2019/03/05 11:19:30 sent 10000 logs to backend
2019/03/05 11:19:30 sent 10000 logs to backend
2019/03/05 11:19:30 sent 10000 logs to backend
2019/03/05 11:19:30 sent 10000 logs to backend
2019/03/05 11:19:30 sent 10000 logs to backend
2019/03/05 11:19:30 sent 10000 logs to backend
2019/03/05 11:19:30 sent 10000 logs to backend
2019/03/05 11:19:30 sent 10000 logs to backend
2019/03/05 11:19:30 sent 10000 logs to backend
2019/03/05 11:19:30 sent 10000 logs to backend
2019/03/05 11:19:30 sent 10000 logs to backend
2019/03/05 11:19:30 sent 10000 logs to backend
2019/03/05 11:19:30 sent 10000 logs to backend
2019/03/05 11:19:30 vcenter vcenter thread 1: retuned 2737 metrics
2019/03/05 11:19:30 vcenter vcenter: disconnecting
2019/03/05 11:19:30 sent 10000 logs to backend
2019/03/05 11:19:30 vcenter vcenter thread 1: 2737 objects (2607 vm and 130 hosts)
2019/03/05 11:19:30 sent 10000 logs to backend
2019/03/05 11:19:30 sent 10000 logs to backend
2019/03/05 11:19:31 sent 10000 logs to backend
2019/03/05 11:19:31 sent 10000 logs to backend
2019/03/05 11:19:31 sent 10000 logs to backend
2019/03/05 11:19:31 sent 10000 logs to backend
2019/03/05 11:19:31 sent 10000 logs to backend
2019/03/05 11:19:31 sent 10000 logs to backend
2019/03/05 11:19:31 sent 10000 logs to backend
2019/03/05 11:19:31 sent 10000 logs to backend
2019/03/05 11:19:31 sent 10000 logs to backend
2019/03/05 11:19:31 sent 10000 logs to backend
2019/03/05 11:19:31 sent 10000 logs to backend
2019/03/05 11:19:31 sent 10000 logs to backend
2019/03/05 11:19:31 sent 10000 logs to backend
2019/03/05 11:19:31 sent 10000 logs to backend
2019/03/05 11:19:31 sent 10000 logs to backend
2019/03/05 11:19:31 sent 10000 logs to backend
2019/03/05 11:19:31 sent 10000 logs to backend
2019/03/05 11:19:31 sent 10000 logs to backend
2019/03/05 11:19:36 sent last 8155 logs to backend
2019/03/05 11:19:36 Memory usage : sys=204.6M alloc=12.1M
2019/03/05 11:20:12 Got signal: interrupt
2019/03/05 11:20:12 Disconnecting from graphite
Daemon was interrupted by system signal


cblomart commented Mar 5, 2019

Thanks... good to be back on track... pointers and go func...
I will merge the changes into the master branch and call it a release...


cblomart commented Mar 5, 2019

merged #81.
