# json_v2 wrong timestamp on records #10651
I added the timestamp column as a tag:

```
metric=weight field=weight created=1.644528693201e+12 customer_name=Shakey's host=$HOSTNAME imei=356611075967160 serial_number=0004G topic=telegraf-test2 52.35499954223633 1644826706
metric=weight field=weight created=1.644820026857e+12 customer_name=Shakey's host=$HOSTNAME imei=356611075967160 serial_number=0004G topic=telegraf-test2 69.56500244140625 1644820026
```

As you can see, the timestamp columns clearly don't match on the first line: the `created` tag and the trailing timestamp disagree, while on the second line they agree.
More debugging: I added this to parser.go in json_v2 (around line 115):

```go
p.timestamp = time.Now()
if c.TimestampPath != "" {
	result := gjson.GetBytes(input, c.TimestampPath)
	test, err := strconv.ParseInt(result.String(), 10, 64)
	ts := result.Int()
	if err != nil {
		p.Log.Debugf("Error parsing string ts")
	}
	//p.Log.Debugf("Timestamp type: %s / %s / %s", result.Type, result.String(), test)
	//p.Log.Debugf("Message: %s", input)
	if result.Type == gjson.Null {
		p.Log.Debugf("Message: %s", input)
		return nil, fmt.Errorf(GJSONPathNUllErrorMSG)
	}
	if !result.IsArray() && !result.IsObject() {
		if c.TimestampFormat == "" {
			err := fmt.Errorf("use of 'timestamp_query' requires 'timestamp_format'")
			return nil, err
		}

		var err error
		p.timestamp, err = internal.ParseTimestamp(c.TimestampFormat, ts, c.TimestampTimezone)
		if err != nil {
			return nil, err
		}
	}

	if test != p.timestamp.UnixMilli() || ts != p.timestamp.UnixMilli() {
		p.Log.Debugf("%s / %s / %s / %s", p.timestamp, p.timestamp.UnixMilli(), test, ts)

		p.Log.Debugf("TIMESTAMP DOESNT MATCH!!")
		p.Log.Debugf("TIMESTAMP DOESNT MATCH!!")
		p.Log.Debugf("TIMESTAMP DOESNT MATCH!!")
		p.Log.Debugf("%s", p.timestamp.UnixMilli())
		p.Log.Debugf("%s", test)
		p.Log.Debugf("%s", ts)
		p.Log.Debugf("%s", p.timestamp.UnixMilli()-test)
		p.Log.Debugf("%s", p.timestamp.UnixMilli()-ts)
	}
}
```

Output:

```
2022-02-15T07:46:17Z D! [parsers.json_v2::kafka_consumer] 2022-01-15 15:21:25.256 +0000 UTC / %!s(int64=1642260085256) / %!s(int64=1641975171000) / %!s(int64=1641975171000)
2022-02-15T07:46:17Z D! [parsers.json_v2::kafka_consumer] TIMESTAMP DOESNT MATCH!!
2022-02-15T07:46:17Z D! [parsers.json_v2::kafka_consumer] TIMESTAMP DOESNT MATCH!!
2022-02-15T07:46:17Z D! [parsers.json_v2::kafka_consumer] TIMESTAMP DOESNT MATCH!!
2022-02-15T07:46:17Z D! [parsers.json_v2::kafka_consumer] %!s(int64=1644911177034)
2022-02-15T07:46:17Z D! [parsers.json_v2::kafka_consumer] %!s(int64=1641975171000)
2022-02-15T07:46:17Z D! [parsers.json_v2::kafka_consumer] %!s(int64=1641975171000)
2022-02-15T07:46:17Z D! [parsers.json_v2::kafka_consumer] %!s(int64=284971882)
2022-02-15T07:46:17Z D! [parsers.json_v2::kafka_consumer] %!s(int64=284971882)
---
2022-02-15T07:48:27Z D! [parsers.json_v2::kafka_consumer] 2022-02-12 17:00:15 +0000 UTC / %!s(int64=1642366748911) / %!s(int64=1642315404036) / %!s(int64=1642315404036)
2022-02-15T07:48:27Z D! [parsers.json_v2::kafka_consumer] TIMESTAMP DOESNT MATCH!!
2022-02-15T07:48:27Z D! [parsers.json_v2::kafka_consumer] TIMESTAMP DOESNT MATCH!!
2022-02-15T07:48:27Z D! [parsers.json_v2::kafka_consumer] TIMESTAMP DOESNT MATCH!!
2022-02-15T07:48:27Z D! [parsers.json_v2::kafka_consumer] %!s(int64=1642366833244)
2022-02-15T07:48:27Z D! [parsers.json_v2::kafka_consumer] %!s(int64=1642315404036)
2022-02-15T07:48:27Z D! [parsers.json_v2::kafka_consumer] %!s(int64=1642315404036)
2022-02-15T07:48:27Z D! [parsers.json_v2::kafka_consumer] %!s(int64=2373416964)
2022-02-15T07:48:27Z D! [parsers.json_v2::kafka_consumer] %!s(int64=2595903061)
```
Tested with the PR; the problem still occurs.
This seemed to solve my problem (I'm not using objects), again around line 115 of parser.go:

```go
timestamp := time.Now()
p.timestamp = timestamp
if c.TimestampPath != "" {
	result := gjson.GetBytes(input, c.TimestampPath)
	test, err := strconv.ParseInt(result.String(), 10, 64)
	ts := result.Int()

	wresult := gjson.GetBytes(input, "weight_weight").Float()
	if err != nil {
		p.Log.Debugf("Error parsing string ts")
	}

	//p.Log.Debugf("Timestamp type: %s / %s / %s", result.Type, result.String(), test)
	//p.Log.Debugf("Message: %s", input)
	if result.Type == gjson.Null {
		p.Log.Debugf("Message: %s", input)
		return nil, fmt.Errorf(GJSONPathNUllErrorMSG)
	}
	if !result.IsArray() && !result.IsObject() {
		if c.TimestampFormat == "" {
			err := fmt.Errorf("use of 'timestamp_query' requires 'timestamp_format'")
			return nil, err
		}

		var err error
		timestamp, err = internal.ParseTimestamp(c.TimestampFormat, result.String(), c.TimestampTimezone)
		p.timestamp = timestamp

		if err != nil {
			return nil, err
		}
	}
```

Further down, the local `timestamp` is passed through instead of the struct field:

```go
	fields, err := p.processMetric(input, c.Fields, false, timestamp)
	if err != nil {
		return nil, err
	}

	tags, err := p.processMetric(input, c.Tags, true, timestamp)
	if err != nil {
		return nil, err
	}
```

```go
func (p *Parser) processMetric(input []byte, data []DataSet, tag bool, timestamp time.Time) ([]telegraf.Metric, error) {
```

```go
	mNode := MetricNode{
		OutputName:  setName,
		SetName:     setName,
		DesiredType: c.Type,
		Tag:         tag,
		Metric: metric.New(
			p.measurementName,
			map[string]string{},
			map[string]interface{}{},
			timestamp,
		),
		Result: result,
	}
```
Could this be related to the fact that my Kafka topic has 3 partitions? I think sarama spawns a consumer (and therefore a parser) for each partition, and what I'm experiencing is that one parser somehow overwrites the timestamp of another.
Yeah, it doesn't seem this class variable is needed. You are right, a local variable is better. Great that you figured it out.
**Relevant telegraf.conf**

**Logs from Telegraf**

No relevant logs

**System info**

Telegraf 1.21.3, tested on macOS and Kubernetes (Docker image)

**Docker**

No response

**Steps to reproduce**

...

This was tested with data flowing from Kafka to both InfluxDB Cloud and a Carbon output file. Both outputs show the same error, but with different results.

**Expected behavior**

Expect the Carbon output to mirror the Kafka topic.

**Actual behavior**

**Additional info**

No response