Offset Fetch/Commit Support #2
After the refactoring this needs to be redone, although much of the code I wrote for the initial version can probably be reused. There's not much point in continuing until we get an answer on https://issues.apache.org/jira/browse/KAFKA-993 though...
The ticket I filed finally got an answer: they postponed the inclusion of this API in order to get 0.8 out the door faster, so we will have to wait for 0.8.1 or 0.9 or whatever comes next.
Just checking in on the status of this (but not even sure if this is my issue). I'm trying to manually set an initial offset (a previously read value from ConsumerEvent.Offset) and getting the error message "The requested offset is outside the range of offsets maintained by the server for the given topic/partition.". I'm not sure if this is expected behavior (not implemented yet), if I'm doing something wrong (forgetting to commit an offset or setting an auto commit flag somewhere), or if something else environmental is going on. Any suggestions/insights would be greatly appreciated.
eh. Ok. I figured out my own issue for the time being (but still looking to understand the use case for committing / auto-committing offsets). We currently track this externally; I'm guessing this feature/enhancement will allow us to track it within Kafka and eliminate the need for our external tracking. In any case, I resolved my issue by setting OffsetValue = (lastEvent.Offset + 1). The + 1 was important.
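For anyone hitting the same thing later, a minimal sketch of the "resume from last offset + 1" workaround, written against the ConsumePartition API of later Sarama releases rather than the ConsumerEvent-era API this comment refers to; the broker address, topic, and stored offset are placeholders:

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	// Hypothetical offset recovered from external tracking (file, DB, etc.).
	var lastOffset int64 = 41

	consumer, err := sarama.NewConsumer([]string{"localhost:9092"}, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer consumer.Close()

	// Resume *after* the last processed message: lastOffset + 1.
	// Passing lastOffset itself re-delivers the message that was already handled.
	pc, err := consumer.ConsumePartition("my-topic", 0, lastOffset+1)
	if err != nil {
		log.Fatal(err)
	}
	defer pc.Close()

	for msg := range pc.Messages() {
		log.Printf("offset %d: %s", msg.Offset, msg.Value)
	}
}
```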
If you're getting that error when trying to use
Yes, eventually kafka/sarama will be able to automatically track offsets so you won't have to do it externally.
Hey Evan, What's the state of this issue? We're using Sarama and want to manually commit offsets once we're sure we've safely processed messages. Thanks!
Per https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-OffsetCommit/FetchAPI we're still waiting on kafka to release 0.8.2 for these APIs to be functional.
Great. Thanks for responding.
Not as far as I know - the API exists, but does nothing. |
As @kane-sendgrid pointed out in #135, Kafka 0.8.2 is going to be released relatively soon, so we can finally start working on this.
How's this coming along? The API seems to be available and stable from 0.8.1.1 and up: https://cwiki.apache.org/confluence/display/KAFKA/Committing+and+fetching+consumer+offsets+in+Kafka
In 0.8.1.1 the API is stable in the sense that it has a fixed protocol version that won't change, but it does not yet actually store the offsets in Kafka. Kafka 0.8.2-beta, which is already released, does implement OffsetCommit and OffsetFetch with internal topics as the storage mechanism.
Joe is correct. Also note that this ticket is tagged for the consumer, not the protocol - Sarama has supported the OffsetFetch, OffsetCommit, and ConsumerMetadata request/response message formats for a while already. This ticket is simply for building automatic support into Sarama's consumer.
Any estimate of when this will be available on the master branch? Is there any clean way to commit an offset to the broker (given a client and consumer) with the current master code base?
As mentioned, you can manually construct an
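For reference, a hedged sketch of what building and sending such a request by hand might look like with the low-level request/response types mentioned above; the broker address, group, topic, partition, and offset are placeholders, and whether the broker actually stores the offset depends on the Kafka version, per the discussion in this thread:

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	broker := sarama.NewBroker("localhost:9092")
	if err := broker.Open(nil); err != nil {
		log.Fatal(err)
	}
	defer broker.Close()

	// Commit offset 12345 for partition 0 of "my-topic" on behalf of "my-group".
	req := &sarama.OffsetCommitRequest{ConsumerGroup: "my-group"}
	req.AddBlock("my-topic", 0, 12345, 0, "")

	resp, err := broker.CommitOffset(req)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("commit result for my-topic/0: %v", resp.Errors["my-topic"][0])
}
```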
Has there been any recent work on this? I got a copy of Sarama a couple weeks ago and I'm trying to do manual offset commits to Kafka, but when I do an OffsetFetch to confirm that I committed, I get back an empty map. As far as I know, Kafka is configured correctly (offsets.storage=kafka), so I'm wondering if perhaps the version I have is missing something.
There's no progress on this. You could check out https://github.com/wvanbergen/kafka/tree/master/consumergroup, which, besides load balancing and failover, manages offsets using zookeeper.
Any ETA on when manual commits will work?
Kafka 0.8.2.0 introduced a new version of this API which still has not been documented [1]. There's not much we can do until we know how it actually works. That email thread seems to imply that the old v0 API should still work, but we do implement the documented spec so I'm not sure why it wouldn't be working for you.
Thanks @wvanbergen. 👍 I've already spent too much time second-guessing my own kafka cluster's config; I'm still curious why, in kafka 0.8.2-beta, I fetch
Unfortunately we have no experience with this API ourselves. Your best bet is to ask on the kafka mailing list how this API is supposed to work: http://kafka.apache.org/contact.html
The Coordinator is being worked on in apache kafka trunk now for the next release; there are some more details about that here: https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+consumer+protocol. It doesn't overlap with this ticket except that the new consumer will also use this method of offset storage, maybe even by default. Kafka 0.8.2.1 supports kafka itself as the storage for offsets; storing offsets in kafka is exactly what the offset fetch and commit request/response messages are for. This is great because in your code you can bypass zookeeper in consumers and use kafka for offset storage instead (see elodina/go_kafka_client#80).
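Relatedly, a rough sketch of locating the offset coordinator for a group before issuing commit/fetch requests, using Sarama's Client API; the group name and broker address are placeholders, and this assumes a broker version that answers ConsumerMetadata requests:

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	client, err := sarama.NewClient([]string{"localhost:9092"}, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The coordinator is the broker responsible for offset storage for this
	// consumer group; commit/fetch requests should be sent to it rather than
	// to an arbitrary broker.
	coordinator, err := client.Coordinator("my-group")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("offset coordinator for my-group: %s", coordinator.Addr())
}
```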
I started working on this in #379. There are still some outstanding questions and gaps in the implementation, but it's mostly working. @jsvisa the reason why you get
#461 (released in v1.6) implemented Kafka-based offset management. It's not fully integrated into the consumer yet, but there's not much point until the 0.9 consumer protocol is available.
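For anyone looking for the piece #461 added, a rough sketch of driving that offset-management API by hand, with names as found in Sarama v1.6 and later; the broker address, group, topic, and partition are placeholders:

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	client, err := sarama.NewClient([]string{"localhost:9092"}, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	om, err := sarama.NewOffsetManagerFromClient("my-group", client)
	if err != nil {
		log.Fatal(err)
	}
	defer om.Close()

	pom, err := om.ManagePartition("my-topic", 0)
	if err != nil {
		log.Fatal(err)
	}
	defer pom.Close()

	// Where to resume reading: the offset after the last committed one.
	next, _ := pom.NextOffset()
	log.Printf("next offset to consume: %d", next)

	// After handling a message at some offset, mark offset+1 as the next one
	// to read; commits are flushed to Kafka in the background and on Close.
	processedOffset := next // hypothetical: offset of a message just handled
	pom.MarkOffset(processedOffset+1, "")
}
```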
The Consumer should make use of https://cwiki.apache.org/confluence/display/KAFKA/Offset+Management
The golang protocol backend already supports those request/response types, but the current Kafka 0.8 beta 1 seems to choke on them so I didn't add them to the API.
I suspect this will consist of doing a Fetch on construction of a Consumer, then adding a Commit(offset) API call that the consumer user can call as appropriate. The python bindings have an autocommit option, but I think that's overcomplicated for our needs, at least to start.
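To make that proposal concrete, a purely hypothetical sketch of the interface shape being described; none of these names exist in Sarama, the idea is just "fetch on construction, commit on demand":

```go
package consumer

import "github.com/Shopify/sarama"

// OffsetTrackingConsumer is a hypothetical interface matching the proposal:
// the committed offset is fetched via OffsetFetch when the consumer is built,
// and the caller commits explicitly via OffsetCommit once a message is handled.
type OffsetTrackingConsumer interface {
	// Messages delivers messages starting at the offset fetched at construction.
	Messages() <-chan *sarama.ConsumerMessage

	// Commit records the given offset for this consumer's group and partition.
	Commit(offset int64) error
}
```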