- upgraded deps
- BREAKING node-rdkafka has been removed as an optional dependency (see below). You will have to manually install node-rdkafka alongside sinek. (This requires a Node.js version between 9 and 12; it will not work with Node.js >= 13. Last tested with 12.16.1.)
  - On macOS High Sierra / Mojave: `CPPFLAGS=-I/usr/local/opt/openssl/include LDFLAGS=-L/usr/local/opt/openssl/lib yarn add --frozen-lockfile [email protected]`
  - Otherwise: `yarn add --frozen-lockfile [email protected]`
  - Please also note: doing this with npm does not work, it will remove your deps; install yarn globally first via `npm i -g yarn`
- upgraded deps
- node-rdkafka is now capable of dealing with Node.js versions > 11.15.0
- this is a major version bump and might break your setup
- upgraded all dependencies
- BREAKING node-rdkafka is now an optional dependency
- BREAKING kafkajs added as the default client dependency
- `batchSize` on kafkajs is not used, so it is deprecated
- the `consumer.commit()` function is not natively supported on kafkajs; manual commits need to be handled in the callback function
- some statistics/analytics functions will not work with kafkajs
- use the best practices from the `examples` directory
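Since `consumer.commit()` is not natively supported on kafkajs, commits are triggered from the callback passed to your sync event handler. A minimal self-contained sketch of that pattern (the `dispatch` helper and the message shape are illustrative stand-ins, not sinek's internals):

```javascript
// Illustrative stand-in for sinek's sync-event dispatch: the handler receives
// a message plus a callback, and invoking the callback commits the offset.
function dispatch(message, syncEvent, onCommit) {
  syncEvent(message, () => onCommit(message.offset));
}

const committed = [];

const syncEvent = (message, done) => {
  // ... process the message here ...
  done(); // signals "done" -> triggers the manual commit for this offset
};

dispatch(
  { topic: "test", partition: 0, offset: 42, value: "{}" },
  syncEvent,
  (offset) => committed.push(offset)
);
```

Invoking `done()` is what drives the commit; forgetting it stalls offset progress.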
- pinned node-rdkafka to 2.7.0, please only use this version with Node.js 11.15.0
- upgraded dependencies
async ~2.6.2 → ~3.1.0
bluebird ~3.5.4 → ~3.5.5
node-rdkafka ~2.6.1 → ~2.7.0
eslint ~5.16.0 → ~6.0.1
express ~4.16.4 → ~4.17.1
- for now you have to stick with a maximum node version of v11.15.0 (node-rdkafka does not support any higher version yet)
- dependency upgrade
- "enable.auto.commit" now defaults to `false` to prevent frustration with batching logic
- fixed bug in produce where partition and key would be ignored
- moving away from semver minor resetting after major
- removed rd-lt (old RD Load Test)
- upgraded dependencies, latest node-rdkafka
- BREAKING marked (JS) Consumer as deprecated
- BREAKING marked (JS) Producer as deprecated
- BREAKING swapped node-rdkafka from optional to dependencies
- BREAKING swapped kafka-node from dependencies to optional
- cleaned-up documentation
- added best-practice example for consumer and producer
- adjusted test configuration to fit the best-practice example
- BREAKING changed NProducer from Producer to HighLevelProducer to ensure message delivery based on the send promise
- preventing undefined offset value commits for commitLocalOffsetsForTopic
- added additional debug logs to NConsumer
- fixed timings for analytics and lag interval in NConsumer and NProducer
- fixed sortedManualBatch type error
- (NConsumer) added batch option sortedManualBatch
- (NConsumer) added commitLocalOffsetsForTopic method
- SEMI-BREAKING (NConsumer) removed experimental resetTopicPartitionsToEarliest method
- SEMI-BREAKING (NConsumer) removed experimental resetTopicPartitionsToLatest method
- SEMI-BREAKING (NConsumer) removed experimental commitOffsetForAllPartitionsOfTopic method
- SEMI-BREAKING (NConsumer) removed deprecated consumeOnce method
- added "### Consuming Multiple Topics efficiently" to lib/librdkafka/README.md
- (NConsumer) passing an empty array to "adjustSubscription" will unsubscribe from all topics
- upgrade dependencies
- fix typescript bug (string for topic missing)
- removed custom configs where possible to fallback to librdkafka defaults
- added `manualBatching` option to NConsumer batch mode options; it enables you to process messages faster and control your own commits easily (via callback). Setting it to `true` results in `syncEvent()` being called with the whole batch of messages instead of a single message
- SEMI-BREAKING changed types to allow a single message or an array of messages as the first argument of `syncEvent`
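With `manualBatching` enabled, the first argument of `syncEvent` is the whole message array rather than a single message. A sketch (the `dispatchBatch` helper is a hypothetical stand-in for the NConsumer batch loop, and the message shape is illustrative):

```javascript
// With manualBatching: true, syncEvent receives an array of messages and the
// callback commits once for the whole batch.
const processedOffsets = [];
let batchCommits = 0;

const syncEvent = (messages, done) => {
  for (const message of messages) {
    processedOffsets.push(message.offset); // process each message
  }
  done(); // one commit for the whole batch
};

// hypothetical stand-in for the consumer's batch loop
function dispatchBatch(batch, handler, onCommit) {
  handler(batch, onCommit);
}

dispatchBatch([{ offset: 1 }, { offset: 2 }, { offset: 3 }], syncEvent, () => {
  batchCommits += 1;
});
```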
- fixed bug in metadata partition for topic call
- fixed error message typo
- updated dependencies (node-rdkafka upgraded)
- added tombstone function call to types
- fixed missing exports
- small release optimisations
- fixed missing return type in tombstone declaration
- added advanced configuration declaration to typescript declarations, thanks to Juri Wiens
- permitted passing of null as message value
- added .tombstone() function to NProducer to easily delete kafka messages
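A tombstone is a record that carries the key of the record to delete and a `null` value; log compaction later drops older records with that key. The helper below is hypothetical and only illustrates the record shape:

```javascript
// Hypothetical helper illustrating what a tombstone record looks like:
// same topic and key as the record to delete, but a null value.
function buildTombstone(topic, key) {
  return { topic, key, value: null };
}

const tombstone = buildTombstone("user-events", "user-123");
```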
- upgraded dependencies (node-rdkafka and kafka-node were updated)
- fixed bug in typescript declaration, thanks to @holgeradam
- fixed bug in NProducer partition identification, result of murmur was not passed correctly thanks to @elizehavi
- typescript declaration optimizations
- fixed some typescript declaration bugs
- removed warning from Consumer and Producer, it didn't feel right
- updated dependencies: kafka-node 2.6.1 -> 3.0.0
- added TypeScript declarations for NConsumer, NProducer, Consumer, Producer and Config
- Added new Experimental functions to NConsumer resetTopicPartitionsToEarliest, resetTopicPartitionsToLatest
- Marked old Clients as deprecated, added suggestion for Consumer and Producer to move to native versions
- Updated dependencies:
node-rdkafka ~2.3.4 → ~2.4.1
eslint ~5.0.1 → ~5.5.0
sinon ~6.1.0 → ~6.1.5
- Health Thresholds are now configurable, pass them via health subobject in the parent config to consumer or producer
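As a rough sketch, the `health` sub-object sits alongside the rest of the client config passed to a consumer or producer; the threshold field names below are illustrative placeholders only (see librdkafka/Health.md for the keys sinek actually reads):

```javascript
// Sketch only: "health" is passed in the parent config of a consumer/producer.
// The threshold keys below are placeholders, not sinek's actual schema.
const config = {
  noptions: { "metadata.broker.list": "localhost:9092" },
  health: {
    thresholds: {
      consumer: { errors: 5, lag: 1000 }, // placeholder keys
      producer: { errors: 4 },            // placeholder keys
    },
  },
};
```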
- updated uuid and sinon deps
- brought getTopicMetadata and getMetadata to NConsumer, previously only available on NProducer
- added getTopicList to NProducer and NConsumer to retrieve a list of available topics
- updated dependencies:
node-rdkafka ~2.3.3 → ~2.3.4 (also sinon, eslint and uuid)
- switched default encoding for message value and key for the JS Kafka client to Buffer
- simplified integration tests
- updated dependencies:
kafka-node ~2.4.1 → ~2.6.1
eslint ~4.18.2 → ~4.19.1
mocha ~5.0.4 → ~5.2.0
sinon ^4.4.6 → ^6.0.0
node-rdkafka ~2.2.3 → ~2.3.3
async ~2.6.0 → ~2.6.1
- updated NConsumer and NProducer to debug and concatenate errors when requiring the native lib
- node-rdkafka has seg fault bugs in 2.3.1 -> falling back to 2.2.3
- corrected error passing in the consumer callback (now also logging a warning not to do it)
- now allows to pass correlation-id (opaque key) when producing with NProducer
- updated dependencies:
uuid ~3.1.0 → ~3.2.1
bluebird ~3.5.0 → ~3.5.1
debug ^3.0.0 → ^3.1.0
kafka-node ^2.3.0 → ^2.4.1
eslint ^4.11.0 → ^4.18.2
express ^4.16.2 → ^4.16.3
mocha ~5.0.2 → ~5.0.4
sinon ^4.1.2 → ^4.4.6
node-rdkafka ^2.2.0 → ^2.3.1
- now starting analytics immediately
- propagating connection promise correctly
- now proxying consumer_commit_cb
- upgraded dependencies: [email protected], [email protected], [email protected]
- upgraded node-librdkafka dependency to 2.1.1
- added pause and resume functions for NConsumer
- added commitMessage method to NConsumer
- added option to switch partition selection to murmurv2
- intelligent healthcheck, check out librdkafka/Health.md
- average batch processing time in getStats() for nconsumer
- clear rejects for operations, when the clients are not connected
- added unit tests for Health.js
- refactored readme
- intelligent fetch grace times in batch mode
- small optimisations on nconsumer
- BREAKING CHANGE nconsumer 1:n (batch mode) no longer commits on every x batches; it only commits once a certain amount of messages has been consumed and processed: `requiredAmountOfMessagesForCommit = batchSize * commitEveryNBatch`
- this increases performance and results in fewer commit requests when a topic's lag has been resolved and the amount of "freshly" produced messages is clearly lower than batchSize
- comes with the new analytics class for nproducers and nconsumers
- check out librdkafka/Analytics.md
- new offset info functions for NConsumer (check out librdkafka/README.md)
- new getLagStatus() function for NConsumer that fetches and compares partition offsets
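The batch-commit trigger described in this release reduces to a simple threshold; a minimal sketch:

```javascript
// Commit threshold from this release: a commit is only issued once
// batchSize * commitEveryNBatch messages have been consumed and processed.
function requiredAmountOfMessagesForCommit(batchSize, commitEveryNBatch) {
  return batchSize * commitEveryNBatch;
}

function shouldCommit(processedSinceLastCommit, batchSize, commitEveryNBatch) {
  return (
    processedSinceLastCommit >=
    requiredAmountOfMessagesForCommit(batchSize, commitEveryNBatch)
  );
}
```

With `batchSize: 500` and `commitEveryNBatch: 5`, a commit fires every 2500 processed messages.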
- updates `node-rdkafka` to @2.1.0, which ships fixes
- added librdkafka/Metadata class
- added new metadata functions to NProducer
- send, buffer and _sendBufferFormat are now async functions
- ^ BREAKING CHANGE sinek now requires a minimum Node.js version of 7.6
- added `auto` mode for NProducer (automatically produces to the latest partition count, even if it changes during the runtime of a producer; updates every 5 minutes)
- refactored and optimized NProducer send logic
- updated librdkafka/README.md
- added new tests for NProducer
- fixed bug in NConsumer consume() consume options, where commitSync field was always true
- added JSDOC for NConsumer and NProducer
- new 1:N consumer mode (making 1:1 mode configurable with params -> see lib/librdkafka/README.md)
- more stats for consumer batch mode
- new consumer batch event
- BREAKING CHANGE consumer.consume(syncEvent) now rejects if you have `enable.auto.commit: true` set
- updated librdkafka/README.md
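The 1:N batch parameters could be passed as an options object roughly like this; `batchSize` and `commitSync` are option names described in lib/librdkafka/README.md, while the exact shape of the `consume()` call may differ:

```javascript
// Sketch of 1:N (batch mode) consume options; values are examples.
const batchOptions = {
  batchSize: 500,    // consume/process up to 500 messages per batch (1:N)
  commitSync: false, // commit asynchronously for higher throughput
};
```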
- Updated dependencies
- Re-created lockfile
- fixed bug in sync commit (now catching timeout errors)
- NConsumer automatically sets memory-related configs (easier start if you missed those config params)
- NConsumer in 1:1 mode will now use commitMessageSync instead of commitMessage (this reduces performance but ensures we do not stack tons of commit requests in the consumer's queue); sinek 6.5.0 will follow with an option to set the amount of messages that are consumed & committed in one step (1-10000)
- bugfix on NProducer (partitions ranged from 1-30 instead of 0-29)
- added streaming mode to NConsumer: pass `true` to `.connect(true)` and omit `.consume()` to enable streaming-mode consuming
- adjusted sasl example
- fixed connection event (ready) for connect consumers
- fixed a few small things
- added tconf fields to config
- updated docs
- more and better examples
- updated NProducer api to allow new node-rdkafka 2.0.0 (as it had breaking changes regarding its topic api)
- sinek now ships with an optional dependency to node-rdkafka
- 2 native clients that embed rdkafka in the usual sinek connector API interface
- NConsumer and NProducer
- sasl support
- additional config params through noptions
- fixed a few option reference passes to allow for better ssl support
- added /kafka-setup that allows for an easy local ssl kafka broker setup
- added /ssl-example to show how ssl connections are configured
- updated readme
- added eslint and updated code style accordingly
- Updated to latest kafka-node 2.2.0
- Fixed bug in logging message value length
- Added 3 new format methods (publish, unpublish, update) to the connect producer
- Added partitionKey (optional) to all bufferFormat operations of publisher and connect producer
- Updated all dependencies
- Clients can now omit Zookeeper and connect directly to a Broker by omitting zkConStr and passing kafkaHost in the config
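Both connection styles side by side; `zkConStr` and `kafkaHost` are the config keys named above, while the addresses are placeholders:

```javascript
// Old style: connect through Zookeeper via zkConStr.
const zookeeperConfig = {
  zkConStr: "localhost:2181/",
};

// New style: omit zkConStr and connect straight to a broker via kafkaHost.
const brokerConfig = {
  kafkaHost: "localhost:9092",
};
```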
Producer/Consumer Key Changes #704
- BREAKING CHANGE The `key` is decoded as a `string` by default. Previously it was a `Buffer`. The preferred encoding for the key can be defined by the `keyEncoding` option on any of the consumers and will fall back to `encoding` if omitted
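As a sketch of the new option (kafka-node accepts values such as `"utf8"` and `"buffer"` for its encodings):

```javascript
// Sketch: keys now decode as strings by default; set keyEncoding to get the
// previous Buffer behaviour back. keyEncoding falls back to "encoding" if omitted.
const consumerOptions = {
  encoding: "utf8",      // how message values are decoded
  keyEncoding: "buffer", // restore Buffer keys (the previous default)
};
```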
- First entry in CHANGELOG.md