Merge pull request #13 from memsql/PLAT-7337
Added publishing of zip archive with all dependencies
AdalbertMemSQL authored Jan 30, 2025
2 parents d7edae7 + 9e64022 commit 6d73b79
Showing 7 changed files with 65 additions and 39 deletions.
2 changes: 1 addition & 1 deletion .circleci/config.yml
@@ -50,7 +50,7 @@ jobs:
path: target/surefire-reports
- store_artifacts: # store the uberjar as an artifact
# Upload test summary for display in Artifacts: https://circleci.com/docs/2.0/artifacts/
path: target/singlestore-kafka-connector-1.2.1.jar
path: target/singlestore-kafka-connector-1.2.2.jar
# See https://circleci.com/docs/2.0/deployment-integrations/ for deploy examples
publish:
machine: true
24 changes: 24 additions & 0 deletions .github/release.yml
@@ -0,0 +1,24 @@
name: Release

on:
push:
tags: "v*"

jobs:
build:
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v4
- name: Set up JDK 8
uses: actions/setup-java@v4
with:
java-version: '8'
distribution: 'temurin'
cache: maven
- name: Build archive
run: mvn -B package --file pom.xml -DskipTests
- name: Release Plugin zip Archive
uses: softprops/action-gh-release@v1
with:
files: target/components/packages/singlestore-singlestore-kafka-connector-*.zip
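The workflow above is triggered by pushed tags matching `v*`. Assuming that tagging convention, a release could be cut with the commands sketched below (illustrative only, not taken from the repository's docs):

```shell
TAG="v1.2.2"
# The workflow trigger is `tags: "v*"`; confirm the tag matches before pushing.
case "$TAG" in
  v*) echo "tag matches release trigger" ;;
  *)  echo "tag will NOT trigger the release workflow"; exit 1 ;;
esac
# To actually cut the release (run inside the repo after the version bump is merged):
#   git tag "$TAG" && git push origin "$TAG"
```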
2 changes: 2 additions & 0 deletions CHANGELOG
@@ -1,3 +1,5 @@
2025-01-29 Version 1.2.2
* Added publishing of zip archive to GitHub
2023-11-06 Version 1.2.1
* Updated JDBC driver. This is done because the old driver doesn't support long column names.
2023-04-05 Version 1.2.0
64 changes: 32 additions & 32 deletions README.md
@@ -1,5 +1,5 @@
# SingleStore Kafka Connector
## Version: 1.2.1 [![Continuous Integration](https://circleci.com/gh/memsql/singlestore-kafka-connector/tree/master.svg?style=shield)](https://circleci.com/gh/memsql/memsql-kafka-connector) [![License](http://img.shields.io/:license-Apache%202-brightgreen.svg)](http://www.apache.org/licenses/LICENSE-2.0.txt)
## Version: 1.2.2 [![Continuous Integration](https://circleci.com/gh/memsql/singlestore-kafka-connector/tree/master.svg?style=shield)](https://circleci.com/gh/memsql/memsql-kafka-connector) [![License](http://img.shields.io/:license-Apache%202-brightgreen.svg)](http://www.apache.org/licenses/LICENSE-2.0.txt)

## Getting Started

@@ -15,23 +15,23 @@ You can find the latest version of the connector on [Maven](https://mvnrepositor
The `singlestore-kafka-connector` is configurable via a property file that should be
specified before starting the kafka-connect job.

| Option | Description
| - | -
| `connection.ddlEndpoint` (On-Premise deployment) (required) | The hostname or IP address of the SingleStoreDB Master Aggregator in the `host[:port]` format, where `port` is an optional parameter. Example: `master-agg.foo.internal:3308` or `master-agg.foo.internal`.
| `connection.dmlEndpoints` (On-Premise deployment) | The hostname or IP address of SingleStoreDB Aggregator nodes to run queries against in the `host[:port],host[:port],...` format, where `port` is an optional parameter (multiple hosts separated by comma). Example: `child-agg:3308,child-agg2`. (default: `ddlEndpoint`)
| `connection.clientEndpoint` (Cloud deployment) (required) | The hostname or IP address of the SingleStoreDB Cloud workspace to run queries against in the `host[:port]` format, where `port` is an optional parameter. Example: `svc-XXXX-ddl.aws-oregon-2.svc.singlestore.com:3306`.
| `connection.database` (required) | If set, all connections will default to using this database (default: empty)
| `connection.user` | SingleStore username (default: `root`)
| `connection.password` | SingleStore password (default: no password)
| `params.<name>` | Specify a specific MySQL or JDBC parameter which will be injected into the connection URI (default: empty)
| `max.retries` | The maximum number of times to retry on errors before failing the task. (default: 10)
| `fields.whitelist` | Specify fields to be inserted to the database. (default: all keys will be used)
| `retry.backoff.ms` | The time in milliseconds to wait following an error before a retry attempt is made. (default 3000)
| `tableKey.<index_type>[.name]` | Specify additional keys to add to tables created by the connector; value of this property is the comma separated list with names of the columns to apply key; <index_type> one of (`PRIMARY`, `COLUMNSTORE`, `UNIQUE`, `SHARD`, `KEY`);
| `singlestore.loadDataCompression` | Compress data on load; one of (`GZip`, `LZ4`, `Skip`) (default: GZip)
| `singlestore.metadata.allow` | Allows or denies the use of an additional meta-table to save the recording results (default: true)
| `singlestore.metadata.table` | Specify the name of the table to save kafka transaction metadata (default: `kafka_connect_transaction_metadata`)
| `singlestore.tableName.<topicName>=<tableName>` | Specify an explicit table name to use for the specified topic
| Option | Description |
|----------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `connection.ddlEndpoint` (On-Premise deployment) (required) | The hostname or IP address of the SingleStoreDB Master Aggregator in the `host[:port]` format, where `port` is an optional parameter. Example: `master-agg.foo.internal:3308` or `master-agg.foo.internal`. |
| `connection.dmlEndpoints` (On-Premise deployment) | The hostname or IP address of SingleStoreDB Aggregator nodes to run queries against in the `host[:port],host[:port],...` format, where `port` is an optional parameter (multiple hosts separated by comma). Example: `child-agg:3308,child-agg2`. (default: `ddlEndpoint`) |
| `connection.clientEndpoint` (Cloud deployment) (required) | The hostname or IP address of the SingleStoreDB Cloud workspace to run queries against in the `host[:port]` format, where `port` is an optional parameter. Example: `svc-XXXX-ddl.aws-oregon-2.svc.singlestore.com:3306`. |
| `connection.database` (required) | If set, all connections will default to using this database (default: empty) |
| `connection.user` | SingleStore username (default: `root`) |
| `connection.password` | SingleStore password (default: no password) |
| `params.<name>` | Specify a specific MySQL or JDBC parameter which will be injected into the connection URI (default: empty) |
| `max.retries` | The maximum number of times to retry on errors before failing the task. (default: 10) |
| `fields.whitelist` | Specify fields to be inserted to the database. (default: all keys will be used) |
| `retry.backoff.ms` | The time in milliseconds to wait following an error before a retry attempt is made. (default 3000) |
| `tableKey.<index_type>[.name]` | Specify additional keys to add to tables created by the connector; value of this property is the comma separated list with names of the columns to apply key; <index_type> one of (`PRIMARY`, `COLUMNSTORE`, `UNIQUE`, `SHARD`, `KEY`); |
| `singlestore.loadDataCompression` | Compress data on load; one of (`GZip`, `LZ4`, `Skip`) (default: GZip) |
| `singlestore.metadata.allow` | Allows or denies the use of an additional meta-table to save the recording results (default: true) |
| `singlestore.metadata.table` | Specify the name of the table to save kafka transaction metadata (default: `kafka_connect_transaction_metadata`) |
| `singlestore.tableName.<topicName>=<tableName>` | Specify an explicit table name to use for the specified topic |
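For illustration, a minimal sink property file using the options above might look like the sketch below. The `connector.class` value, endpoint, database, and credentials are placeholder assumptions, not taken from the repository:

```properties
name=singlestore-sink
connector.class=com.singlestore.kafka.SingleStoreSinkConnector
topics=example-topic
connection.ddlEndpoint=master-agg.foo.internal:3308
connection.database=example_db
connection.user=root
connection.password=secret
max.retries=10
retry.backoff.ms=3000
tableKey.primary=id
singlestore.loadDataCompression=LZ4
```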

### Config example
@@ -89,20 +89,20 @@ To overwrite the name of this table, use `singlestore.metadata.table` option.

The `singlestore-kafka-connector` performs the following conversions from Kafka types to SingleStore types:

| Kafka Type | SingleStore Type
| - | -
| STRUCT | JSON
| MAP | JSON
| ARRAY | JSON
| INT8 | TINYINT
| INT16 | SMALLINT
| INT32 | INT
| INT64 | BIGINT
| FLOAT32 | FLOAT
| FLOAT64 | DOUBLE
| BOOLEAN | TINYINT
| BYTES | TEXT
| STRING | VARBINARY(1024)
| Kafka Type | SingleStore Type |
|------------|------------------|
| STRUCT | JSON |
| MAP | JSON |
| ARRAY | JSON |
| INT8 | TINYINT |
| INT16 | SMALLINT |
| INT32 | INT |
| INT64 | BIGINT |
| FLOAT32 | FLOAT |
| FLOAT64 | DOUBLE |
| BOOLEAN | TINYINT |
| BYTES | TEXT |
| STRING | VARBINARY(1024) |
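As a quick reference, the mapping above can be expressed as a lookup table. This is a sketch mirroring the README table, not actual connector code:

```python
# Kafka Connect schema types -> SingleStore column types,
# as listed in the conversion table above.
KAFKA_TO_SINGLESTORE = {
    "STRUCT": "JSON",
    "MAP": "JSON",
    "ARRAY": "JSON",
    "INT8": "TINYINT",
    "INT16": "SMALLINT",
    "INT32": "INT",
    "INT64": "BIGINT",
    "FLOAT32": "FLOAT",
    "FLOAT64": "DOUBLE",
    "BOOLEAN": "TINYINT",
    "BYTES": "TEXT",
    "STRING": "VARBINARY(1024)",
}

def singlestore_type(kafka_type: str) -> str:
    """Return the SingleStore column type for a Kafka Connect type name."""
    return KAFKA_TO_SINGLESTORE[kafka_type.upper()]

print(singlestore_type("int64"))  # -> BIGINT
```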

## Table keys

4 changes: 2 additions & 2 deletions demo/setup-script.ps1
@@ -118,7 +118,7 @@ docker run `
--name singlestore-kafka-connect-short-demo `
singlestore-kafka-connect-short-demo `
tail -f /dev/null > $null
docker cp singlestore-kafka-connect-short-demo:/home/app/target/singlestore-kafka-connector-1.2.1.jar "$env:TEMP"
docker cp singlestore-kafka-connect-short-demo:/home/app/target/singlestore-kafka-connector-1.2.2.jar "$env:TEMP"
docker stop singlestore-kafka-connect-short-demo > $null
Write-Output "Success!"

@@ -128,7 +128,7 @@ function Start-Kafka-Connect {
# replace backslashes with slashes, colons with nothing,
# convert to lower case and trim last /
$nixTempPath = (("$env:TEMP" -replace "\\","/") -replace ":","").ToLower().Trim("/")
$kafkaConnectorVolumes = $nixTempPath + "/singlestore-kafka-connector-1.2.1.jar:/usr/share/java/singlestore-kafka-connector-1.2.1.jar"
$kafkaConnectorVolumes = $nixTempPath + "/singlestore-kafka-connector-1.2.2.jar:/usr/share/java/singlestore-kafka-connector-1.2.2.jar"

docker run -d `
--name=kafka-connect-short-demo `
4 changes: 2 additions & 2 deletions demo/setup-script.sh
@@ -122,7 +122,7 @@ docker run \
-v /tmp/quickstart/connect:/tmp/quickstart/connect \
singlestore-kafka-connect-short-demo \
tail -f /dev/null >/dev/null 2>/dev/null
docker exec singlestore-kafka-connect-short-demo cp /home/app/target/singlestore-kafka-connector-1.2.1.jar /tmp/quickstart/connect
docker exec singlestore-kafka-connect-short-demo cp /home/app/target/singlestore-kafka-connector-1.2.2.jar /tmp/quickstart/connect
docker stop singlestore-kafka-connect-short-demo >/dev/null 2>/dev/null
echo ". Success!"

@@ -152,7 +152,7 @@ kafka-connect-start() {
-e CONNECT_PLUGIN_PATH=/usr/share/java \
-e CONNECT_REST_HOST_NAME="kafka-connect-short-demo" \
-v /tmp/quickstart/file:/tmp/quickstart \
-v /tmp/quickstart/connect/singlestore-kafka-connector-1.2.1.jar:/usr/share/java/singlestore-kafka-connector-1.2.1.jar \
-v /tmp/quickstart/connect/singlestore-kafka-connector-1.2.2.jar:/usr/share/java/singlestore-kafka-connector-1.2.2.jar \
confluentinc/cp-kafka-connect:5.0.0 >/dev/null
echo ". Started!"
}
4 changes: 2 additions & 2 deletions pom.xml
@@ -6,7 +6,7 @@
<artifactId>singlestore-kafka-connector</artifactId>

<name>singlestore-kafka-connector</name>
<version>1.2.1</version>
<version>1.2.2</version>
<description>
The official SingleStore connector for Kafka Confluent Connect.
</description>
@@ -50,7 +50,7 @@
<url>git://git@github.com:memsql/singlestore-kafka-connector.git</url>
<connection>scm:git:git@github.com:memsql/singlestore-kafka-connector.git</connection>
<developerConnection>scm:git:git@github.com:memsql/singlestore-kafka-connector.git</developerConnection>
<tag>singlestore-kafka-connector-1.2.1</tag>
<tag>singlestore-kafka-connector-1.2.2</tag>
</scm>

<dependencies>
