
Releases: takapi327/ldbc

v0.3.0-beta9

07 Dec 12:18

ldbc v0.3.0-beta9 is released.
This release adds enhancements to existing features.

Note

ldbc is pre-1.0 software and is still undergoing active development. New versions are not binary compatible with prior versions, although in most cases user code will be source compatible.
The major version will be the stable version.

What's Changed

Caution

This version is not compatible with the previous version, v0.3.0-beta8.

Adding Column Annotation

Table generation using deriving previously used the property name as-is for the column name.

case class City(
  id:          Int,
  name:        String,
  countryCode: String,
  district:    String,
  population:  Int
) derives Table

The formatting of the column names could be adjusted by implicitly passing Naming.

given Naming = Naming.PASCAL

However, this alone could not address all cases. For example, the following column names could not be expressed in the previous implementation.

CREATE TABLE city (
  ID Bigint,
  CountryID Bigint
)

Naming supports only certain formats (camel, Pascal, and snake case) and cannot express column names that are entirely uppercase or uppercase only in certain parts.

Such cases can be resolved by using column annotations.

case class City(
  @Column("ID") id: Int,
  name:             String,
  countryCode:      String,
  district:         String,
  population:       Int
) derives Table

Naming handles the overall formatting, while this annotation overrides only specific columns, allowing flexible customization of column names.
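For illustration, a minimal sketch combining the two (this assumes a Naming.SNAKE constant exists for the snake case format mentioned above; the resulting column names in the comments are illustrative):

given Naming = Naming.SNAKE

case class City(
  @Column("ID") id: Int,     // column name: ID (annotation override)
  countryCode:      String   // column name: country_code (via Naming)
) derives Table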

Support for INSERT SELECT/JOIN

There are two ways to pass values in an INSERT statement: with VALUES, or with a SELECT statement (possibly involving a JOIN).

This modification supports both syntaxes.

TableQuery[City]
   .insertInto(city => city.id *: city.name)
   .select(
     TableQuery[Country]
       .select(country => country.id *: country.name)
   )

The same can be defined for using the records resulting from a JOIN.

TableQuery[City]
   .insertInto(city => city.id *: city.name)
   .select(
     TableQuery[Country]
       .join(TableQuery[City])
       .on((country, city) => country.id === city.countryId)
       .select((country, city) => country.id *: city.name)
       .where((country, city) => city.population > 1000000)
   )

Breaking Change

Change column refinement method

Until now, column refinement simply grouped the columns to be used into a Tuple.

cityTable.select(city => (city.id, city.name))

However, there was a problem with this. Column is a type with a single type parameter, and up to Scala 2 there was a limit on tuple arity, so boilerplate was needed to handle every possible tuple size. Dynamically built tuples are treated as Tuple or Tuple.Map, so accessing a value as a Column type required a cast with asInstanceOf, because the type could only be seen as a Tuple.
Casting, of course, loses type safety and complicates the code.

We decided to adopt twiddles, another Typelevel project, to solve this problem.

Columns can be composed more easily by using twiddles.

cityTable.select(city => city.id *: city.name)

Composition is done with *:, which is friendly to implementers, and the internal code can simply use Column[T] instead of Tuple, eliminating the need for unsafe casts.

Twiddles also make it easy to convert the composite result to another type.

case class City(id: Long, name: String)

def id: Column[Int] = column[Int]("ID")
def name: Column[String] = column[String]("Name")

def city: Column[City] = (id *: name).to[City]

Change from Table to TableQuery

Until now, the same Table type was used both to hold the table information derived from the model and to construct queries.

case class City(id: Long, name: String) derives Table
val cityTable = Table[City]

Because the same type represented two different things, it was easy to write incorrect code.

cityTable.select(city => city.insert(???))

Development tools such as IDEs could cause no small amount of confusion for implementers because they suggest every available API.

To solve this problem, we created a new type called TableQuery and separated the query-construction implementation into it.

case class City(id: Long, name: String) derives Table
val cityTable = TableQuery[City]

Changes to Insert

The Insert statement was modified in the same way: the columns to insert into are specified as a composed column.

cityTable
- .insertInto(city => (city.id, city.name))
+ .insertInto(city => city.id *: city.name)
  .values((1L, "Tokyo"))

Changes to Update

Previously, the Update statement had to be built one column at a time. This is convenient when you want to update a few columns individually, but it is very cumbersome to write set for every additional column you want to update.

cityTable
  .update("id", 1L)
  .set("name", "Tokyo")
  .set("population", 1, false)

With this update, columns can now be combined, allowing multiple columns to be specified together for update processing.

cityTable
  .update(city => city.id *: city.name)((1L, "Tokyo"))
  .set(_.population, 1, false)

It is still possible to update only specific columns using set. In addition, since set accepts an update condition, you can build a query that updates an additional column only when that condition is true.
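For example, a minimal sketch of a conditional update (assuming the third argument to set is the flag that decides whether the column is included in the update):

val newPopulation: Option[Int] = Some(14000000)

cityTable
  .update(city => city.id *: city.name)((1L, "Tokyo"))
  // population is updated only when a new value was actually supplied
  .set(_.population, newPopulation.getOrElse(0), newPopulation.isDefined)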

Changes to Join

Previously, table joins were constructed by setting the join condition as the second argument.

cityTable.join(countryTable)((c, co) => c.countryCode === co.code)

From this change, the join condition must be set via the on API. This is the result of an internal implementation change.

cityTable.join(countryTable).on((c, co) => c.countryCode === co.code)

💪 Enhancement

  • Enhancement/2024 09 raises errors in fields that cannot be decoded by @takapi327 in #306
  • Enhancement/2024 10 added support tuple class mapping by @takapi327 in #307
  • Refactor/2024 10 renewal schema by @takapi327 in #329

🪲 Bug Fixes

🔧 Refactoring

📖 Documentation

⛓️ Dependency update

Full Changelog: v0.3.0-beta8...v0.3.0-beta9

v0.3.0-beta8

29 Sep 08:24

ldbc v0.3.0-beta8 is released.
This release adds enhancements to existing features.

Note

ldbc is pre-1.0 software and is still undergoing active development. New versions are not binary compatible with prior versions, although in most cases user code will be source compatible.
The major version will be the stable version.

What's Changed

Caution

This version is not compatible with the previous version, v0.3.0-beta7.

Bug Fix

The process of retrieving records by matching table name and column name, such as when using a JOIN, did not work as intended.

sql"SELECT c.Id, c.Name, ct.Code, ct.Name FROM city AS c JOIN country AS ct ON c.CountryCode = ct.Code"
  .query[(Int, String, String, String)]

Retrieving records by index works fine, but when records were retrieved internally by column name, they were looked up as Id rather than c.Id and therefore were not retrieved correctly.

JDBC allows the retrieval of records by the combination of table name and column name, so the modification was made accordingly.

https://github.com/mysql/mysql-connector-j/blob/release/8.x/src/main/core-impl/java/com/mysql/cj/result/DefaultColumnDefinition.java#L204-L210

Support for custom data types (Breaking change)

Previously, user-defined custom data types were supported using ResultSetReader and Parameter.

With this update, the way custom data types are supported has changed.

Encoder

Change from Parameter to Encoder for dynamic embedding in query strings.

This eliminates the need for the user to write a redundant process to receive the Effect Type, and allows for simpler implementation and use of custom data types as parameters.

enum Status(val code: Int, val name: String):
  case Active   extends Status(1, "Active")
  case InActive extends Status(2, "InActive")

Before

given Parameter[Status] with
  override def bind[F[_]](
    statement: PreparedStatement[F],
    index: Int,
    status: Status
  ): F[Unit] = statement.setInt(index, status.code)

After

given Encoder[Status] with
  override def encode(status: Status): Int = status.code

Encoder's encoding process can only return Scala types that can be handled by PreparedStatement.

The following types are supported at this time.

Scala Type -> Method called on PreparedStatement
Boolean setBoolean
Byte setByte
Short setShort
Int setInt
Long setLong
Float setFloat
Double setDouble
BigDecimal setBigDecimal
String setString
Array[Byte] setBytes
java.time.LocalDate setDate
java.time.LocalTime setTime
java.time.LocalDateTime setTimestamp
None setNull

Decoder

Change the process of getting data from ResultSet from ResultSetReader to Decoder.

This allows users to convert retrieved records into nested hierarchical data.

case class City(id: Int, name: String, countryCode: String)
case class Country(code: String, name: String)
case class CityWithCountry(city: City, country: Country)

sql"SELECT city.Id, city.Name, city.CountryCode, country.Code, country.Name FROM city JOIN country ON city.CountryCode = country.Code".query[CityWithCountry]

Using Query Builder

case class City(id: Int, name: String, countryCode: String) derives Table
case class Country(code: String, name: String) derives Table

val city = Table[City]
val country = Table[Country]

city.join(country)((city, country) => city.countryCode === country.code)
  .select((city, country) => (city.name, country.name))
  .query // (String, String)
  .to[Option]
  

city.join(country)((city, country) => city.countryCode === country.code)
  .selectAll
  .query // (City, Country)
  .to[Option]

Custom type decoding

Decoding of custom types changes from ResultSetReader to the following:

Before

enum Custom:
  case ...

given ResultSetReader[IO, Custom] =
  ResultSetReader.mapping[IO, String, Custom](str => Custom.valueOf(str))

After

enum Custom:
  case ...

given Decoder.Elem[Custom] =
  Decoder.Elem.mapping[String, Custom](str => Custom.valueOf(str))

Query (Breaking Change)

When using selectAll in a query built with the Query Builder, the type obtained from query was a Tuple consisting of all the columns of the selected table. Therefore, conversion to the model corresponding to the table had to be done explicitly using queryTo.

This was redundant: when users retrieve all columns, the application usually wants the corresponding model as the result anyway.

Therefore, a change has been made so that selections made with selectAll are converted directly into the model when query is used. If you use select to retrieve only specific columns, query still yields a Tuple type, and conversion to a specified model can still be done using queryTo.

Before

case class City(id: Int, name: String, countryCode: String) derives Table

city.selectAll.query // (Int, String, String)

After

case class City(id: Int, name: String, countryCode: String) derives Table

city.selectAll.query // City
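For comparison, a minimal sketch of the select path (assuming queryTo takes the target model as a type parameter; CityName is a hypothetical model):

case class CityName(id: Int, name: String)

city.select(c => (c.id, c.name)).query              // (Int, String)
city.select(c => (c.id, c.name)).queryTo[CityName]  // CityName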

🪲 Bug Fixes

🔧 Refactoring

⛓️ Dependency update

Full Changelog: v0.3.0-beta7...v0.3.0-beta8

v0.3.0-beta7

24 Aug 12:22

ldbc v0.3.0-beta7 is released.
This release adds enhancements to existing features.

Note

ldbc is pre-1.0 software and is still undergoing active development. New versions are not binary compatible with prior versions, although in most cases user code will be source compatible.
The major version will be the stable version.

What's Changed

This version improves the performance of ldbc's own connector.

The release also includes internal code refactoring.

💪 Enhancement

🪲 Bug Fixes

🔧 Refactoring

  • Enhancement/2024 08 modification of result set by @takapi327 in #278

⛓️ Dependency update

Full Changelog: v0.3.0-beta6...v0.3.0-beta7

v0.3.0-beta6

12 Aug 05:52

ldbc v0.3.0-beta6 is released.
This release adds enhancements to existing features.

Note

ldbc is pre-1.0 software and is still undergoing active development. New versions are not binary compatible with prior versions, although in most cases user code will be source compatible.
The major version will be the stable version.

What's Changed

This version improves the performance of ldbc's own connector.

The release also includes internal code refactoring.

💪 Enhancement

🔧 Refactoring

  • Refactor/2024 07 type change of capability flags by @takapi327 in #261
  • Refactor/2024 07 type change of server status flags by @takapi327 in #262
  • Refactor/2024 07 changing collection types for result set row packet by @takapi327 in #263
  • Refactor/2024 07 protocol read until eof function by @takapi327 in #264
  • Refactor/2024 08 change sync syntax helper function type by @takapi327 in #267
  • Refactor/2024 07 tracer and log handler as default noop by @takapi327 in #248

⛓️ Dependency update

Full Changelog: v0.3.0-beta5...v0.3.0-beta6

v0.3.0-beta5

20 Jul 15:43

ldbc v0.3.0-beta5 is released.
This release adds enhancements to existing features.

Note

ldbc is pre-1.0 software and is still undergoing active development. New versions are not binary compatible with prior versions, although in most cases user code will be source compatible.
The major version will be the stable version.

Dependency changes (Breaking change)

Caution

This change is not compatible with previous versions. It is a breaking change.

Change package name

Previously, ldbc-core was the core package of ldbc. However, the structure of ldbc has been reorganized around this core package, and the functionality it provided is now offered as optional extra modules.

0.2.x: ldbc-core -> 0.3.x: ldbc-schema

The core package still exists but will be removed in a future version.
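A minimal sketch of the corresponding sbt change (assuming the io.github.takapi327 organisation used elsewhere in these notes; the 0.2.x version string is a placeholder):

// 0.2.x
libraryDependencies += "io.github.takapi327" %% "ldbc-core" % "<0.2.x version>"

// 0.3.x
libraryDependencies += "io.github.takapi327" %% "ldbc-schema" % "0.3.0-beta5"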

Additional function: error handling combinators

All ldbc monads provide a MonadError[?[_], Throwable] through an extended Async instance. This means that Executor and the other ldbc types have the following primitive operations:

  • raiseError: raise an exception (converts a Throwable into M[A])
  • handleErrorWith: handle an exception (converts M[A] into M[A])
  • attempt: catch exceptions (converts M[A] into M[Either[Throwable, A]])

In other words, any ldbc program can catch an exception by simply adding attempt.

val program = Executor.pure[IO, Int](1)

program.attempt
// Executor[IO, Either[Throwable, Int]]

Many other operations can be derived from the attempt and raiseError combinators, as described in the Cats documentation.
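For instance, a minimal sketch of recovering from a failure (assuming raiseError is available on the Executor companion and handleErrorWith is available as syntax through the MonadError instance):

val failing: Executor[IO, Int] =
  Executor.raiseError[IO, Int](new RuntimeException("boom"))

// Fall back to a default value instead of failing the whole program
val recovered: Executor[IO, Int] =
  failing.handleErrorWith(_ => Executor.pure[IO, Int](0))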

Functional modification: Enhanced error messages

Error messages when an exception occurs in a database operation have been modified to make them more visible and understandable.

==> X ldbc.tests.LdbcSQLStringContextQueryTest.Statement should be able to retrieve BIT type records.  0.146s ldbc.connector.exception.SQLException: 
🔥  
🔥  MySQL ERROR code 1054 (42S22)
🔥  
🔥    Problem: Unknown column 'not found' in 'field list'
🔥  
🔥  The statement under consideration is
🔥  
🔥    SELECT `bit`, `not found` FROM `connector_test`.`all_types`
🔥  

What's Changed

🚀 Features

💪 Enhancement

  • Enhancement/2024 07 sql exception message enhancement by @takapi327 in #246

🧰 Maintenance

🔧 Refactoring

⛓️ Dependency update

Full Changelog: v0.3.0-beta4...v0.3.0-beta5

v0.3.0-beta4

20 Jun 12:51

ldbc v0.3.0-beta4 is released.
This release adds enhancements to existing features.

Note

ldbc is pre-1.0 software and is still undergoing active development. New versions are not binary compatible with prior versions, although in most cases user code will be source compatible.
The major version will be the stable version.

Functional modification: extension of plain query construction.

Previously, the database connection methods for plain queries specified the type to be retrieved and its shape (List or Option) together, in a single call.

This modification separates the type to be retrieved from the specification of its shape, which also allows the internal logic to be shared. This makes the plain query syntax more similar to doobie, so doobie users should be able to use it without confusion.

Before

sql"SELECT id, name, age FROM user".toList[(Long, String, Int)].readOnly(connection)
sql"SELECT id, name, age FROM user WHERE id = ${1L}".headOption[User].readOnly(connection)

After

sql"SELECT id, name, age FROM user".query[(Long, String, Int)].to[List].readOnly(connection)
sql"SELECT id, name, age FROM user WHERE id = ${1L}".query[User].to[Option].readOnly(connection)

Dependency changes (Breaking change)

Caution

This change is not compatible with previous versions. It is a breaking change.

Previously, ldbc-query-builder was included in ldbc-dsl. However, with this update, if you want to use the Query Builder, you need to set up an additional dependency on ldbc-query-builder.

libraryDependencies += "io.github.takapi327" %% "ldbc-query-builder" % "0.3.0-beta4"

What's Changed

🔧 Refactoring

⛓️ Dependency update

New Contributors

Full Changelog: v0.3.0-beta3...v0.3.0-beta4

v0.3.0-beta3

03 Jun 13:50

ldbc v0.3.0-beta3 is released.
This release adds enhancements to existing features.

Note

ldbc is pre-1.0 software and is still undergoing active development. New versions are not binary compatible with prior versions, although in most cases user code will be source compatible.
The major version will be the stable version.

Functional modification: support for connector switching

The Scala MySQL connector has added new support for switching connections between JDBC and ldbc.

The modification allows developers to flexibly select database connections using the JDBC or ldbc libraries, depending on the requirements of the project. This allows them to utilise the functionality of the different libraries and increases flexibility in setting up and operating the connection.

How to switch

First, set up common dependencies.

libraryDependencies += "io.github.takapi327" %% "ldbc-dsl" % "0.3.0-beta3"

For Cross-Platform projects (JVM, JS, and/or Native):

libraryDependencies += "io.github.takapi327" %%% "ldbc-dsl" % "0.3.0-beta3"

The dependency package used depends on whether the database connection is made via a connector using the Java API or a connector provided by ldbc.

Use jdbc connector

libraryDependencies += "io.github.takapi327" %% "jdbc-connector" % "0.3.0-beta3"

Use ldbc connector

libraryDependencies += "io.github.takapi327" %% "ldbc-connector" % "0.3.0-beta3"

For Cross-Platform projects (JVM, JS, and/or Native)

libraryDependencies += "io.github.takapi327" %%% "ldbc-connector" % "0.3.0-beta3"

Usage

The difference in usage lies in how the connection is built with jdbc versus ldbc.

Caution

ldbc is currently under active development. Please note that current functionality may therefore be deprecated or changed in the future.

jdbc connector

val ds = new com.mysql.cj.jdbc.MysqlDataSource()
ds.setServerName("127.0.0.1")
ds.setPortNumber(13306)
ds.setDatabaseName("world")
ds.setUser("ldbc")
ds.setPassword("password")

val datasource = jdbc.connector.MysqlDataSource[IO](ds)

val connection: Resource[IO, Connection[IO]] =
  Resource.make(datasource.getConnection)(_.close())

ldbc connector

val connection: Resource[IO, Connection[IO]] =
  ldbc.connector.Connection[IO](
    host     = "127.0.0.1",
    port     = 3306,
    user     = "ldbc",
    password = Some("password"),
    database = Some("ldbc"),
    ssl      = SSL.Trusted
  )

Database operations can then be carried out using the connection established by either of these methods.

val result: IO[(List[Int], Option[Int], Int)] = connection.use { conn =>
  (for
    result1 <- sql"SELECT 1".toList[Int]
    result2 <- sql"SELECT 2".headOption[Int]
    result3 <- sql"SELECT 3".unsafe[Int]
  yield (result1, result2, result3)).readOnly(conn)
}

Using the query builder

ldbc provides not only plain queries but also type-safe database connections using the query builder.

The first step is to create a schema for use by the query builder.

ldbc maintains a one-to-one mapping between Scala models and database table definitions. The mapping between the properties held by the model and the columns held by the table is done in definition order. Table definitions are very similar to the structure of Create statements. This makes the construction of table definitions intuitive for the user.

case class User(
  id: Long,
  name: String,
  age: Option[Int],
)

val table = Table[User]("user")(                     // CREATE TABLE `user` (
  column("id", BIGINT, AUTO_INCREMENT, PRIMARY_KEY), //   `id` BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  column("name", VARCHAR(255)),                      //   `name` VARCHAR(255) NOT NULL,
  column("age", INT.UNSIGNED.DEFAULT(None)),         //   `age` INT unsigned DEFAULT NULL
)              

The next step is to build a TableQuery using the schema you have created.

import ldbc.query.builder.TableQuery

val userQuery = TableQuery[IO, User](table)

Finally, you can use the query builder to create a query.

val result: IO[List[User]] = connection.use { conn =>
  userQuery.selectAll.toList[User].readOnly(conn)
  // "SELECT `id`, `name`, `age` FROM user"
}

Functional modification: extension of plain query construction.

A new feature has been added to the MySQL connector in Scala to treat static strings as query strings.

Our Scala-made MySQL connector has extended the plain query construction functionality to increase query flexibility. In particular, a new sc function has been introduced that allows static strings to be used in string interpolation. This allows static strings to be incorporated directly as part of the query.

Examples of use and details of functions

Introducing the sc function: sc stands for static context, and it can be used inside string interpolation to safely incorporate static strings, such as table names, into queries.

Assembling the query

val table = sc("table_name")
sql"SELECT * FROM $table WHERE id = ${1L}"

In the above code, table_name is embedded into the query as a static string using sc, and the SQL is generated in the form "SELECT * FROM table_name WHERE id = ?". The placeholder (?) corresponds to the dynamic parameter (in this case 1L).

This extension makes it easier to create more dynamic and secure queries. This new method is particularly useful when multiple table names or SQL fragments need to be combined dynamically from the programme.

Functional modification: provision of functions to support plain query construction.

The MySQL connector in Scala adds a new feature for building queries using dynamic values.

The enhancements allow developers to build SQL queries more flexibly and simplify the dynamic generation of SQL, especially when dealing with multiple values and conditions. Key new features include utility functions such as values, in, notIn, and, or, whereAnd, and whereOr for efficiently building combinations of values and conditional expressions.

Main new functions

  • values function: allows multiple values to be incorporated into a VALUES expression, e.g. used for batch insertion.
  • in and notIn functions: generate IN and NOT IN clauses based on a specified list of values.
  • and and or functions: combine multiple conditions with AND or OR to form logical conditional expressions.
  • whereAnd and whereOr functions: incorporate multiple conditions into a WHERE clause to enhance query filtering.

Example usage

val ids = List(1, 2, 3)  // Dynamically generated values
val query = sql"SELECT * FROM users WHERE" ++ in(sql"id", ids)
// SELECT * FROM users WHERE id IN (?, ?, ?)
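A further sketch combining the helpers (assuming whereAnd accepts a collection of condition fragments and parenthesises them; the column names are illustrative):

val name   = Some("Tokyo")
val minPop = Some(1000000)

// Keep only the conditions that are actually present
val conditions = List(
  name.map(n => sql"Name = $n"),
  minPop.map(p => sql"Population > $p")
).flatten

val filtered = sql"SELECT * FROM city" ++ whereAnd(conditions)
// SELECT * FROM city WHERE (Name = ?) AND (Population > ?)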

With this enhancement, developers using the MySQL connector from Scala can build complex queries programmatically, easily and safely. This improves the performance and maintainability of applications and allows for more diverse data manipulation.

What's Changed

🚀 Features

💪 Enhancement

🪲 Bug Fixes

🔧 Refactoring

📖 Documentation

⛓️ Dependency update

Full Changelog: v0.3.0-beta2...v0.3.0-beta3

v0.3.0-beta2

22 May 23:27

ldbc v0.3.0-beta2 is released.
This release adds enhancements to existing features.

Note

ldbc is pre-1.0 software and is still undergoing active development. New versions are not binary compatible with prior versions, although in most cases user code will be source compatible.
The major version will be the stable version.

Functional modification: extension of Connection and integration with the ldbc package.

The Scala-made MySQL connector now integrates with the ldbc package, allowing more intuitive and flexible database operations.

Our Scala-made MySQL connector has a new integration with the ldbc package to make the interaction with the database more sophisticated. This modification enables intuitive and flexible database queries that utilise Scala's functional programming patterns. Users can now interact with the database using plain queries as follows.

connection.use { conn =>
  (for
    result1 <- sql"SELECT `p1`, `p2` FROM `table1`".toList[(Int, Int)]
    result2 <- sql"SELECT `p1`, `p2` FROM `table1` WHERE `p1` = ${ 1 }".headOption[(Int, Int)]
    result3 <- sql"SELECT `p1`, `p2` FROM `table1`".unsafe[(Int, Int)]
  yield (result1, result2, result3)).run(conn)
}

Main modifications:

  • Support for functional queries: through the ldbc library, it is possible to build queries that take advantage of Scala's functional properties. This improves code readability and maintainability.
  • Code simplification: combining multiple queries can now be done directly within a single context, greatly increasing the efficiency of database operations.
  • Increased safety and flexibility: methods such as .toList, .headOption, and .unsafe allow fine-grained control over how data is retrieved.

The enhancements allow developers working with MySQL databases using Scala to achieve more advanced and intuitive database access. The integrated ldbc package significantly improves the performance and usability of the programme and makes it easier to perform complex database tasks.

Important change

Caution

A naming change has been made to the update-related API.

The updateReturningAutoGeneratedKey API, which returns the value generated by an AUTO INCREMENT column, has been renamed to returning.

Returning the value generated by AUTO INCREMENT on insert is a MySQL-specific behaviour; other RDBMSs behave differently and can return values other than those generated by AUTO INCREMENT.
The API name was changed early so that the narrowly scoped name can be extended more naturally in the future.

Before

sql"INSERT INTO `table`(`id`, `c1`) VALUES ($None, ${ "column 1" })".updateReturningAutoGeneratedKey[Long]

After

sql"INSERT INTO `table`(`id`, `c1`) VALUES ($None, ${ "column 1" })".returning[Long]

What's Changed

🚀 Features

Full Changelog: v0.3.0-beta1...v0.3.0-beta2

v0.3.0-beta1

20 May 01:32

ldbc v0.3.0-beta1 is released.
This release adds enhancements to existing features.

Note

ldbc is pre-1.0 software and is still undergoing active development. New versions are not binary compatible with prior versions, although in most cases user code will be source compatible.
The major version will be the stable version.

We have merged the JDBC-mimicking interface used by ldbc connector with the ldbc sql package.
This makes it ready to work with the other packages in ldbc.

What's Changed

🔧 Refactoring

  • Match the ldbc sql package to ldbc connector. by @takapi327 in #206
  • Refactor/2024 05 remove cora package dependency from sql package by @takapi327 in #208
  • Refactor/2024 05 Integration with sql package by @takapi327 in #209
  • Refactor/2024 05 integration savepoint by @takapi327 in #210

⛓️ Dependency update

Full Changelog: v0.3.0-alpha9...v0.3.0-beta1

v0.3.0-alpha9

12 May 14:24

ldbc v0.3.0-alpha9 is released.
This release adds enhancements to existing features.

Note

ldbc is pre-1.0 software and is still undergoing active development. New versions are not binary compatible with prior versions, although in most cases user code will be source compatible.
The major version will be the stable version.

New: CallableStatement Support

A new CallableStatement feature for calling stored procedures and functions has been added to the MySQL connector in Scala.

Our Scala-made MySQL connector introduces new CallableStatement support for more advanced interaction with the database. This feature allows direct calls to stored procedures and database functions. This is especially useful for streamlining data processing and improving application performance by executing complex logic on the database server side.

Features:

  • Stored Procedure Execution: Stored procedures defined in the database can be executed, allowing complex data manipulation and business logic to be handled at the database level.
  • Database function calls: Database functions can be used to perform specific calculations as part of a query. This reduces the load on the application and improves overall response time.
  • Support for input and output parameters: Input parameters can be passed and output parameters can be received through CallableStatement. This allows for flexible and dynamic database interactions.

Take advantage of this update to the MySQL connector in Scala to take your application's database interaction to the next level.
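A minimal sketch of how this might look (assuming the CallableStatement API mirrors JDBC, with prepareCall, setInt, registerOutParameter, execute, and getInt; the procedure name and parameter indices are hypothetical):

connection.use { conn =>
  for
    stmt <- conn.prepareCall("CALL add_numbers(?, ?, ?)")
    _    <- stmt.setInt(1, 10)                                    // first IN parameter
    _    <- stmt.setInt(2, 32)                                    // second IN parameter
    _    <- stmt.registerOutParameter(3, java.sql.Types.INTEGER)  // OUT parameter
    _    <- stmt.execute()
    sum  <- stmt.getInt(3)                                        // read the OUT value
  yield sum
}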

What's Changed

🚀 Features

⛓️ Dependency update

Full Changelog: v0.3.0-alpha8...v0.3.0-alpha9