Releases: scalamolecule/molecule-old
Adapting to `sbt-molecule` 0.8.1
Adapted to `sbt-molecule` 0.8.1, which no longer adds MapK attributes to the schema creation file.
Aggregates for card-many attributes
Card-many attribute values can now be aggregated.
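A minimal sketch of the kind of query this enables, assuming a hypothetical Person namespace with a card-many `hobbies` attribute:

```scala
// Count the number of hobby values per person (aggregate applied to a card-many attribute)
Person.name.hobbies(count).get === List(
  ("Ben", 3),
  ("Liz", 2)
)
```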
Bug fixes
Bugfixes:
- Card-many ref attributes now have the same api as other card-many attributes (Sets of values can now be applied to both types, as sketched below).
- Variable resolution on fulltext searches added.
- Text attributes can now handle text input with quotation marks.
- Touching entity ids with `Entity` now correctly handles all types.
Improvement:
- For a more direct query evaluation, applying a single value to an attribute is now `ground`ed to a variable instead of using a comparison.
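A minimal sketch of the unified card-many api, using assumed names from a hypothetical schema (a card-many value attribute `ints`, a card-many ref attribute `refs1`, and `refId1`/`refId2` as existing entity ids):

```scala
// Applying a Set of values now works the same way for both kinds of card-many attributes
Ns.str("a").ints(Set(1, 2, 3)).save          // card-many value attribute
Ns.str("b").refs1(Set(refId1, refId2)).save  // card-many ref attribute (Set of entity ids)
```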
Meta becomes Generic
Semantic cleanup.
Re-aligning Meta semantics
Minor updates improving clarity and the semantic understanding of internal code.
Synchronized internal naming scheme + bug fixes
In order to synchronize the Scala molecule boilerplate code and its internal representation,
namespace names are now also capitalized in the molecule model/query/Datalog. This
applies when no custom partitions are defined (and namespaces are not partition-prefixed):
// Namespace names are now capitalized in the model
m(Community.name)._model === Model(List(
Atom("Community", "name", "String", 1, VarValue) // "Community" now uppercase
))
// Uppercase namespace names are also retrieved when querying the schema
Schema.a.part.ns.nsFull.attr.get === List((
":Community/name", // namespace name now uppercase
"db.part/user", // default partition (not prefixed to namespaces)
"Community", // now uppercase
"Community", // now uppercase (has no partition prefix when no custom partition is defined)
"name"
))
// Produced Datalog queries will also use the uppercase namespace name
[:find ?b
:where [?a :Community/name ?b]]
This change takes away the need to lower/capitalize namespace names back and forth
between the source code representation and molecule model/query/datalog representations.
It also makes it easier to distinguish between namespace/attribute names in internal
molecule representations.
Unaffected with custom partitions
As before, custom partition-prefixed namespaces are unaffected:
// Custom partition-prefixed namespace names are unchanged in the model
m(accounting_Invoice.invoiceLine)._model === Model(List(
Atom("accounting_Invoice", "invoiceLine", "ref", 1, VarValue)
))
// Querying the schema
Schema.a.part.ns.nsFull.attr.get === List((
":accounting_Invoice/invoiceLine",
"accounting", // custom partition (always lowercase)
"Invoice", // namespace now uppercase
"accounting_Invoice", // partition-prefixed namespace
"invoiceLine"
))
// Datalog query
[:find ?b
:where [?a :accounting_Invoice/invoiceLine ?b]]
Working with non-molecule Datomic databases
For the end user, internal uppercase namespace names have no impact unless you are working
with externally defined Datomic databases or data sets that may have lowercase namespace
names defined.
The sbt-plugin (as of version 0.8) now generates two additional schema transaction files
that can be transacted with the external lowercase database so that you can use your
uppercase Molecule code with it:
Molecule schema (uppercase) + external data (lowercase)
When importing external data (example) from a database with lowercase namespace names,
you can transact lowercase attribute aliases (example) so that your uppercase Molecule code
can recognize the imported lowercase data:
conn.datomicConn.transact(SchemaUpperToLower.namespaces)
External schema (lowercase) + external data (lowercase)
If both the external schema and data are created with lowercase namespace names, then you can
transact uppercase attribute aliases with the live database so that it will recognize your
uppercase Molecule code (example):
conn.datomicConn.transact(MBrainzSchemaLowerToUpper.namespaces)
Getters using fast ListBuffer
Minor update to getter methods returning `List`s of data. They now use a `ListBuffer`
internally to build the type-casted data and then convert it to an immutable `List`.
Appending to the buffer takes constant time per element and the final conversion to `List`
is constant time, so this is about as fast as it can be.
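The pattern is roughly the following (an illustrative sketch of the technique, not Molecule's actual internals):

```scala
import scala.collection.mutable.ListBuffer

// Each raw row is type-casted and appended to a ListBuffer (constant time per element);
// the buffer is then converted to an immutable List in constant time.
def castRows[Raw, Tpl](rawRows: Iterable[Raw])(castRow: Raw => Tpl): List[Tpl] = {
  val buf = ListBuffer.empty[Tpl]
  rawRows.foreach(row => buf += castRow(row))
  buf.toList // O(1) conversion
}
```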
Minor fixes
Datoms, Indexes, Log, Schema and debugging
Seven generic APIs have been introduced/streamlined to access data and schema generically. Some examples:
Datoms
Entity id of Ben with the generic datom attribute `e` on a custom molecule...
Person.e.name.get.head === (benEntityId, "Ben")
EAVT index
Attributes/values of entity id `e1`...
EAVT(e1).e.a.v.t.get === List(
(e1, ":person/name", "Ben", t1),
(e1, ":person/age", 42, t2),
(e1, ":golf/score", 5.7, t2)
)
AVET index
Values and entity associations for attribute `:person/age`...
AVET(":person/age").v.e.t.get === List(
(42, e1, t2),
(37, e2, t5)
(14, e3, t7),
)
Datomic's indexRange API is also implemented...
// Entities and transactions of age attribute with values between 14 and 37
AVET.range(":person/age", Some(14), Some(37)).v.e.t.get === List(
(14, e4, t7) // 14 is included in value range
)
AEVT index
Entity ids, values and transaction t's of attribute `:person/name`:
AEVT(":person/name").e.v.t.get === List(
(e1, "Ben", t2),
(e2, "Liz", t5)
)
VAET index
Reverse index for ref attributes...
// Say we have 3 entities pointing to one entity:
Release.e.name.Artists.e.name.get === List(
(r1, "Abbey Road", a1, "The Beatles"),
(r2, "Magical Mystery Tour", a1, "The Beatles"),
(r3, "Let it be", a1, "The Beatles"),
)
// .. then we can get the reverse relationships with the VAET Index:
VAET(a1).v.a.e.get === List(
(a1, ":release/artists", r1),
(a1, ":release/artists", r2),
(a1, ":release/artists", r3),
)
Log index
Access to datoms index sorted by transaction/time:
// Data from transaction t1 (inclusive) until t4 (exclusive)
Log(Some(t1), Some(t4)).t.e.a.v.op.get === List(
(t1, e1, ":person/name", "Ben", true),
(t1, e1, ":person/age", 41, true),
(t2, e2, ":person/name", "Liz", true),
(t2, e2, ":person/age", 37, true),
(t3, e1, ":person/age", 41, false),
(t3, e1, ":person/age", 42, true)
)
Schema
Programmatically explore your `Schema` structure...
// Datomic type and cardinality of attributes
Schema.a.tpe.card.get === List (
(":sales_customer/name", "string", "one"),
(":accounting_invoice/invoiceLine", "ref", "many")
)
Debugging
Various debugging methods to explore molecule queries and transactional commands.
Async API + tx functions
Sync/Async APIs
All getter methods now have an asynchronous equivalent method that returns a Scala Future with the data:
- `get` / `getAsync` - Default List of typed tuples for convenient access to smaller data sets.
- `getArray` / `getAsyncArray` - Array of typed tuples for fast retrieval and traversing of large data sets.
- `getIterable` / `getAsyncIterable` - Iterable of typed tuples for lazy evaluation of data.
- `getJson` / `getAsyncJson` - Json formatted result data.
- `getRaw` / `getAsyncRaw` - Raw untyped data from Datomic.
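For example (a minimal sketch assuming a hypothetical Person namespace and an implicit connection in scope):

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// getAsync returns a Future with the same typed tuples as the synchronous get
val persons: Future[List[(String, Int)]] = Person.name.age.getAsync

persons.foreach(_.foreach { case (name, age) =>
  println(s"$name is $age")
})
```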
All transactional operations on molecules now similarly have async implementations returning a Future with
a `TxReport` containing data about the transaction:
- `save` / `saveAsync`
- `insert` / `insertAsync`
- `update` / `updateAsync`
- `retract` / `retractAsync`
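For example (a minimal sketch using the same hypothetical Person namespace):

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// saveAsync returns a Future[TxReport] with data about the transaction
val futureTx: Future[TxReport] = Person.name("Ben").age(42).saveAsync

futureTx.foreach(txReport => println(s"Created entity ids: ${txReport.eids}"))
```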
Tx functions
Molecule now implements typed transaction functions.
Within the tx function you have access to the transaction database value so that you can ensure any
synchronization constraints before returning the resulting tx statements to be transacted. To abort
the whole transaction if a constraint is not met, simply throw an exception. Either all tx statements
will transact successfully or none will, thereby ensuring atomicity of the transaction.
Any complexity of logic can be performed within a tx function as long as no side effects are produced
(like trying to update the database within the tx method body).
Tx function definitions
Tx functions in Datomic are untyped (they take arguments of type `Object`). But Molecule allows you to
define typed tx methods inside a `@TxFns`-annotated object. Equivalent "twin" functions with the shape
that Datomic expects are automatically created and saved in the Datomic database transparently for you.
@TxFns
object myTxFns {
// Constraint check before multiple updates
def transfer(from: Long, to: Long, amount: Int)(implicit conn: Conn): Seq[Seq[Statement]] = {
// Validate sufficient funds in from-account
val curFromBalance = Ns(from).int.get.headOption.getOrElse(0)
if (curFromBalance < amount)
// Throw exception to abort the whole transaction
throw new TxFnException(s"Can't transfer $amount from account $from having a balance of only $curFromBalance.")
// Calculate new balances
val newFromBalance = curFromBalance - amount
val newToBalance = Ns(to).int.get.headOption.getOrElse(0) + amount
// Update accounts
Ns(from).int(newFromBalance).getUpdateTx ++ Ns(to).int(newToBalance).getUpdateTx
}
}
Tx functions are invoked in application code with the `transact` or `transactAsync` method:
transact(transfer(fromAccount, toAccount, 20))
`transact` (or `transactAsync`) is a macro that analyzes the tx function signature to be able to
invoke its generated twin method within Datomic.
Bundled transactions
If the transactional logic is not dependent on access to the transaction database value,
multiple "bundled" tx statements can now be created by adding molecule tx statements to
one of the bundling `transact` or `transactAsync` methods:
transact(
// retract
e1.getRetractTx,
// save
Ns.int(4).getSaveTx,
// insert
Ns.int.getInsertTx(List(5, 6)),
// update
Ns(e2).int(20).getUpdateTx
)
Tx statement getters for the molecule operations are used to get the tx statements to be transacted
in one transaction. As with tx functions, either all tx statements transact atomically or none will
if there is a transactional error.
Composite syntax
Composite molecules are now tied together with `+` instead of `~`.
m(Ref2.int2 + Ns.int).get.sorted === Seq(
(1, 11),
(2, 22)
)
This change was made to avoid collision with the upcoming splice operator `~` in the next
major version of Scala/Dotty (see MACROS: THE PLAN FOR SCALA 3).
Composite inserts previously had their own special insert method but now share syntax
with other inserts:
val List(e1, e2) = Ref2.int2 + Ns.int insert Seq(
// Two rows of data
(1, 11),
(2, 22)
) eids