Releases: tensorflow/gnn
v1.0.3
Release 1.0 is the first with a stable public API.
What's Changed in r1.0
- Overall
  - Use with incompatible Keras v3 raises a clear error.
    - As of release 1.0.3, the error refers to the new Keras version guide and explains how to get Keras v2 with TF 2.16+ via `TF_USE_LEGACY_KERAS=1`.
    - Releases 1.0.0 to 1.0.2 had a pip package requirement for TF `<2.16` but could be made to work the same way.
  - Minimum supported TF/Keras version moved to `>=2.12`.
  - Importing the library no longer leaks private module names.
  - All parts of the `GraphSchema` protobuf are now exposed under `tfgnn.proto.*`.
  - Model saving now clearly distinguishes export for inference (pure TF, fully supported) from miscellaneous ways of saving for model reuse.
  - Numerous small bug fixes.
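With TF 2.16+, the Keras v2 behavior can be requested by installing the `tf-keras` package and setting the environment variable before TensorFlow is first imported. A minimal sketch (the commented-out import only marks where the switch takes effect; it assumes `tf-keras` is installed):

```python
import os

# Must be set before TensorFlow is first imported in the process,
# e.g. at the very top of the training script.
# With TF 2.16+, this also requires `pip install tf-keras`.
os.environ["TF_USE_LEGACY_KERAS"] = "1"

# import tensorflow as tf  # tf.keras now resolves to the legacy Keras v2
```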
- Subgraph sampling: major upgrade
  - New and unified sampler for in-memory and Beam-based subgraph sampling.
  - Module `tfgnn.experimental.in_memory` is removed in favor of the new sampler.
  - New console script `tfgnn_sampler` replaces the old `tfgnn_graph_sampler`.
- GraphTensor
  - Most `tfgnn.*` functions on GraphTensor now work in Keras' Functional API, including the factory methods `GraphTensor.from_pieces(...)` etc.
  - New static checks for GraphTensor field shapes; opt out with `tfgnn.disable_graph_tensor_validation()`.
  - New runtime checks for GraphTensor field shapes, sizes, and index ranges; opt in with `tfgnn.enable_graph_tensor_validation_at_runtime()`.
  - GraphTensor maintains `.row_splits_dtype` separately from `.indices_dtype`.
  - The `GraphSchema` and the I/O functions for `tf.Example` now support all non-quantized, non-complex floating-point and integer types as well as `bool` and `string`.
  - Added convenience wrapper `tfgnn.pool_neighbors_to_node()`.
  - Misc fixes to `tfgnn.random_graph_tensor()`, which now respects component boundaries.
- Runner
  - New tasks for link prediction and node classification/regression based on structured readout.
  - Now comes with API docs.
- Models collection
  - `models/contrastive_losses` gets multiple extensions, including a triplet loss and API docs.
  - `models/multi_head_attention` replaces sigmoid with elu+1 in trained scaling.
  - Bug fixes for mixed precision.
Full Changelog: v0.6.1...v1.0.0
What's Changed in v1.0.3 over v1.0.2
- Support TF 2.16+ via `TF_USE_LEGACY_KERAS=1`: updated setup.py, docs, and error messages (605b552).
Full Changelog: v1.0.2...v1.0.3
What's Changed in v1.0.2 over v1.0.1
- Bugfixes for use with TF 2.14 and 2.15 in case `tf_keras` is installed but not used as `tf.keras` (ffa453f) (e1d9210).
Full Changelog: v1.0.1...v1.0.2
What's Changed in v1.0.1 over v1.0.0
- Bugfix for the regression tasks `runner.GraphMean*Error`: the `reduce_type` is again passed through correctly (19c10f2).
Full Changelog: v1.0.0...v1.0.1
v1.0.3rc0
Release candidate for v1.0.3; release notes are identical to v1.0.3 above.
Full Changelog: v1.0.2...v1.0.3rc0
v1.0.2
Release 1.0 is the first with a stable public API. The "What's Changed in r1.0" notes repeat those under v1.0.3 above (stating the then-current version requirement `>=2.12,<2.16` instead of the 1.0.3 Keras v2 instructions).
What's Changed in v1.0.2 over v1.0.1
- Bugfixes for use with TF 2.14 and 2.15 in case `tf_keras` is installed but not used as `tf.keras` (ffa453f) (e1d9210).
Full Changelog: v1.0.1...v1.0.2
v1.0.2rc1
Release candidate for v1.0.2; release notes are identical to v1.0.2 above.
Full Changelog: v1.0.1...v1.0.2rc1
v1.0.2rc0
Release candidate for v1.0.2. The "What's Changed in r1.0" notes repeat those under v1.0.3 above.
What's Changed in v1.0.2 over v1.0.1
- Bugfix for use with TF 2.14 and 2.15 in case `tf_keras` is installed but not used as `tf.keras` (ffa453f).
Full Changelog: v1.0.1...v1.0.2rc0
v1.0.1
Release 1.0 is the first with a stable public API. The "What's Changed in r1.0" notes repeat those under v1.0.3 above.
What's Changed in v1.0.1 over v1.0.0
- Bugfix for the regression tasks `runner.GraphMean*Error`: the `reduce_type` is again passed through correctly (19c10f2).
Full Changelog: v1.0.0...v1.0.1
v1.0.0
First release with a stable public API.
What's Changed
The release notes repeat the r1.0 notes shown under v1.0.3 above.
Full Changelog: v0.6.1...v1.0.0
v1.0.0rc0
v1.0.0.dev2
Early developmental release of tensorflow-gnn 1.0.0 code; docs still unfinished.