Version of embedding_lookup that avoids duplicate lookups.
tfra.dynamic_embedding.embedding_lookup_unique(
    params,
    ids,
    partition_strategy=None,
    name=None,
    validate_indices=None,
    max_norm=None,
    return_trainable=False
)
This can save communication when ids contains repeated values. It has the same interface as embedding_lookup.
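The saving comes from deduplicating the ids before the lookup and scattering the results back afterwards. Below is a minimal NumPy sketch of that deduplicate-then-gather idea; it is an illustration only, not the library's implementation, which operates on a dynamic_embedding.Variable rather than a dense array.

```python
import numpy as np

def embedding_lookup_unique_sketch(params, ids):
    """Sketch of the unique-lookup trick on a dense params array.

    params: array of shape [num_keys, dim].
    ids: integer array of any shape.
    Returns an array of shape ids.shape + (dim,).
    """
    ids = np.asarray(ids)
    # Look up each distinct id only once...
    unique_ids, inverse = np.unique(ids, return_inverse=True)
    unique_embeddings = params[unique_ids]
    # ...then scatter the results back to the original positions.
    return unique_embeddings[inverse].reshape(ids.shape + params.shape[1:])

params = np.arange(12.0).reshape(6, 2)   # 6 keys, dim = 2
ids = np.array([[1, 3], [1, 5]])         # id 1 is repeated
out = embedding_lookup_unique_sketch(params, ids)
```

With repeated ids, only one row per distinct id is fetched from `params`; in the distributed setting this is what reduces communication.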
Args:
params: A dynamic_embedding.Variable instance.
ids: A tensor of any shape, with the same dtype as params.key_dtype.
partition_strategy: Not used; kept for API compatibility with nn.embedding_lookup.
name: A name for the operation. The name is optional in graph mode and required in eager mode.
validate_indices: Not used; kept for compatibility with nn.embedding_lookup.
max_norm: If not None, each embedding is clipped if its l2-norm is larger than this value.
return_trainable: Optional. If True, also return a TrainableWrapper.

Returns:
A tensor with shape [shape of ids] + [dim], where dim equals the value dim of params, containing the values from params for the keys in ids.
trainable_wrap: A TrainableWrapper object used to fill the optimizer's var_list. Only provided if return_trainable is True.