
tfra.dynamic_embedding.embedding_lookup_unique

Version of embedding_lookup that avoids duplicate lookups.

```python
tfra.dynamic_embedding.embedding_lookup_unique(
    params,
    ids,
    partition_strategy=None,
    name=None,
    validate_indices=None,
    max_norm=None,
    return_trainable=False
)
```

This can save communication when ids contains repeated values. It has the same interface as embedding_lookup.
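The deduplication trick can be sketched in plain NumPy: look up each distinct id once, then scatter the results back to the original positions. This is an illustrative sketch, not the library's implementation; the `table` dict stands in for the `dynamic_embedding.Variable`, and the function name is hypothetical.

```python
import numpy as np

def embedding_lookup_unique_sketch(table, ids):
    """Sketch of the dedup idea behind embedding_lookup_unique.

    `table` maps key -> 1-D embedding vector (a stand-in for the
    dynamic_embedding.Variable); all names here are illustrative.
    """
    ids = np.asarray(ids)
    flat = ids.reshape(-1)
    # Find the distinct keys and, for each original position,
    # the index of its key within the distinct set.
    unique_ids, inverse = np.unique(flat, return_inverse=True)
    # Only len(unique_ids) lookups hit the (possibly remote) table.
    unique_embeddings = np.stack([table[k] for k in unique_ids])
    # Gather the deduplicated results back to the original positions.
    out = unique_embeddings[inverse]
    dim = unique_embeddings.shape[-1]
    # Output shape is [shape of ids] + [dim], matching the real API.
    return out.reshape(ids.shape + (dim,))
```

With repeated ids such as `[[1, 2], [2, 1]]`, only two table lookups occur, yet every position still receives its embedding.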

Args:

  • params: A dynamic_embedding.Variable instance.
  • ids: A tensor of any shape whose dtype matches params.key_dtype.
  • partition_strategy: Not used; kept for API compatibility with tf.nn.embedding_lookup.
  • name: A name for the operation. The name is optional in graph mode and required in eager mode.
  • validate_indices: Not used; kept for API compatibility with tf.nn.embedding_lookup.
  • max_norm: If not None, each embedding is clipped if its l2-norm is larger than this value.
  • return_trainable: Optional. If True, also return the TrainableWrapper.

Returns:

A tensor with shape [shape of ids] + [dim], where dim equals the value dimension of params, containing the values from the params tensor(s) for the keys in ids.

  • trainable_wrap: A TrainableWrapper object used to fill the optimizer's var_list. Only provided if return_trainable is True.