🐛 Describe the bug

Currently (https://github.com/pytorch/data/blob/main/torchdata/nodes/map.py#L437) in 0.10.1, the ParallelMapper class does not reset the source node in its reset() method:
def reset(self, initial_state: Optional[Dict[str, Any]] = None):
    super().reset(initial_state)
    if initial_state is not None:
        self._it.reset(initial_state[self.IT_STATE_KEY])
    else:
        self._it.reset()
Is this intended behavior? My understanding is that any node that has a source node is supposed to reset that source node, so that resetting any node recursively resets every node back to the original source. Please let me know if I'm misunderstanding.
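To illustrate, here is a minimal sketch of the convention I have in mind, using a hypothetical two-node chain (SourceNode and MapperNode are invented names for this example, not actual torchdata classes):

from typing import Any, Callable, Dict, Optional


class SourceNode:
    """Hypothetical leaf node: yields items from a list."""

    def __init__(self, items):
        self.items = list(items)
        self.idx = 0

    def reset(self, initial_state: Optional[Dict[str, Any]] = None):
        # A leaf node only has its own position to restore or clear.
        self.idx = initial_state["idx"] if initial_state is not None else 0

    def next(self):
        item = self.items[self.idx]
        self.idx += 1
        return item


class MapperNode:
    """Hypothetical wrapper node: applies fn to items from its source."""

    SOURCE_STATE_KEY = "source"

    def __init__(self, source: SourceNode, fn: Callable):
        self.source = source
        self.fn = fn

    def reset(self, initial_state: Optional[Dict[str, Any]] = None):
        # The convention described above: resetting a node also resets
        # its source, so the reset propagates down the whole chain.
        if initial_state is not None:
            self.source.reset(initial_state[self.SOURCE_STATE_KEY])
        else:
            self.source.reset()

    def next(self):
        return self.fn(self.source.next())


node = MapperNode(SourceNode([1, 2, 3]), lambda x: x * 2)
print(node.next())  # 2
node.reset()        # propagates to SourceNode, so iteration restarts
print(node.next())  # 2 again, because the source position was cleared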
Versions
(This has nothing to do with any other packages.)
torchdata version: 0.10.1
You're right: when a node's .reset() is called, it should reset its own source node(s), and I believe this is working correctly. ParallelMapper.reset() calls self._it.reset(), so it should reset the source node as well.
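A quick way to check this empirically, as a sketch assuming the 0.10 torchdata.nodes API (IterableWrapper, ParallelMapper with source/map_fn/num_workers/method arguments, and calling reset()/next() on the node directly):

from torchdata.nodes import IterableWrapper, ParallelMapper

source = IterableWrapper(range(5))
node = ParallelMapper(source, map_fn=lambda x: x * 2, num_workers=1, method="thread")

node.reset()
first_pass = [node.next() for _ in range(5)]

node.reset()  # should propagate through self._it down to the source
second_pass = [node.next() for _ in range(5)]

# If the reset reaches the source node, both passes start from the beginning.
assert first_pass == second_pass == [0, 2, 4, 6, 8]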