Draft of Keras pickle RFC #1

Merged: 26 commits, Sep 21, 2020

Changes from 1 commit
rfcs/20200902-pickle-for-keras.md (13 changes: 5 additions & 8 deletions)
@@ -150,11 +150,9 @@ assert m1 == m2 # TODO: or some other check
For `tf.keras.Model`, we can use `SavedModel` as the backend for `__reduce_ex__`:

``` python
# tensorflow/python/.../training.py
from tensorflow.keras.models import load_model

class Model:
    ...

class NewModel(Model):
```

Owner (on the `class NewModel(Model):` line): I'm curious as to why the name change? I was envisioning these code blocks as representing "here's pseudocode of what this would look like if implemented in TF" and not necessarily "here's how users can create a picklable `Model`", which is the first thought that came to mind when I saw `NewModel(Model)`.

``` python
    def __reduce_ex__(self, protocol):
        # Serialize via SavedModel into TensorFlow's in-memory "ram://" filesystem,
        # then read the saved files back as bytes for pickling.
        self.save(f"ram://tmp/saving/{id(self)}")
        b = tf.io.gfile.read_folder(f"ram://tmp/saving/{id(self)}")
        # ... @@ -166,15 +164,14 @@ class Model: (intervening lines collapsed in the diff view)
        return load_model(temp_ram_location)
```
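For context, this is roughly what the proposal would buy users once `__reduce_ex__` is wired up as above. It is an illustrative sketch, not part of the RFC text; the toy model and the prediction-based equality check are stand-ins for the `assert m1 == m2` style check mentioned earlier.

``` python
# Illustrative usage, assuming the SavedModel-backed __reduce_ex__ sketched above.
import copy
import pickle

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

# Both of these round-trips would go through __reduce_ex__.
restored = pickle.loads(pickle.dumps(model))
cloned = copy.deepcopy(model)

# The restored model keeps its architecture, weights, and compiled state.
x = np.zeros((2, 4), dtype="float32")
np.testing.assert_allclose(model.predict(x), restored.predict(x))
```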

By wrapping the pickled object within a `NumPy` array, pickling will support
pickle protocol 5 for zero-copy pickling. This provides an immediate
performance improvement for many use cases. This almost exactly mirrors the
PyTorch implementation of pickle support in [pytorch#9184],
as mentioned in "[Pickle isn't slow, it's a protocol]."
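To make the zero-copy point concrete, here is a minimal sketch (not the RFC's API; `saved_model_bytes` is a stand-in for the serialized SavedModel payload) of how a `NumPy`-wrapped buffer participates in protocol 5's out-of-band transfer:

``` python
# Minimal sketch: wrapping serialized bytes in a NumPy array lets pickle
# protocol 5 hand the buffer over out-of-band instead of copying it into
# the pickle byte stream.
import pickle

import numpy as np

saved_model_bytes = b"..."  # stand-in for the serialized SavedModel payload
payload = np.frombuffer(saved_model_bytes, dtype=np.uint8)

buffers = []
data = pickle.dumps(payload, protocol=5, buffer_callback=buffers.append)
restored = pickle.loads(data, buffers=buffers)

assert restored.tobytes() == saved_model_bytes
```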

Small augmentations to TensorFlow's `io` module would be required, as discussed in [tensorflow#39609].
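As a rough idea of the kind of augmentation meant here, a `read_folder`-style helper could be assembled from `tf.io.gfile` primitives that already exist (`walk`, `GFile`); this is a hypothetical sketch, not the API proposed in [tensorflow#39609].

``` python
# Hypothetical helper illustrating what a read_folder-style augmentation
# could do: collect every file under a (possibly "ram://") directory.
import os

import tensorflow as tf


def read_folder(path):
    """Return a dict mapping relative file paths to their raw contents."""
    contents = {}
    for dirpath, _, filenames in tf.io.gfile.walk(path):
        for name in filenames:
            full_path = os.path.join(dirpath, name)
            with tf.io.gfile.GFile(full_path, "rb") as f:
                contents[os.path.relpath(full_path, path)] = f.read()
    return contents
```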

[pytorch#9184]: https://github.com/pytorch/pytorch/pull/9184

### Alternatives Considered