History log of /external/tensorflow/tensorflow/python/training/checkpointable.py
Revision Date Author Comments
f5e2ef8efd7f3f204d071a48200d9c2b548e2e3e 17-Feb-2018 Allen Lavoie <allenl@google.com> Checkpointable: Don't run ops automatically when graph building.

This is a prerequisite to moving toward a Saver-like model when graph building. We no longer touch initializers when graph building (eager execution still needs that), and restore ops just get queued up and returned.
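A minimal sketch of the queue-and-return pattern this describes, using a stand-in object (restore() and its return value here are assumptions for illustration, not the actual checkpointable.py API):

    import tensorflow as tf

    class _SketchCheckpointable(object):
        # Stand-in: queues restore ops instead of running them when graph building.
        def restore(self, save_path):
            # The real code would build ops that read from `save_path`; a no-op
            # stands in here so the sketch stays runnable.
            return [tf.no_op(name="queued_restore")]

    restore_ops = _SketchCheckpointable().restore("/tmp/ckpt")
    with tf.Session() as session:
        session.run(restore_ops)  # the caller decides when restores run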

Since initializers are left alone when graph building, there is a new special case for slot variables that needs to be handled. This is the third(!) queue for deferred slot restorations ((1) variable -> slot, (2) optimizer -> slot, (3) (optimizer, variable) -> slot; sketched below), and it should be the last one I need (it's a hypergraph with 3-tuple edges).
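For illustration, a hedged sketch of what the three deferred-restoration queues might look like (attribute names are hypothetical, not the actual implementation):

    class _DeferredSlotRestorations(object):
        # Each queue maps a not-yet-created object (or pair of objects) to the
        # restore work that should fire once the object(s) exist.
        def __init__(self):
            # (1) variable -> slot: optimizer restored, variable not yet created.
            self.by_variable = {}
            # (2) optimizer -> slot: variable restored, optimizer not yet created.
            self.by_optimizer = {}
            # (3) (optimizer, variable) -> slot: neither object existed at restore
            # time; the (optimizer, variable, slot) 3-tuple is the hypergraph edge.
            self.by_optimizer_and_variable = {}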

The plan after this is to switch over to tf.train.Saver's existing restore op creation infrastructure, which will handle any SaveableObjects. There will also be a few CLs for making graph usage prettier, and eventually allowing eager/graph agnostic save/restore.

PiperOrigin-RevId: 186059387
/external/tensorflow/tensorflow/python/training/checkpointable.py
2d6a550ab4e54de12f03dd04a892562dd85425db 16-Feb-2018 Allen Lavoie <allenl@google.com> Remove the __setattr__ override for Variables

The override was slowing down the creation of _UnreadVariable objects. This CL adds CheckpointableBase without the __setattr__ override.

It's tempting to just override __setattr__ in Variables to try to make it faster, but the existing override is already just an isinstance check. Removing the override entirely seems to be the cleanest option.
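For illustration, a simplified sketch of the split between the two base classes (method and attribute names are assumptions, not the real code):

    class CheckpointableBase(object):
        # No __setattr__ override: attribute assignment stays at native speed,
        # which matters for classes that create many small objects.
        def _track_checkpointable(self, value, name):
            # Record a named dependency explicitly (sketch only).
            self.__dict__.setdefault("_dependencies", {})[name] = value

    class Checkpointable(CheckpointableBase):
        # Convenience subclass: tracks dependencies automatically, at the cost
        # of an isinstance check on every attribute assignment.
        def __setattr__(self, name, value):
            if isinstance(value, CheckpointableBase):
                self._track_checkpointable(value, name)
            super(Checkpointable, self).__setattr__(name, value)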

PiperOrigin-RevId: 186041147
/external/tensorflow/tensorflow/python/training/checkpointable.py
8745e3426713068e7061b3aae368ebb4db8dc2cc 15-Feb-2018 Allen Lavoie <allenl@google.com> Object-based saving: Switch to "everything is Checkpointable"

The only sane way to use/test this is to have Variables be Checkpointable, so this CL moves the base class to core. No public methods are exposed, and I've attempted not to throw any errors from __setattr__.

Allows dynamic dependencies (tracking after restore) and restoring variables on assignment to a Checkpointable object (sketched below), and includes the protocol buffer modifications necessary for saving information with each object.
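A minimal sketch of the restore-on-assignment idea (names are hypothetical, and the real code restores through checkpoint positions rather than raw values):

    class _RestoreOnAssignment(object):
        # Saved values waiting for their attributes to be created.
        def __init__(self, deferred):
            object.__setattr__(self, "_deferred", dict(deferred))

        def __setattr__(self, name, value):
            object.__setattr__(self, name, value)
            saved = self._deferred.pop(name, None)
            if saved is not None and hasattr(value, "assign"):
                # Restore fires as soon as the dependency exists.
                value.assign(saved)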

There are still some prominent TODOs:
- Stop modifying the graph after the first save/restore (likely cache ops in Checkpointable objects)
- Add some overridable methods for saving Python strings; when graph building these should be fed when restore() is called rather than embedded as constants in the graph
- Work on the initialization story for graph building. Currently the unit tests rely on collections for this.
- Support for more objects, move the prototype modifications in checkpointable_test to core.

The diff is larger than I was hoping (mostly deletions and unit tests); it could be reduced a bit (or at least "lines added" converted to "lines deleted") by diffbasing on cl/180950921, which was my first attempt at dynamic dependencies. This CL is more of a rewrite than a modification, so sending that one out seems a bit silly. The unit tests are still good, though.

PiperOrigin-RevId: 185893387
/external/tensorflow/tensorflow/python/training/checkpointable.py