10eb6fc7a3c6b9825a1c970a65576afd8982526b |
|
16-Feb-2017 |
A. Unique TensorFlower <gardener@tensorflow.org> |
Factor parsing the args in optimize_for_inference to a separate function, to allow reusing the binary. Change: 147666737
/external/tensorflow/tensorflow/python/tools/optimize_for_inference.py
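The commit above factors flag parsing out of the script body so another binary can reuse it, and the later argparse-conversion commits confine FLAGS to main(). A minimal sketch of that pattern, assuming simplified flags (`--input`/`--output`/`--frozen_graph` match the tool's flag names, but the parser here is a hypothetical reduction, not the real one):

```python
import argparse
import sys


def parse_args(argv):
    """Build and run the parser separately from main(), so another binary can reuse it."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--input", type=str, default="", help="Input graph file.")
    parser.add_argument("--output", type=str, default="", help="Output graph file.")
    parser.add_argument("--frozen_graph", default=True,
                        type=lambda v: str(v).lower() == "true",
                        help="Whether the input is a frozen binary GraphDef.")
    # parse_known_args lets a wrapping binary keep extra flags of its own.
    return parser.parse_known_args(argv)


def main(argv=None):
    # Flag values stay local to main() instead of living in module-level globals.
    flags, unparsed = parse_args(sys.argv[1:] if argv is None else argv)
    print(flags.input, "->", flags.output)


if __name__ == "__main__":
    main()
```

Keeping `parse_args` separate means a wrapper can call it, inspect the parsed flags, and forward `unparsed` to its own parser.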
|
39eca117a335918e5314f3593022568a0454ba31 |
|
25-Jan-2017 |
Rohan Jain <rohanj@google.com> |
Adding support for 'b' in mode for FileIO. Change: 145590579
/external/tensorflow/tensorflow/python/tools/optimize_for_inference.py
|
2b351f224df81121cdcf8131d84be0e3f43d407c |
|
06-Jan-2017 |
Vijay Vasudevan <vrv@google.com> |
Convert tf.flags usage to argparse. Move use of FLAGS globals into main() only. Change: 143799731
/external/tensorflow/tensorflow/python/tools/optimize_for_inference.py
|
97866c1084ad4ce029dac6f03d69d243cba65556 |
|
04-Jan-2017 |
A. Unique TensorFlower <gardener@tensorflow.org> |
Automated rollback of change 143523842 Change: 143524229
/external/tensorflow/tensorflow/python/tools/optimize_for_inference.py
|
51e5d17bc3642449c67227dbea229444d63a4e60 |
|
04-Jan-2017 |
Vijay Vasudevan <vrv@google.com> |
Convert tf.flags usage to argparse. Move use of FLAGS globals into main() only. Change: 143523842
/external/tensorflow/tensorflow/python/tools/optimize_for_inference.py
|
e121667dc609de978a223c56ee906368d2c4ceef |
|
30-Dec-2016 |
Justine Tunney <jart@google.com> |
Remove so many more hourglass imports Change: 143230429
/external/tensorflow/tensorflow/python/tools/optimize_for_inference.py
|
f3033eef37e36a99e1c11ab5648cb587eb16fe97 |
|
08-Oct-2016 |
A. Unique TensorFlower <gardener@tensorflow.org> |
Allow optimize_for_inference.py to deal with text GraphDef proto files. Text graph files do not have variable weights frozen in the graph. The weights are stored in separate checkpoint files. Change: 135539698
/external/tensorflow/tensorflow/python/tools/optimize_for_inference.py
|
cb324446acbdf0d3d2129904361cf0bcbe53e852 |
|
02-Sep-2016 |
Pete Warden <petewarden@google.com> |
Fuse resize and mirror padding ops into convolutions. Spatial transformations like padding and bilinear resizing can be merged into the im2col stage of conv2d. This reduces the memory usage considerably (from 338MB to 224MB) and latency (by 15%) on some models, and helps us avoid OOM crashes on iOS. This PR has all the changes needed to fuse these particular ops, including the kernels themselves and integration into the optimize_for_inference script. Change: 132094335
/external/tensorflow/tensorflow/python/tools/optimize_for_inference.py
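The fusion in the commit above works because mirror-padding an image and then running a VALID convolution is mathematically identical to resolving the reflected indices inside the convolution loop, so the padded intermediate never has to be materialized. A pure-Python sketch of that equivalence, assuming small 2-D list-of-lists inputs (function names are hypothetical; the real fused kernels are C++ ops over 4-D NHWC tensors):

```python
def mirror_pad(img, p):
    """Reflect-pad a 2-D grid by p on each side (REFLECT mode: the border row is not duplicated)."""
    h, w = len(img), len(img[0])
    def idx(i, n):
        # Map an out-of-range index back into [0, n) by reflection.
        return -i if i < 0 else (2 * n - 2 - i if i >= n else i)
    return [[img[idx(r, h)][idx(c, w)] for c in range(-p, w + p)]
            for r in range(-p, h + p)]


def conv2d_valid(img, k):
    """Naive VALID 2-D convolution (really cross-correlation, as in tf.nn.conv2d)."""
    kh, kw = len(k), len(k[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[r + i][c + j] * k[i][j] for i in range(kh) for j in range(kw))
             for c in range(ow)] for r in range(oh)]


def fused_mirror_pad_conv(img, k, p):
    """What a fused kernel computes: reflected indices are resolved inside the
    conv loop, so the (h+2p) x (w+2p) padded image is never allocated."""
    h, w = len(img), len(img[0])
    kh, kw = len(k), len(k[0])
    def idx(i, n):
        return -i if i < 0 else (2 * n - 2 - i if i >= n else i)
    oh, ow = h + 2 * p - kh + 1, w + 2 * p - kw + 1
    return [[sum(img[idx(r - p + i, h)][idx(c - p + j, w)] * k[i][j]
                 for i in range(kh) for j in range(kw))
             for c in range(ow)] for r in range(oh)]
```

Both paths produce the same output; the fused path just trades a large intermediate buffer for index arithmetic, which is where the memory savings cited in the commit come from.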
|
7c7014fd41cdf4e24f923b9e79c249d717aa508f |
|
08-Aug-2016 |
Pete Warden <petewarden@google.com> |
Optimizing graphs for inference. Change: 129581148
/external/tensorflow/tensorflow/python/tools/optimize_for_inference.py
|