History log of /external/tensorflow/tensorflow/contrib/lite/toco/model_cmdline_flags.cc
Revision Date Author Comments
2b7d03c91d092cda88e6db345705fff3cd5b7b77 26-Jan-2018 A. Unique TensorFlower <gardener@tensorflow.org> Allow passing dummy/custom minmax information on a per-array basis,
unlike the existing --default_ranges_{min,max} flags, which only allowed
setting a single global value for all arrays.

This takes the form of a new embedded message in ModelFlags, which is
its own message so that it can be serialized separately. The command-line
interface is --arrays_extra_info_file=some_proto.pbtxt, i.e. we don't
try to make a command-line-flags-only interface, we mandate putting the info
in a file. The rationale is that users may want to specify custom minmax
for hundreds of arrays, so it would be cumbersome to have that all in a
command line.

This should be considered an experimental feature, in the sense that
in properly quantized models, minmax information is already embedded
in the graph (e.g. in FakeQuant nodes). This is an extension of the
existing --default_ranges_{min,max} feature which had turned out to be
too restrictive for many users.

PiperOrigin-RevId: 183326000
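As a rough illustration, such a file might look like the following text-format proto (the entry and field names here are assumptions for illustration; the authoritative definition is the new embedded message in model_flags.proto):

```
entries {
  name: "conv1/weights"
  min: -1.0
  max: 1.0
}
entries {
  name: "softmax_input"
  min: -6.0
  max: 6.0
}
```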
b998b7b456066530dd27ef532dae195d27505266 25-Jan-2018 A. Unique TensorFlower <gardener@tensorflow.org> Drop the manually_create field from RnnState.

Initially, I thought that the shape of RNN state arrays could always be
determined by shape propagation. Then I came across some graphs where this
wasn't so easy to infer, so I introduced manually_create thinking of it
as a hack. Today I took another look at dropping that hack, and had a
"D'oh" moment when I realized that the cyclic nature of RNN graphs makes
it impossible to infer the shapes of all arrays by usual propagation.
For example, in an LSTM cell, the input array is concatenated with
a state array, so if we don't already know the shape of that state array,
shape propagation stops there.

Thus, this change removes manually_create by making toco always behave as
if manually_create=true, i.e. early-creating all RNN state arrays with
the shape explicitly specified by the user. The next TODO item here
(see model_flags.proto) is to introduce a generic 'shape' field; so far,
the current 'size' field only allows specifying 1-D shapes.

PiperOrigin-RevId: 183294102
10d7ddfa9bb95d65f7245dae4230a00b0badde06 25-Jan-2018 A. Unique TensorFlower <gardener@tensorflow.org> Automated g4 rollback of changelist 183239252

PiperOrigin-RevId: 183241034
028ef1e67201700e8d9d77af64655f1dd20ae665 25-Jan-2018 A. Unique TensorFlower <gardener@tensorflow.org> Drop the manually_create field from RnnState.

Initially, I thought that the shape of RNN state arrays could always be
determined by shape propagation. Then I came across some graphs where this
wasn't so easy to infer, so I introduced manually_create thinking of it
as a hack. Today I took another look at dropping that hack, and had a
"D'oh" moment when I realized that the cyclic nature of RNN graphs makes
it impossible to infer the shapes of all arrays by usual propagation.
For example, in an LSTM cell, the input array is concatenated with
a state array, so if we don't already know the shape of that state array,
shape propagation stops there.

Thus, this change removes manually_create by making toco always behave as
if manually_create=true, i.e. early-creating all RNN state arrays with
the shape explicitly specified by the user. The next TODO item here
(see model_flags.proto) is to introduce a generic 'shape' field; so far,
the current 'size' field only allows specifying 1-D shapes.

PiperOrigin-RevId: 183239252
ac4d418e3cd1d3236037508b815db4cff82bcfda 13-Dec-2017 A. Unique TensorFlower <gardener@tensorflow.org> Test consistently that the strings passed in input_arrays and output_arrays
consist of printable ASCII characters (this is motivated by a user having
unwittingly passed Unicode zero-width characters, probably by copy-pasting),
and are names of arrays actually existing in the model.
Centralize these tests in CheckInvariants.

This can be overridden with new model flags: --allow_nonascii_arrays,
--allow_nonexistent_arrays. These are model flags because this is about
self-consistency of the model and its existing model flags.

This CL partly undoes a recent relaxation of checks on input arrays that
was done to support getting graphviz out of incorrectly specified graphs.
Such users will now have to pass --allow_nonexistent_arrays.

PiperOrigin-RevId: 178939235
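The check described here boils down to verifying that every character of an array name falls within the printable ASCII range. A minimal sketch in Python (toco implements this in C++; the helper name below is made up for illustration):

```python
def is_printable_ascii(name: str) -> bool:
    # Printable ASCII runs from 0x20 (space) through 0x7E ('~').
    # A zero-width Unicode character (e.g. U+200B) picked up by
    # copy-pasting would fail this check despite being invisible.
    return all(0x20 <= ord(c) <= 0x7E for c in name)
```

An invisible character makes two visually identical names compare unequal, which is why catching it early saves users a confusing debugging session.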
2463815ad59b8dc52840f143d95be39581ac5896 06-Dec-2017 A. Unique TensorFlower <gardener@tensorflow.org> Check against passing the same array in --input_arrays and --output_arrays.

PiperOrigin-RevId: 178156041
cc7482fd0a0f3a370880a0108f2c980bb808b277 06-Dec-2017 A. Unique TensorFlower <gardener@tensorflow.org> Change InputArray.shape from being a repeated int field to being
an optional embedded message itself containing a repeated int field
(now called 'dims'). This matches existing shape structures (both in
Toco internally, and in TensorFlow) and is necessary in order to
disambiguate between a 0-dimensional shape and an undefined/unknown
shape. This is a necessary prerequisite, in particular, for allowing
toco to operate without given fixed input shapes, as so far these
were impossible to disambiguate from fixed 0-dimensional shapes.

PiperOrigin-RevId: 178027064
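A minimal before/after sketch of the proto change (the message layout and field numbers below are illustrative assumptions; 'dims' is the field name given above):

```
// Before: a 0-D shape and an unspecified shape both look like
// an empty repeated field, so they cannot be told apart.
message InputArray {
  repeated int32 shape = 1;
}

// After: an absent 'shape' message means the shape is unknown,
// while a present 'shape' with zero 'dims' means a 0-D (scalar) shape.
message InputArrayShape {
  repeated int32 dims = 1;
}
message InputArray {
  optional InputArrayShape shape = 1;
}
```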
b8406da50df94dc17114c10d472a2058ff75b2d2 22-Nov-2017 A. Unique TensorFlower <gardener@tensorflow.org> Make drop_control_dependency a TocoFlag, not a ModelFlag.

PiperOrigin-RevId: 176680726
6040ed631ba8e95b97c0e3edb1dd31e04569b521 20-Nov-2017 A. Unique TensorFlower <gardener@tensorflow.org> Input types flags refactoring.
1. --input_type[s] is deprecated. Its semantics were not clearly defined,
and included both ModelFlags-like semantics (describing a property
of the input file) and TocoFlags-like semantics (describing a requested
property of the output file).
2. New ModelFlags: --input_data_type[s], represented as a new 'type'
field on each input array proto. This is unambiguously describing a
property of the input file, similar to the existing input_array[s],
input_shape[s] etc.
3. New TocoFlag: --inference_input_type. This is essentially the new
name of --input_type, except that it's purely a transformation flag,
describing only a property of the requested output file, no longer
mixed with ModelFlags-like semantics (now taken care of by 2.).
The name --inference_input_type makes it clear that it's a
companion of --inference_type. Also, --inference_input_type is now
optional, defaulting to using the same value as --inference_type.
This reflects the fact that most users want to do either float
inference on a float input, or quantized inference on a quantized
input. The only case at the moment where --inference_input_type
is needed, is for doing float inference on a quantized input,
which is typically done in some vision applications where the
input is a bitmap image with integer-quantized channels.

PiperOrigin-RevId: 176356352
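For instance, the float-inference-on-a-quantized-input case described above would combine the two flags along these lines (flag spellings other than the two inference flags are from memory and may differ across toco versions):

```
toco --input_file=model.pb --output_file=model.tflite \
     --inference_type=FLOAT \
     --inference_input_type=QUANTIZED_UINT8 \
     --input_arrays=input --output_arrays=softmax
```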
0b15439f8f0f2d4755587f4096c3ea04cb199d23 10-Nov-2017 Andrew Selle <aselle@google.com> Internal Change.

PiperOrigin-RevId: 175307445