History log of /external/tensorflow/tensorflow/compiler/xla/tests/tuple_test.cc
Revision Date Author Comments
7575f334ee0879825ceed23928f5e99d0f71b5f8 14-Feb-2018 Justin Lebar <jlebar@google.com> [XLA:GPU] Don't crash when the root instruction of a computation is a multi-output fusion node, and avoid some pointer chasing with tuples.

Previously, the kernels we generated would have one argument per
*top-level* buffer of the input/output. This was fine for inputs, but
it doesn't work for outputs: imagine you're a node that returns a tuple
-- e.g. multi-output fusion. If all you get is a pointer to the
top-level buffer of your output (which should eventually contain
pointers to the lower-level buffers, but at the moment is just empty),
how are you supposed to figure out where to write your output?

(This usually worked because most of the time your output would live
inside of the big XLA temp buffer, and kernels always get a pointer to
that.)

Now we pass all the buffers, top-level and otherwise, to our kernel. In
addition, we're now willing to statically dereference tuples that live
entirely in XLA's temp buffer. Pointers in input tuples must still be
dereferenced dynamically, because the caller has the option of giving us
these values or not when invoking XLA.

This change makes some parts of BufferAssignment/BufferAllocations more
truthful. Previously, if you passed a tuple-shaped input to XLA, we'd
say in BufferAllocations that the pointer for some subshape of the param
was the *top-level tuple pointer*. XLA then knew that this was a lie
and would dereference it accordingly. Now we have an explicit notion of
a BufferAllocation pointing to a subshape of an input parameter.

PiperOrigin-RevId: 185614060
7022e6b62908f688cae446f1abbc667c5273db04 13-Jan-2018 A. Unique TensorFlower <gardener@tensorflow.org> [XLA] Add some scalar tests.

PiperOrigin-RevId: 181819491
0683cdbd8701e4e6a582db1e71d58fcad628e070 11-Dec-2017 A. Unique TensorFlower <gardener@tensorflow.org> Enriching some C64 test coverage.

PiperOrigin-RevId: 178624364
a235f23d5babcffa05b6d190c3e1a8909afb5273 22-Nov-2017 Mark Heffernan <meheff@google.com> Roll forward new copy insertion pass.

Original cl: cl/174423881, rollback cl: cl/174505237.

This roll forward includes the following changes from the original to address various issues uncovered with the rollback:

(1) A fix for a problem with fusion instruction serialization was broken out and submitted separately (cl/176035108).

(2) A dataflow analysis fix was broken out and submitted separately (cl/176035108).

(3) Adding RunBenchmarks to our unit-test main was broken out. A fix for the segv uncovered in the while_test_cpu benchmark is pending in cl/176068232.

(4) Moved a cpu-specific copy-insertion pass into its own file, and added tests.

(5) Renamed gpu/copy_insertion.* to gpu/gpu_copy_insertion.* to match cpu side.

PiperOrigin-RevId: 176658339
456929281592f14d50443cfbdaa2f6b36167a134 03-Nov-2017 Mark Heffernan <meheff@google.com> Rollback copy insertion change because it results in a DCHECK failure with an internal model.
END_PUBLIC

BEGIN_PUBLIC
Automated g4 rollback of changelist 174423881

PiperOrigin-RevId: 174505237
7bb2d57b0b051d1cf8dd74d3276bf5a452774172 03-Nov-2017 Mark Heffernan <meheff@google.com> Rewrite CopyInsertion to use module-scoped HloAliasAnalysis. The net effect (number of copies inserted) is roughly similar to the existing implementation, but the new implementation is much more general. The new implementation can handle entry argument buffer reuse with minimal modification, for example.

Some unnecessary copies are still added due to deficiencies in buffer assignment (b/62548313), but these can be removed when buffer assignment also uses HloAliasAnalysis.

Also address a few issues uncovered with this cl:

(1) For in-place dynamic slice in the LLVM backends, truncate (do not wrap) the slice. This matches the behavior of the non-in-place variant.

(2) Disable SelectBetweenPredTuples test on GPU. The test introduces top-level buffer ambiguity which is not tolerated by the gpu backend.

(3) When deserializing HLO from a proto, do not uniquify instruction names in fused computations.

(4) In dataflow analysis, don't deallocate deleted HloValues during propagation.

(5) In dataflow analysis, fix issue with live_out_of_computation property.

PiperOrigin-RevId: 174423881
b29b839215fa9bf5a00ca97e19673cfa5f780314 26-Sep-2017 A. Unique TensorFlower <gardener@tensorflow.org> [XLA] Map API change to enable mapping over an arbitrary set of dimensions.

PiperOrigin-RevId: 170090055
dc1eda8a6d06cff541be768a0c8e2b22b376651c 13-Sep-2017 Peter Hawkins <phawkins@google.com> [XLA] Fix CHECK-failure crash if a non-tuple was passed to GetTupleElement.

PiperOrigin-RevId: 168550703
ddd8e21b7c1d23bf80ddf0141b44e168c17647f3 27-Jul-2017 Eli Bendersky <eliben@google.com> [XLA] Consolidate all similar main()s in tests into a single target.

PiperOrigin-RevId: 163354724
c0f475b03eeca19b217527f7946da837e8438e27 06-Jul-2017 Peter Hawkins <phawkins@google.com> [XLA] Add support for tuple constants via a rewrite that replaces constant tuples with tuples of non-tuple constants.

Fix crash in HloInstruction::ToString() for tuple constants.

PiperOrigin-RevId: 161085252
a4a46983233690f17ca794c2907b2b11c8cd2060 21-Jun-2017 A. Unique TensorFlower <gardener@tensorflow.org> [XLA] Add tests for select ops and while loops that produce tuples that contain predicates.

PiperOrigin-RevId: 159645900
46737e4e81314f7482bfd6a710f126a27f5d7975 19-Jun-2017 A. Unique TensorFlower <gardener@tensorflow.org> Remove class xla::LiteralUtil. NFC (mind-numbingly so).

This patch removes class xla::LiteralUtil and rewrites every call to use class
xla::Literal instead.
PiperOrigin-RevId: 159446373
c77399d44fc2ed6912e7f301839ad3e404739b80 14-Jun-2017 Eli Bendersky <eliben@google.com> [XLA] Remove remaining flags from cpu_compiler_flags

And move them to debug_options_flags; these two flags (embed_ir_in_executable,
dump_debug_json_to) are also unified with similarly named GPU compiler flags.
This lets us completely remove the cpu_compiler_flags module.

PiperOrigin-RevId: 158989621
8cedce4b806639c351d45a00324fcc269704f42b 12-Jun-2017 Eli Bendersky <eliben@google.com> [XLA] Replace some XLA CPU compiler specific options by generic "debug options".

LLVM optimization level, extra LLVM flags and "cpu parallel" all turn into
debug options on the xla proto. "cpu parallel" is combined with "backend extra
options" as a map.

PiperOrigin-RevId: 158751784
1e67c90e2caceeff82d09793d1ef5fa0300d219b 09-Jan-2017 Peter Hawkins <phawkins@google.com> Initial open-source release of XLA: Accelerated Linear Algebra.

XLA is a compiler-based linear algebra execution engine that targets CPUs, GPUs and custom accelerators.

XLA is still experimental; we are releasing it early to get the community involved.
Change: 143990941