History log of /external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
Revision Date Author Comments
7a5d775685c7c853a0d3dc5118016ae30e43f7a2 10-Feb-2018 Kay Zhu <kayzhu@google.com> [XLA] Implement GeneralDot semantics in HloEvaluator.

Also:
- add a general matmul test and enable the interpreter to run dot_operation_test.
- remove the now-redundant/obsolete CHECKs for HandleDot in HloEvaluator.
- improve documentation for DotGeneral a bit.
PiperOrigin-RevId: 185211512
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
5eccb29930deeccd7199a79539f3c60e3769db30 15-Dec-2017 Sanjoy Das <sanjoy@google.com> [XLA] Change some dot test cases to be parametric

- Use a generator instead of manually writing out all the test configurations

- Call the tests MatrixDotF32_12_117_7_MajorToMinorFF etc. instead of
MatrixDotF32_12_117_7_MinorToMajorFF. I think this is the correct naming
scheme since the T/F denotes whether an operand's dimensions are laid out as
major to minor or not. If this is correct, names like
SquareMatrixDotF32MinorToMajorTF are also incorrect and I'll fix those in a
separate CL.

- Remove a couple of unnecessary vector-matrix product tests -- we don't need
to test with different layouts since the layouts are only used for the
GlobalData literals that are passed in as arguments to the computation, and
not for the operations in the computation itself. I suspect this is
accidental though -- we probably should be changing the layouts of the
computations as well?

PiperOrigin-RevId: 179122732
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
b3e97d56bd10bdf1976c61aab1f50a8902068c5c 14-Dec-2017 Sanjoy Das <sanjoy@google.com> [XLA:CPU] Implement Ax+b dot output fusion for Matrix-vector products

I had to roll in the change that generalizes CPU layout assignment, since without
it we lose the make-rhs-column-major optimization, which causes a performance
regression.

PiperOrigin-RevId: 178970986
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
379d59d01e4419aadf9103338315624eadfd9f81 12-Dec-2017 Sanjoy Das <sanjoy@google.com> [XLA] Optimize dot(concat(..), constant)

dot(concat(..), constant) and dot(constant, concat(..)) can be rewritten to
avoid the concatenate. This can itself be a win, but can also help unlock other
optimization opportunities.
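
A minimal JAX sketch of the rewrite, using hypothetical shapes rather than the
actual AlgebraicSimplifier pattern: slicing the constant along its contracting
dimension and summing the partial products gives the same result without
materializing the concatenate.

    import jax.numpy as jnp

    # Hypothetical shapes: A is [4, 3], B is [4, 5], C is a constant [8, 2].
    A = jnp.ones((4, 3))
    B = jnp.ones((4, 5))
    C = jnp.arange(16.0).reshape(8, 2)

    # Original form: the concatenate is materialized before the dot.
    original = jnp.dot(jnp.concatenate([A, B], axis=1), C)

    # Rewritten form: slice the constant operand and sum the partial dots;
    # no concatenate is needed.
    rewritten = jnp.dot(A, C[:3]) + jnp.dot(B, C[3:])

    assert jnp.allclose(original, rewritten)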

PiperOrigin-RevId: 178691585
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
dd77f385591c8b6ef7ab8dae7429c7eff7813a1e 11-Dec-2017 A. Unique TensorFlower <gardener@tensorflow.org> [XLA] Move BatchDot unrolling from TF2XLA bridge to AlgebraicSimplifier so that unrolling can be selectively enabled/disabled per backend (should be no performance change).

PiperOrigin-RevId: 178666990
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
0683cdbd8701e4e6a582db1e71d58fcad628e070 11-Dec-2017 A. Unique TensorFlower <gardener@tensorflow.org> Enriching some C64 test coverage.

PiperOrigin-RevId: 178624364
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
4146ff1259c0b4ada8afbbad11a7b37d8373d1b9 30-Nov-2017 A. Unique TensorFlower <gardener@tensorflow.org> [XLA] Adds Dot with DotDimensionNumbers proto for specifying arbitrary contracting and batch dimensions.
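
For illustration, jax.lax.dot_general exposes the same DotDimensionNumbers idea
from Python (a hypothetical usage sketch, not code from this test): contracting
and batch dimensions are named explicitly instead of being fixed by the classic
rank-2 matmul convention.

    import jax.numpy as jnp
    from jax import lax

    lhs = jnp.ones((2, 3, 4))  # [batch, m, k]
    rhs = jnp.ones((2, 4, 5))  # [batch, k, n]

    # dimension_numbers = ((lhs_contracting, rhs_contracting), (lhs_batch, rhs_batch))
    out = lax.dot_general(lhs, rhs, dimension_numbers=(((2,), (1,)), ((0,), (0,))))
    print(out.shape)  # (2, 3, 5): batch dim first, then the non-contracted dims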

PiperOrigin-RevId: 177481231
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
58f7858601b72aa3c5854571f2152b91d1795e29 13-Nov-2017 A. Unique TensorFlower <gardener@tensorflow.org> [TF:XLA] Adding test coverage for more C64 operations, and ensuring they pass.

Included here:
- reduction ops (reduce_sum, reduce_prod)
- unaries: tanh, sigmoid (currently GPU only)
- binaries: pow (currently GPU only)

PiperOrigin-RevId: 175562417
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
64d2636e2946772d4b1531ec91b389110a2787b7 09-Nov-2017 Mark Heffernan <meheff@google.com> Move MakeFakeLiteral from client/lib/testing.h to tests/test_utils.h. Also remove superfluous literal creation methods in that file, and replace them with the existing ones in the Literal class.

Also, optionally print layout in Literal::ToString.

PiperOrigin-RevId: 175076277
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
76967158085d50b53d29901c140fea69b3cf15af 08-Nov-2017 Sanjoy Das <sanjoy@google.com> [XLA:CPU] Implement single threaded Matrix-Vector products in LLVM IR

Right now we're always doing an 8x8 tiling on the matrix. This can probably be
tuned further.
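
A rough Python sketch of the tiling idea, with illustrative tile sizes and loop
structure (the real kernel is emitted directly as LLVM IR, not written like this):

    import jax.numpy as jnp

    def tiled_matvec(matrix, vec, tile_rows=8, tile_cols=8):
        # Accumulate one 8x8 tile of the matrix at a time against the
        # matching slice of the vector.
        m, k = matrix.shape
        out = jnp.zeros(m)
        for i in range(0, m, tile_rows):
            for j in range(0, k, tile_cols):
                tile = matrix[i:i + tile_rows, j:j + tile_cols]
                out = out.at[i:i + tile_rows].add(tile @ vec[j:j + tile_cols])
        return out

    a = jnp.arange(16.0 * 24).reshape(16, 24)
    x = jnp.arange(24.0)
    assert jnp.allclose(tiled_matvec(a, x), a @ x)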

There are some other follow-up items that I did not want to put in this already
large CL:

- Eigen has some smarts to avoid issuing unaligned vector loads and stores,
which the current CL does not. We need to investigate whether being smart about
alignment is worth it.

- Prevent LLVM from vectorizing the epilogue. In fact we should disable loop
vectorization for all the loops we've explicitly vectorized.

- Cache the kernels by their shape to reduce code size impact.

- Add aliasing information to the loads and stores emitted by the
PacketSupportLibrary. This is probably not super critical since we've
already vectorized the code, but we should do this for completeness.

PiperOrigin-RevId: 175036991
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
4198e27be8115585ad6b5b141383fb7dc7856c24 27-Oct-2017 A. Unique TensorFlower <gardener@tensorflow.org> [XLA:CPU] [XLA:GPU] Adds compiler support for C64 primitive type, including relevant elementwise unary and binary op lowering for CPU and GPU.

We use a named LLVM struct "complex64", laid out the same as std::complex<float>. This named struct is accessed via the llvm::Module, which required changes to accessors of PrimitiveTypeToIrType & friends.

Ops that require atan2 (in particular, angle and log) are only supported on GPU at this point. LLVM lacks a CPU intrinsic for atan or atan2, whereas libdevice provides this for GPU.

PiperOrigin-RevId: 173676849
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
6498f6011bf8a7f62fb4f51bcdff659eba278f35 15-Aug-2017 A. Unique TensorFlower <gardener@tensorflow.org> Implement dot fusion on the CPU backend for some simple cases.

PiperOrigin-RevId: 165245157
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
ddd8e21b7c1d23bf80ddf0141b44e168c17647f3 27-Jul-2017 Eli Bendersky <eliben@google.com> [XLA] Consolidate all similar main()s in tests into a single target.

PiperOrigin-RevId: 163354724
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
ae0207d3820308d882253c85fe58494b818f254d 21-Jul-2017 Eli Bendersky <eliben@google.com> [XLA] Remove the xla_default_layout flag.

The default layout is just major-to-minor, as we don't have sufficient testing
for alternative default layouts.

PiperOrigin-RevId: 162766231
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
50b999a8336d19400ab75aea66fe46eca2f5fe0b 28-Jun-2017 A. Unique TensorFlower <gardener@tensorflow.org> Merge changes from github.

PiperOrigin-RevId: 160344052
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
1fa73c53ab95693f070ce70e6be0c644d83c163a 26-Jun-2017 A. Unique TensorFlower <gardener@tensorflow.org> Automated g4 rollback of changelist 160182040

PiperOrigin-RevId: 160190881
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
f3c89936e97c99dead1ca3310246691c1b221adf 26-Jun-2017 A. Unique TensorFlower <gardener@tensorflow.org> Merge changes from github.
END_PUBLIC

Note: this CL will break builds. cl/159887762 to follow to fix all the breakages.

---
Commit 2336cdf7f authored by Maxwell Paul Brickner<mbrickn@users.noreply.github.com>
Committed by gunan<gunan@google.com>:
Updated link to use HTTPS (#10998)

Howdy!

I just updated a link to use https instead of http.

Thanks!
---
Commit ad0892df1 authored by Luke Iwanski<luke@codeplay.com>
Committed by Luke Iwanski<luke@codeplay.com>:
[OpenCL] Fixes run_metadata_test for SYCL

This test is designed to test CUDA-specific behavior.

---
Commit 6b37a0725 authored by Todd Wang<toddwang@gmail.com>
Committed by GitHub<noreply@github.com>:
Update comments
---
Commit 1699d904a authored by John Lawson<john@codeplay.com>
Committed by Luke Iwanski<luke@codeplay.com>:
[OpenCL] Fixes CUDA specific test run on SYCL (#56)

The testBadParentValuesOnGPU should only be run on CUDA devices, as the
test checks for particular CUDA behaviour. We don't actually provide a
SYCL kernel for GatherTree and so it's not a problem that the tests
don't target SYCL.
---
Commit 3c1946230 authored by myPrecious<Moriadry@users.noreply.github.com>
Committed by Shanqing Cai<cais@google.com>:
Java API to get the size of specified input list of operations. (#10865)

* Java API to get the size of specified input list of operations

* remove unnecessary explanation to avoid introducing a new term to users.

---
Commit e911c7480 authored by Luke Iwanski<luke@codeplay.com>
Committed by Luke Iwanski<luke@codeplay.com>:
[OpenCL] REGISTER -> REGISTER6

---
Commit fbf6c4cec authored by superryanguo<superryanguo@gmail.com>
Committed by superryanguo<superryanguo@gmail.com>:
Simplify the Quickstart section; using the web link is better

---
Commit 72e2918cc authored by Taehoon Lee<taehoonlee@snu.ac.kr>
Committed by Taehoon Lee<taehoonlee@snu.ac.kr>:
Fix typos

---
Commit 90c4406b7 authored by Rishabh Patel<patelrishabh@users.noreply.github.com>
Committed by GitHub<noreply@github.com>:
Correct the learning rate as per the code snippet
---
Commit 03da61134 authored by Todd Wang<toddwang@gmail.com>
Committed by GitHub<noreply@github.com>:
Update ir_array.cc
---
Commit 2df6cd3ac authored by Todd Wang<toddwang@gmail.com>
Committed by GitHub<noreply@github.com>:
Another try
---
Commit af0cbace1 authored by Luke Iwanski<luke@codeplay.com>
Committed by Benoit Steiner<benoitsteiner@users.noreply.github.com>:
[OpenCL] Transpose to go through Eigen (#10321)

---
Commit fc7361081 authored by Luke Iwanski<luke@codeplay.com>
Committed by Benoit Steiner<benoitsteiner@users.noreply.github.com>:
[OpenCL] Registers RGBToHSV and HSVToRGB (#91) (#10848)

* [OpenCL] Added RGBToHSV and HSVToRGB

* Aligning '\'
---
Commit 832894ef8 authored by Luke Iwanski<luke@codeplay.com>
Committed by Benoit Steiner<benoitsteiner@users.noreply.github.com>:
[OpenCL] Registers AdjustContrastv2 (#10949)

* [OpenCL] Registers AdjustContrastv2 (#93)

* [OpenCL] Extended adjust_contrast_op_benchmark_test for OpenCL (#96)

* [OpenCL] Extended adjust_contrast_op_benchmark_test for OpenCL

* simplified to #ifndef

* Changed to "#if GOOGLE_CUDA"

* Update adjust_contrast_op_benchmark_test.cc

* Added comments

---
Commit cb4c2f8d1 authored by Yifei Feng<yifeif@google.com>
Committed by Yifei Feng<yifeif@google.com>:
Make TransferBufferToInFeed not virtual so it compiles.

---
Commit e89f04d80 authored by Yifei Feng<yifeif@google.com>
Committed by Yifei Feng<yifeif@google.com>:
Fix calling Literal member functions.

---
Commit 15a8df724 authored by Yifei Feng<yifeif@google.com>
Committed by Yifei Feng<yifeif@google.com>:
Fix mac build
clone from meheff's change:
[XLA] Change return type of DeviceAssignment::Deserialize to fix build
breakage on mac.
The mac build had the following error:

error: incomplete type 'xla::DeviceAssignment' used in type trait
expression

This was due to a static method returning a StatusOr<DeviceAssignment>
inside of the definition of DeviceAssignment.

---
Commit a54d43fa4 authored by Yifei Feng<yifeif@google.com>
Committed by Yifei Feng<yifeif@google.com>:
Replace LiteralUtil with Literal in compiler/plugin/executor

---
Commit 88a6bb80c authored by Guenther Schmuelling<guschmue@microsoft.com>
Committed by Guenther Schmuelling<guschmue@microsoft.com>:
expand inline for debug builds to limit number of symbols

---
Commit 62fb49d31 authored by Yifei Feng<yifeif@google.com>
Committed by Yifei Feng<yifeif@google.com>:
Fix visibility error for contrib/remote_fused_graph/pylib/BUILD.

---
Commit 4c75252f2 authored by Mark Neumann<markn@allenai.org>
Committed by Mark Neumann<markn@allenai.org>:
fix initial test values to avoid numerical instability

---
Commit b58d98353 authored by sj6077<epik03sj@gmail.com>
Committed by Benoit Steiner<benoitsteiner@users.noreply.github.com>:
Fixes of AutoParallel bug (#10368)

* Fix the bug where auto_parallel could replicate the variable snapshot name

* Use NodeName in grappler:utils instead of substr, convert variables->variable_def of grappler item

* remove variable_def from grappler item, exclude snapshot nodes from dont_replicate_nodes in auto_parallel

---
Commit a286b7db8 authored by Yifei Feng<yifeif@google.com>
Committed by Yifei Feng<yifeif@google.com>:
Make debug_test slice integer.

---
Commit 97fcfdfa6 authored by Toby Boyd<tobyboyd@google.com>
Committed by GitHub<noreply@github.com>:
Fixed path to seq2seq.py and minor formatting
---
Commit 63c1befb8 authored by Anish Shah<shah.anish07@gmail.com>
Committed by Anish Shah<shah.anish07@gmail.com>:
Improve docs for tf.nn.depthwise_conv2d_native

---
Commit 8d42202b2 authored by Yong Tang<yong.tang.github@outlook.com>
Committed by Yong Tang<yong.tang.github@outlook.com>:
Fix mismatched delete in mkl_tfconv_op.cc

This change fixes a mismatched new[]/delete in mkl_tfconv_op.cc

(the file went through clang-format so there are some additional
changes)

Signed-off-by: Yong Tang <yong.tang.github@outlook.com>

---
Commit 26301bd55 authored by Danny Goodman<goodman.danny@gmail.com>
Committed by Danny Goodman<goodman.danny@gmail.com>:
fix error format

---
Commit b3f33ad46 authored by Yao Zhang<yaozhang@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Make changes to prepare for the batch norm fused option being set to None (None means use fused batch norm if possible).

PiperOrigin-RevId: 159649743

---
Commit a4a469832 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[XLA] Add tests for select ops and while loops that produce tuples that contain predicates.

PiperOrigin-RevId: 159645900

---
Commit 980d3f2be authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Use C API to implement Operation.name property

This name property is used in many existing tests including those that
already run with C API enabled (math_ops_test, framework_ops_test,
session_test, session_partial_run_test, math_ops_test_gpu, etc).

PiperOrigin-RevId: 159645767

---
Commit 26239c706 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Previously we didn't have an implementation of BatchNormInference and BatchNormTraining, which gave a linker error if anyone ever tried to call them. A dummy implementation is friendlier than a linker error.

PiperOrigin-RevId: 159645612

---
Commit f671c5caa authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
BEGIN_PUBLIC
Automated g4 rollback of changelist 159570549

PiperOrigin-RevId: 160182040
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
cfe28e09f36e54d55d08e666392d19c5c46c67db 23-Jun-2017 Eli Bendersky <eliben@google.com> [XLA] Remove unused xla_cpu flag and move another to DebugOptions.

PiperOrigin-RevId: 159952124
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
eb00d1d98efe06de98afceac83b8e88cb63b8c20 22-Jun-2017 Eli Bendersky <eliben@google.com> Automated g4 rollback of changelist 159746509

PiperOrigin-RevId: 159763112
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
f787d718967b3586561287a1506aec03e614d8dd 21-Jun-2017 Eli Bendersky <eliben@google.com> [XLA] Remove xla_cpu_*_eigen flags from CPU backends.

These flags are currently de-facto unused; parallelism should be controlled
through the cpu_parallel backend. For configuring Eigen, if needed, the options
should be piped more directly to the code.

PiperOrigin-RevId: 159746509
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
46737e4e81314f7482bfd6a710f126a27f5d7975 19-Jun-2017 A. Unique TensorFlower <gardener@tensorflow.org> Remove class xla::LiteralUtil. NFC (mind-numbingly so).

This patch removes class xla::LiteralUtil and rewrites every call to use class
xla::Literal instead.
PiperOrigin-RevId: 159446373
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
c77399d44fc2ed6912e7f301839ad3e404739b80 14-Jun-2017 Eli Bendersky <eliben@google.com> [XLA] Remove remaining flags from cpu_compiler_flags

And move them to debug_options_flags; these two flags (embed_ir_in_executable,
dump_debug_json_to) are also unified with similarly named GPU compiler flags.
This lets us completely remove the cpu_compiler_flags module.

PiperOrigin-RevId: 158989621
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
8cedce4b806639c351d45a00324fcc269704f42b 12-Jun-2017 Eli Bendersky <eliben@google.com> [XLA] Replace some XLA CPU compiler specific options by generic "debug options".

LLVM optimization level, extra LLVM flags and "cpu parallel" all turn into
debug options on the xla proto. "cpu parallel" is combined with "backend extra
options" as a map.

PiperOrigin-RevId: 158751784
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
45dbb0a02d2fa5b1eb20836fe854e19842b9593f 24-Mar-2017 Blake Hechtman <blakehechtman@google.com> Strength reduce Dot into broadcasting multiply and reduce. Also optimizes
transposes and reshapes that feed reductions.
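
Conceptually the strength reduction turns the dot into a broadcast, an
elementwise multiply, and a reduce over the contracting dimension; a
hand-written sketch with hypothetical shapes (not the simplifier's actual
output):

    import jax.numpy as jnp

    lhs = jnp.arange(12.0).reshape(3, 4)   # [m, k]
    rhs = jnp.arange(20.0).reshape(4, 5)   # [k, n]

    # dot(lhs, rhs) expressed as broadcast -> multiply -> reduce over k.
    product = lhs[:, :, None] * rhs[None, :, :]   # [m, k, n]
    reduced = jnp.sum(product, axis=1)            # [m, n]

    assert jnp.allclose(reduced, jnp.dot(lhs, rhs))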
Change: 151162327
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
0a374c7182fb8f0970f3dbf13db9e6f8c5464c00 14-Mar-2017 A. Unique TensorFlower <gardener@tensorflow.org> Add matmul large array tests.
Change: 150099362
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
0faa2a46260dbe0960674db4ecfe6fceed7e6a08 20-Jan-2017 A. Unique TensorFlower <gardener@tensorflow.org> [XLA] TODO(b/n) rather than TODO(user), NFC
Change: 145107436
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc
1e67c90e2caceeff82d09793d1ef5fa0300d219b 09-Jan-2017 Peter Hawkins <phawkins@google.com> Initial open-source release of XLA: Accelerated Linear Algebra.

XLA is a compiler-based linear algebra execution engine that targets CPUs, GPUs and custom accelerators.

XLA is still experimental; we are releasing it early to get the community involved.
Change: 143990941
/external/tensorflow/tensorflow/compiler/xla/tests/dot_operation_test.cc