History log of /external/tensorflow/tensorflow/compiler/tf2xla/kernels/relu_op.cc
Revision    Date    Author    Comments
e56628b085ffa7922e5238537f6ebd6deee0f0cc 09-Oct-2017 A. Unique TensorFlower <gardener@tensorflow.org> [TF:XLA] Rename ComputationBuilder::LogicalX to X

PiperOrigin-RevId: 171562764
01967e30a4fa3003e917973dbfa04016c7f0b69a 26-May-2017 A. Unique TensorFlower <gardener@tensorflow.org> Use "override" and "nullptr"; remove unused includes

PiperOrigin-RevId: 157258631
93f9caba8e371bd2f55ec789ed2f8ece9b3d976d 30-Mar-2017 Peter Hawkins <phawkins@google.com> [TF:XLA] Refactor TF/XLA operator registration.

Rather than requiring an explicit registration for each (operator, backend) pair, by default register all operators for all backends, for all types supported by each backend.

As out-of-tree backends begin to appear and XLA translations of operators are added to the TF/XLA bridge, per-backend explicit registration lists will become stale. Registering all operators on all backends is both less verbose and more maintainable for backend authors.

Since not all operators work on all backends, we add several constraint mechanisms:
* operators may specify type constraints that are shared across all backends.
* operators may specify a whitelist of backends on which they work. This is useful if an operator is CPU-only because of a CustomCall.
* backends may register a function that specifies operators to blacklist or whose registrations to modify. This is necessary since operator implementations cannot know the set of all out-of-tree backends.

This change also lays the groundwork for removing the list of compile-time constant inputs in const_analysis.cc. In a subsequent CL, compile-time constant inputs can be annotated on the XLA operator registration.
Change: 151724100
3e975ea978bac4d861bb09328b06f3c316212611 02-Mar-2017 Andrew Harp <andrewharp@google.com> Merge changes from github.
Change: 148954491
a8c325e57c1077f1e8df540a20bd8b36d3d1f968 15-Feb-2017 Peter Hawkins <phawkins@google.com> [TF:XLA] Split XlaOpRegistry out of xla_compilation_device.{cc,h} into a separate xla_op_registry.{cc,h}.
Move XlaExpression out of xla_context.{cc,h} into xla_compilation_device.{cc,h}, since it is used to wrap computation handles on the XLA compilation device.
This change just moves code around; there are no functional changes.
Change: 147632770
c8384ed2900201f55f219e52e2fd57e2d4d48e70 13-Jan-2017 Peter Hawkins <phawkins@google.com> Add a unit test for XlaCompiler.
Add support for marking Xla computations as stateful.
Add a store for xla::ChannelHandles in XlaCompiler.
Don't mark _Send/_Recv for XLA computation.
Change: 144382814
1e67c90e2caceeff82d09793d1ef5fa0300d219b 09-Jan-2017 Peter Hawkins <phawkins@google.com> Initial open-source release of XLA: Accelerated Linear Algebra.

XLA is a compiler-based linear algebra execution engine that targets CPUs, GPUs and custom accelerators.

XLA is still experimental; we are releasing it early to get the community involved.
Change: 143990941