History log of /external/tensorflow/tensorflow/compiler/tf2xla/xla_compilation_device.cc
Revision Date Author Comments
bf5326a75412e59985b727b26f5cad01315b6c89 20-Dec-2017 Peter Hawkins <phawkins@google.com> [TF:XLA] Move XlaResource into its own file, and refactor it into a better-abstracted class. No functional changes intended.

PiperOrigin-RevId: 179734920
/external/tensorflow/tensorflow/compiler/tf2xla/xla_compilation_device.cc
a1c2e20fe5cf04965ce206911ff1a7446a24fadf 08-Dec-2017 A. Unique TensorFlower <gardener@tensorflow.org> Introduce an experimental API to pass sharding information from TensorFlow to XLA.

PiperOrigin-RevId: 178366566
/external/tensorflow/tensorflow/compiler/tf2xla/xla_compilation_device.cc
117bcd9cb5f3e55ce1fcc09a0bb4963c32bad8ce 02-Nov-2017 Rohan Jain <rohanj@google.com> Adding support for local device names for ProcessFLR. Now one can specify a remote target as /device:CPU:0 or /device:GPU:0 etc.

PiperOrigin-RevId: 174252575
/external/tensorflow/tensorflow/compiler/tf2xla/xla_compilation_device.cc
27412f3b64ad09131ce330a0b91938af1931d515 01-Nov-2017 A. Unique TensorFlower <gardener@tensorflow.org> Add compiler/tf2xla/sharding_util.h with utilities for getting the core device from a Node.

PiperOrigin-RevId: 174133602
/external/tensorflow/tensorflow/compiler/tf2xla/xla_compilation_device.cc
efcbf6e34e4519172d38be76c08c2d99792fd7be 30-Oct-2017 A. Unique TensorFlower <gardener@tensorflow.org> Supported in this CL:
* Attaching sharding descriptors to HLO ops
* Partitioning the HLO graph into per-device computations based on those sharding descriptors.
* Support in all operators for device placement and for ops replicated on all devices.
* Elementwise op support for tiled shardings.
* 2D Convolution support for tiled shardings (no stride or dilation support).

PiperOrigin-RevId: 173946036
/external/tensorflow/tensorflow/compiler/tf2xla/xla_compilation_device.cc
dc500c869721e93ae1f3036b677a1d9d424e9d23 06-Oct-2017 Jacques Pienaar <jpienaar@google.com> [TF2XLA] Update the device name in ConvertGraphToXla and reinstate the check that name parsing is correct.

* Update ConvertGraphToXla to use the new form for setting the assigned device name.
* Remove some stale comments.
* Revert workaround that allowed the requested device name to not be parsed.

PiperOrigin-RevId: 171314671
/external/tensorflow/tensorflow/compiler/tf2xla/xla_compilation_device.cc
78af510b9aab4094a895851d61e2ea359a9b4985 06-Oct-2017 Jacques Pienaar <jpienaar@google.com> Temporarily don't error out if the requested device name cannot be parsed.

PiperOrigin-RevId: 171246995
/external/tensorflow/tensorflow/compiler/tf2xla/xla_compilation_device.cc
85c4a379985b46930ece49edc4347af628ee2928 24-Sep-2017 Peter Hawkins <phawkins@google.com> [XLA] Adds an API to attach a device assignment to HLO operators.

PiperOrigin-RevId: 169841868
/external/tensorflow/tensorflow/compiler/tf2xla/xla_compilation_device.cc
1f20a786d69c4b91a4015fe3f4df8c23bd345f40 20-Sep-2017 Peter Hawkins <phawkins@google.com> [TF:XLA] Add support for reading and writing TensorArray gradients in a while loop.

Previously, there was no code to handle propagating the values of a TensorArray's gradients into and out of loops. This change passes TensorArray gradients into and out of loops by packing them up as a (base array, gradient values...) tuple.

PiperOrigin-RevId: 169338418
/external/tensorflow/tensorflow/compiler/tf2xla/xla_compilation_device.cc
c19e6cac0413b0b93d5a15f9d4dc7c861aa1c734 07-Jun-2017 Peter Hawkins <phawkins@google.com> [TF:XLA] Initial implementation of TensorArray ops.

The XLA implementation of TensorArrays is more restrictive than regular TensorArrays (a usage sketch follows this entry):
* XLA TensorArrays must have dynamic_size=False.
* All elements in an XLA TensorArray must have the same shape.
* Writes always add their values to any existing values; neither reads nor writes ever issue errors. Out-of-bounds writes currently wrap.

Refactor Variable handling in the TF/XLA bridge. Use an XlaVariable* to refer to variables during compilation rather than a numerical ID. Allow for variables that don't correspond to variables known to the user. Also use XlaVariable to handle TensorArrays.

PiperOrigin-RevId: 158322041
/external/tensorflow/tensorflow/compiler/tf2xla/xla_compilation_device.cc
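
The following sketch (not part of the commit) shows a TensorArray usage pattern that satisfies the restrictions listed in the entry above. It is written against the modern tf.function(jit_compile=True) entry point rather than the 2017-era bridge, and the function name, size, and shapes are arbitrary choices for illustration.

    import tensorflow as tf

    @tf.function(jit_compile=True)        # request XLA compilation of this function
    def accumulate(values):               # values: float32 tensor of shape [3, 2]
        # Static size (dynamic_size=False) and a single element_shape, as the
        # XLA TensorArray implementation requires.
        ta = tf.TensorArray(tf.float32, size=3, dynamic_size=False,
                            element_shape=tf.TensorShape([2]))
        for i in tf.range(3):             # lowered to a while loop by autograph
            ta = ta.write(i, values[i])   # every element has the same shape [2]
        return ta.stack()

    print(accumulate(tf.ones([3, 2])))    # [[1. 1.] [1. 1.] [1. 1.]]
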
f28935a7d280b6ba75fe93fe35783d87b9cc2ec9 05-May-2017 Brennan Saeta <saeta@google.com> Implement ClusterSpec Propagation in TF Master

ClusterSpec propagation is a capability upgrade for TensorFlow that should make
it much easier to (1) build distributed TensorFlow clusters and (2) handle node
failures. It allows TensorFlow workers to be booted independently of one another,
with no prior knowledge of the rest of the cluster. The client then constructs a
ClusterDef (ClusterSpec) and sends it to the TF master at session creation; the
master in turn propagates the ClusterDef to all of the workers (a client-side
sketch follows this entry).
Change: 155159972
/external/tensorflow/tensorflow/compiler/tf2xla/xla_compilation_device.cc
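
As a rough illustration (not part of the commit) of the client-side mechanics described above, the sketch below builds a ClusterSpec and copies its ClusterDef into the session's ConfigProto. The worker and ps addresses are hypothetical; real tf.train.Server instances, booted with no cluster knowledge of their own, would have to be listening at those endpoints for the session to come up.

    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()

    # Hypothetical endpoints; servers must actually be running there.
    cluster = tf.train.ClusterSpec({
        "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
        "ps": ["ps0.example.com:2222"],
    })

    # Attaching the ClusterDef to the ConfigProto is what triggers propagation:
    # the master receives it at session creation and forwards it to the workers.
    config = tf.ConfigProto()
    config.cluster_def.CopyFrom(cluster.as_cluster_def())

    with tf.device("/job:ps/task:0"):
        v = tf.Variable(1.0)

    with tf.Session("grpc://worker0.example.com:2222", config=config) as sess:
        sess.run(v.initializer)
        print(sess.run(v))    # placed according to the propagated cluster
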
00d0347ccebc3e29ffe541703b5a2f929b89da36 10-Mar-2017 Brennan Saeta <saeta@google.com> [TF:XLA] Add debug metadata to HLO ops.

In order to support end-to-end debugging and performance profiling tooling for
the TensorFlow/XLA toolchain, this change adds a DebugMetadata proto to the
HloInstruction class and pipes it through the tf2xla stack.
Change: 149703349
/external/tensorflow/tensorflow/compiler/tf2xla/xla_compilation_device.cc
542c3cbf711c4b89310fa4046c48150d29564008 22-Feb-2017 Peter Hawkins <phawkins@google.com> [TF:XLA] Add support for resource variables to the TensorFlow/XLA bridge.
Change: 148176223
/external/tensorflow/tensorflow/compiler/tf2xla/xla_compilation_device.cc
a8c325e57c1077f1e8df540a20bd8b36d3d1f968 15-Feb-2017 Peter Hawkins <phawkins@google.com> [TF:XLA] Split XlaOpRegistry out of xla_compilation_device.{cc,h} into a separate xla_op_registry.{cc,h}.
Move XlaExpression out of xla_context.{cc,h} into xla_compilation_device.{cc,h}, since it is used to wrap computation handles on the XLA compilation device.
This change just moves code around; there are no functional changes.
Change: 147632770
/external/tensorflow/tensorflow/compiler/tf2xla/xla_compilation_device.cc
4e24bec4182b2ac63e3b6666cbd3794912ef41a8 11-Feb-2017 Peter Hawkins <phawkins@google.com> [TF:XLA] Refactor XlaContext, moving some of its responsibilities to XlaCompiler and XlaOpKernelContext.

Move handling of arguments and return values to XlaCompiler. Introduce a new XlaContext::HandleOrConstant structure and use it for both arguments and results.
Make XlaCompiler own the xla::ComputationBuilder.

Move code for wrapping/unwrapping XlaExpressions in Tensors to XlaOpKernelContext, which is its only consumer.

No functional changes.
Change: 147250375
/external/tensorflow/tensorflow/compiler/tf2xla/xla_compilation_device.cc
96007205c42d591ef5cef2d7e8245b780f44f0d7 07-Feb-2017 Peter Hawkins <phawkins@google.com> [TF:XLA] Disable the XLA CPU JIT by default when the JIT is requested via the OptimizerOptions.
The XLA CPU JIT is not yet optimized and should not be enabled by default, since it is usually slower than the standard TensorFlow CPU kernels (a sketch of the OptimizerOptions knob follows this entry).
Change: 146811646
/external/tensorflow/tensorflow/compiler/tf2xla/xla_compilation_device.cc
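
For context (not part of the commit), the OptimizerOptions path mentioned above is the session-level global JIT knob. A minimal TF 1.x-style sketch of requesting it looks roughly like this; per the entry above, the request no longer turns on the XLA CPU JIT by default.

    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()

    # Ask for JIT compilation globally through the session's OptimizerOptions.
    config = tf.ConfigProto()
    config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.matmul(a, a)

    with tf.Session(config=config) as sess:
        print(sess.run(b))
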
83c6e0c63acdcab2c58c4ed7220bfa58879b1d57 12-Jan-2017 Jonathan Hseu <jhseu@google.com> Switch the open-source build to use jemalloc for CPU Tensor memory allocation, gRPC, and other places where we call malloc/free.

- Only enabled on Linux for now.
- Added as a ./configure option defaulting to enabled.
Change: 144266237
/external/tensorflow/tensorflow/compiler/tf2xla/xla_compilation_device.cc
1e67c90e2caceeff82d09793d1ef5fa0300d219b 09-Jan-2017 Peter Hawkins <phawkins@google.com> Initial open-source release of XLA: Accelerated Linear Algebra.

XLA is a compiler-based linear algebra execution engine that targets CPUs, GPUs and custom accelerators.

XLA is still experimental; we are releasing it early to get the community involved.
Change: 143990941
/external/tensorflow/tensorflow/compiler/tf2xla/xla_compilation_device.cc