fc2526a8c1cf0bc2a93c8cc819ff7209eb4628c9 |
|
16-Dec-2017 |
A. Unique TensorFlower <gardener@tensorflow.org> |
Merged commit includes the following changes:

179277894 by gunan:
    Run buildifier on build file.

179275101 by meheff:
    Replace DeviceMemoryBase with ShapedBuffer in XLA interfaces. Executable, TransferManager, and AllocationTracker now use ShapedBuffer to hold device memory addresses holding XLA data. Most of the change is straightforward, with the exception of AllocationTracker, which was mostly rewritten (and simplified), and some refactoring in the CPU executable. Also, have ShapedBuffer hold on-host and on-device Shapes, which are the shapes of the representation of the data on the host and device, respectively. This is necessary because with cl/178624364 the on-host and on-device shape may no longer be equal.

179265385 by A. Unique TensorFlower:
    Return an error rather than CHECK-failing in Executable::ExecuteOnStreamWrapper.

179264551 by dandelion:
    Internal fixes.

PiperOrigin-RevId: 179277894
/external/tensorflow/tensorflow/compiler/xla/service/transfer_manager.h
|
0683cdbd8701e4e6a582db1e71d58fcad628e070 |
|
11-Dec-2017 |
A. Unique TensorFlower <gardener@tensorflow.org> |
Enriching some C64 test coverage. PiperOrigin-RevId: 178624364
/external/tensorflow/tensorflow/compiler/xla/service/transfer_manager.h
|
22d948d2739ecaadfb4091302f2050ba9cf0d0c1 |
|
16-Nov-2017 |
Mark Heffernan <meheff@google.com> |
Add methods on TransferManager which transfer to/from device memory specified by ShapedBuffer rather than DeviceMemoryBase. This is part of a broader replacement of DeviceMemoryBase with ShapedBuffer in several XLA interfaces. With this change TransferManager no longer has to allocate memory to transfer tuples to the device. The existing methods using DeviceMemoryBase will be removed in a follow-up CL.

Various related changes:
* Make transfer_manager_test an xla_test so that it runs on all platforms.
* Make several of the TransferManager methods protected.
* Change ScopedShapedBuffer::Allocate to only allocate device memory buffers, and not fill in the tuple index table. The index table is filled in by the transfer manager. This is a cleaner separation of concerns.

PiperOrigin-RevId: 176015628
/external/tensorflow/tensorflow/compiler/xla/service/transfer_manager.h
|
b0bcf675a4b5d6217f3b58fd27b344f20e7bf25d |
|
15-Nov-2017 |
Sanjoy Das <sanjoy@google.com> |
Use a static "linker initialized" tensorflow::mutex when possible. There is no need to use a lazily created tensorflow::mutex since the tensorflow::LINKER_INITIALIZED constructor is a no-op. PiperOrigin-RevId: 175874749
/external/tensorflow/tensorflow/compiler/xla/service/transfer_manager.h
|
3f7d27ae53095a140994b3c0c00b12f7a6f5fd06 |
|
07-Nov-2017 |
A. Unique TensorFlower <gardener@tensorflow.org> |
Mark TransferManager::GetByteSizeRequirement and virtual overrides const. PiperOrigin-RevId: 174873299
/external/tensorflow/tensorflow/compiler/xla/service/transfer_manager.h
|
06deeea373c93ea36547648481c5daf4dc56126f |
|
27-Sep-2017 |
Mark Heffernan <meheff@google.com> |
For tuple-shaped data, change ShapedBuffer (an abstraction holding on-device data of a given shape) to also hold an array of pointers representing the tuple structure in device memory. Previously ShapedBuffer only held array-shaped data at the leaves of the tuple shape. Construction of these arrays of pointers is handled by TransferManager, which has to construct them anyway to transfer literals to the device. This change makes ShapedBuffer match the native representation of tuple-shaped data passed into XLA computations. This is the first step in migrating XLA interfaces away from naked device memory pointers (DeviceMemoryBase) toward more expressive ShapedBuffers. This change enables tuple-shaped parameters in computations run through the LocalClient interface. Also, change LocalClient interfaces to return ScopedShapedBuffers, as these are generally easier to deal with ownership-wise than ShapedBuffers. They are analogous to std::unique_ptr, while ShapedBuffers are analogous to bare pointers.

This change includes a couple of other cleanups found along the way:
* Move the cpu/gpu/interpreter transfer managers into their respective directories under xla/service.
* Make the generic transfer manager take a pointer size. Previously it would just use sizeof(void*), which might not be exactly what is needed.

PiperOrigin-RevId: 170133015
/external/tensorflow/tensorflow/compiler/xla/service/transfer_manager.h
|
b52debb4e63cce1e0733d6d34975d4efb9934680 |
|
15-Jun-2017 |
Jacques Pienaar <jpienaar@google.com> |
[XLA] Add transfer buffer to infeed. Mirroring the existing interface for transferring a buffer to the device, add an interface for transferring a buffer to infeed. PiperOrigin-RevId: 159152897
/external/tensorflow/tensorflow/compiler/xla/service/transfer_manager.h
|
02ac85399d4fb35d5055ecf426632b9446a70041 |
|
01-Jun-2017 |
A. Unique TensorFlower <gardener@tensorflow.org> |
Introduce new class Literal to replace protobuf Literal. This renames the existing Literal message to LiteralProto and introduces a new C++ class named Literal to replace it. The LiteralProto is only used at RPC boundaries, or when protobuf-specific functionality is required. The Literal class offers a ToProto function to generate a new LiteralProto message when necessary. Currently, all the static functions in class LiteralUtil just forward to their counterparts in class Literal. This will change in a future CL. Class Literal implements all the buffers as std::vectors. The only exception is preds(): the std::vector<bool> specialization is unusable for the semantics we require (it is not possible to get the address of the underlying storage, for instance). The CL adds a BoolVector class to work around that issue. In future CLs, the std::vector representation may be changed to something more efficient, if needed. PiperOrigin-RevId: 157739125
/external/tensorflow/tensorflow/compiler/xla/service/transfer_manager.h
|
8cb5e9867482a8e05f756fad35634e1674fe7f16 |
|
25-May-2017 |
A. Unique TensorFlower <gardener@tensorflow.org> |
Preliminary Infeed support for GPU backend.
* GPU transfer manager and GPU-specific infeed manager/infeed buffer implementation
* Infeed thunk

PiperOrigin-RevId: 157054373
/external/tensorflow/tensorflow/compiler/xla/service/transfer_manager.h
|
c8399f61ea845b0c440d13407429a92f6a0591e3 |
|
14-Apr-2017 |
A. Unique TensorFlower <gardener@tensorflow.org> |
[XLA] Remove checks about Tuple allocation. Change: 153120363
/external/tensorflow/tensorflow/compiler/xla/service/transfer_manager.h
|
32589d191387db16d8505a3f9e0dd10ef2ec194b |
|
16-Mar-2017 |
A. Unique TensorFlower <gardener@tensorflow.org> |
[XLA] TransferManager states whether tuple elements have distinct buffers from the tuple itself. Also, lets the backend allocator specify behavior for 0-sized buffer allocations via StreamExecutorMemoryAllocator. Change: 150260500
/external/tensorflow/tensorflow/compiler/xla/service/transfer_manager.h
|
efc8f98d45df835bac2373e19f1da57e3a1ea2d0 |
|
28-Feb-2017 |
Jacques Pienaar <jpienaar@google.com> |
[XLA] Add basic outfeed support. Change: 148699787
/external/tensorflow/tensorflow/compiler/xla/service/transfer_manager.h
|
99e1b19ceba32b8354dddc2841b81864c9ba96bb |
|
12-Jan-2017 |
Jacques Pienaar <jpienaar@google.com> |
Clarify that ResetDevice operates on all devices associated with the backend. Change: 144258290
/external/tensorflow/tensorflow/compiler/xla/service/transfer_manager.h
|
1e67c90e2caceeff82d09793d1ef5fa0300d219b |
|
09-Jan-2017 |
Peter Hawkins <phawkins@google.com> |
Initial open-source release of XLA: Accelerated Linear Algebra. XLA is a compiler-based linear algebra execution engine that targets CPUs, GPUs and custom accelerators. XLA is still experimental; we are releasing it early to get the community involved. Change: 143990941
/external/tensorflow/tensorflow/compiler/xla/service/transfer_manager.h
|