History log of /external/tensorflow/tensorflow/compiler/xla/shape_tree.h
Revision Date Author Comments
5dd585abb84c5d13af0017f78741e29505f7b5f7 15-Feb-2018 A. Unique TensorFlower <gardener@tensorflow.org> Make conversions from ShapedBuffer <-> ScopedShapedBuffer efficient by
moving memory ownership instead of copying (sketched after this entry).

PiperOrigin-RevId: 185871648
/external/tensorflow/tensorflow/compiler/xla/shape_tree.h
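
A generic sketch of the move-instead-of-copy idea in the entry above. The
type below is a hypothetical stand-in, not the real ShapedBuffer or
ScopedShapedBuffer (those hold device memory handles, not raw pointers):

    #include <utility>
    #include <vector>

    // Stand-in for a buffer-owning type.
    struct OwnedBuffers {
      std::vector<void*> allocations;
    };

    // The conversion consumes its argument and steals the allocation
    // list, so no per-buffer copy is needed.
    OwnedBuffers Convert(OwnedBuffers&& from) {
      OwnedBuffers to;
      to.allocations = std::move(from.allocations);
      return to;
    }
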
e4532d20973c4c00854492362665317551661c18 22-Dec-2017 A. Unique TensorFlower <gardener@tensorflow.org> Merge changes from github.

PiperOrigin-RevId: 179953488
/external/tensorflow/tensorflow/compiler/xla/shape_tree.h
9648f8040a559f6cf9bbe0501ba96f2b2c2864b1 16-Dec-2017 A. Unique TensorFlower <gardener@tensorflow.org> Automated g4 rollback of changelist 179258973

PiperOrigin-RevId: 179260538
/external/tensorflow/tensorflow/compiler/xla/shape_tree.h
d55f532867a3670d66460c5ee3b774519542adc1 16-Dec-2017 Dandelion Mané <dandelion@google.com> Merge changes from github.

PiperOrigin-RevId: 179258973
/external/tensorflow/tensorflow/compiler/xla/shape_tree.h
bfcc0970d61952ee894eaa4ef3256033239359b7 10-Nov-2017 A. Unique TensorFlower <gardener@tensorflow.org> Change HloSharding to allow getting a ShapeTree for non-tuple types.
Add reverse iteration to ShapeTree.

PiperOrigin-RevId: 175341255
/external/tensorflow/tensorflow/compiler/xla/shape_tree.h
c38781f6ec9710c0102bdc9d95bf6176fd96d1ce 09-Nov-2017 A. Unique TensorFlower <gardener@tensorflow.org> When sharding a tuple, we typically want to describe the data sharding
of each subtensor individually. Tuples are essentially just containers:
the tensors they contain should be able to be sharded differently.

Tuples are hierarchically structured, but shardings were designed to
not contain the sharded type (the sharded type is inferred from the
output type of the instruction the sharding is applied to). Therefore,
shardings for tuples contain shardings for each subtensor as a
non-structured list.

This list is ordered as a preorder walk of the tuple shape, and only the
leaf nodes of the tuple shape are stored. The structure is reapplied when
the sharded instruction's shape is known (see the sketch after this entry).

PiperOrigin-RevId: 175132692
/external/tensorflow/tensorflow/compiler/xla/shape_tree.h
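
A minimal sketch of the flattening scheme described in the entry above,
using a simplified stand-in type (the real classes are xla::Shape and
HloSharding):

    #include <vector>

    // Simplified stand-in for a (possibly nested) tuple shape.
    struct Shape {
      bool is_tuple = false;
      std::vector<Shape> elements;  // non-empty only for tuples
    };

    // Collect pointers to leaf shapes in preorder; each leaf gets one
    // slot in the flat sharding list, interior tuple nodes get none.
    void CollectLeaves(const Shape& shape,
                       std::vector<const Shape*>* leaves) {
      if (!shape.is_tuple) {
        leaves->push_back(&shape);
        return;
      }
      for (const Shape& element : shape.elements) {
        CollectLeaves(element, leaves);
      }
    }

Reapplying the structure is the inverse walk: traverse the instruction's
output shape in the same preorder and consume one list entry per leaf.
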
5ead76420dee762a5f710fda6893075f1292d5d3 19-Aug-2017 A. Unique TensorFlower <gardener@tensorflow.org> Reduce XLA compile time by ~7% for a convolutional image model:

* Added CompactPointerSet<T>, which is optimized for set size <= 1
(sketched after this entry).
* Changed expensive CHECKs to DCHECKs in buffer_assignment.cc
* Reserve space in DFS state array before starting DFS.
* Use unsigned arithmetic in DFS state maintenance.
* HloInstruction:
- Moved frequently used fields to start for better cache locality.
- Use InlinedVector instead of vector for operand array.
- Use InlinedVector instead of vector for DFS stack.
* Pre-compute "is array" and "is tuple" for LogicalBuffer.
* PointsToSet:
- Combine two ShapeTrees into one.
- Use CompactPointerSet instead of std::set to hold sources.
- Use CompactPointerSet instead of std::set to hold flattened buffers.
* ShapeTree: use unique_ptr instead of optional for shape storage
(reduces size and destruction overhead).
* Add proper const qualifiers to some FlatSet iterator methods.

Co-author=jeff
PiperOrigin-RevId: 165759117
/external/tensorflow/tensorflow/compiler/xla/shape_tree.h
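
A hedged sketch of the size <= 1 optimization behind CompactPointerSet<T>;
this illustrates the idea only and is not the actual implementation:

    #include <cstddef>
    #include <memory>
    #include <set>

    // Most sets in this analysis hold zero or one pointer, so keep that
    // one element inline and allocate a real std::set only on growth.
    template <typename T>
    class SmallPointerSet {
     public:
      void insert(T* p) {
        if (big_) { big_->insert(p); return; }
        if (single_ == nullptr || single_ == p) { single_ = p; return; }
        big_ = std::make_unique<std::set<T*>>();
        big_->insert(single_);
        big_->insert(p);
        single_ = nullptr;
      }
      std::size_t size() const {
        return big_ ? big_->size() : (single_ ? 1 : 0);
      }

     private:
      T* single_ = nullptr;                // inline storage, size <= 1
      std::unique_ptr<std::set<T*>> big_;  // rare overflow representation
    };
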
f3cd80f1f84fedbd2b9ccc064c2c5c28eb9711b8 04-Aug-2017 Mark Heffernan <meheff@google.com> Fix const_iterator in ShapeTree. The existing implementation didn't work
because const Foo<T>* and Foo<const T>* are not convertible in C++, which broke
the internal machinery of the iterator in the const case (illustrated after
this entry).

PiperOrigin-RevId: 164276236
/external/tensorflow/tensorflow/compiler/xla/shape_tree.h
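
The C++ rule behind this fix, illustrated with a hypothetical Foo:

    template <typename T>
    struct Foo { T value; };

    void Demo() {
      Foo<int> foo;
      const Foo<int>* p1 = &foo;    // OK: pointer to const object
      // Foo<const int>* p2 = &foo; // error: Foo<int> and Foo<const int>
                                    // are unrelated class types
      (void)p1;
    }

An iterator that models constness by instantiating its node type with
const T therefore cannot be built from pointers held by a const
ShapeTree<T>; the usual remedy (presumably the one applied here) is to
parameterize the iterator over the container's constness instead.
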
1e236617d308bcfcf030ea363854f02c3c7c64f2 02-Aug-2017 A. Unique TensorFlower <gardener@tensorflow.org> Add an iterator implementing std::forward_iterator_tag to ShapeTree.

This allows it to be iterated over by a range-based for loop (usage sketched
after this entry).

PiperOrigin-RevId: 163950729
/external/tensorflow/tensorflow/compiler/xla/shape_tree.h
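
A usage sketch of the range-based for loop this enables. The iterator's
exact value_type at this revision is an assumption; later versions yield
a std::pair of ShapeIndex and data:

    #include "tensorflow/compiler/xla/shape_tree.h"

    void TouchAll(const xla::Shape& shape) {
      // Constructor taking an initial value per the 17-Apr-2017 entry.
      xla::ShapeTree<int> tree(shape, /*init_value=*/0);
      for (auto& node : tree) {
        node.second += 1;  // node.first: ShapeIndex, node.second: data
      }
    }
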
181816fe27684585bface6e2260a0ff1c890e3e9 29-Jun-2017 Justin Lebar <jlebar@google.com> Speed up TuplePointsToAnalysis.

This analysis is one of the most expensive parts of the HLO optimization
pipeline.

- Avoid one or two unnecessary hashtable lookups in
PopulateDefinedBuffersAndAliases.

- Add a mode to ShapeTree wherein we avoid copying Shapes.

- Use templated functors rather than std::function in ShapeTree's
iterators, thus avoiding the overhead of std::function (contrasted after
this entry).

PiperOrigin-RevId: 160487485
/external/tensorflow/tensorflow/compiler/xla/shape_tree.h
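
A schematic contrast of the functor change in the last bullet; the names
are illustrative, not the ShapeTree API:

    #include <functional>

    // Template parameter: the callback's concrete type is known, so the
    // compiler can inline it at every call site.
    template <typename Fn>
    void ForEachInlined(Fn&& fn) {
      for (int i = 0; i < 3; ++i) fn(i);
    }

    // std::function: type erasure adds an indirect call per invocation
    // (and possibly an allocation when the callable is captured).
    void ForEachErased(const std::function<void(int)>& fn) {
      for (int i = 0; i < 3; ++i) fn(i);
    }
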
05412bd367198ec491ca034b4bc634784c03125c 07-Jun-2017 Mark Heffernan <meheff@google.com> [XLA] Simplify Shape traversal visitors.
Simplify shape traversal visitors in ShapeUtil and ShapeTree. Add a non-Status form, since most callers of the traversal methods do not need one, and remove the is_leaf parameter from ShapeTree.ForEach* as it is rarely used.

PiperOrigin-RevId: 158201574
/external/tensorflow/tensorflow/compiler/xla/shape_tree.h
2ee09b873a7f658fba151d8e39d2a8bc67e136a6 02-Jun-2017 Mark Heffernan <meheff@google.com> [XLA] Various improvements to ShapeTree.
Add support for holding non-copyable types, operator==, and a
CopySubtreeFrom method for copying a subtree from one ShapeTree to
another.

PiperOrigin-RevId: 157777636
/external/tensorflow/tensorflow/compiler/xla/shape_tree.h
ea910532bc21e265d5af7bdcd63d836e4204acf2 17-Apr-2017 A. Unique TensorFlower <gardener@tensorflow.org> Fix ShapeTree operator= to actually work. Also add some tests.
Change: 153385325
/external/tensorflow/tensorflow/compiler/xla/shape_tree.h
a241fe8ec5bc46e06c48c3ad609f00bc7a5030d9 17-Apr-2017 A. Unique TensorFlower <gardener@tensorflow.org> [XLA] Improve ShapeTree implementation.

1) Ensure PODs are zero-initialized via the shape-constructor; i.e.
ShapeTree<bool>(shape) has all data items initialized to false.
2) Add a default constructor, which makes it possible to use operator[] in a
FlatMap<_, ShapeTree<_>>. This creates a tree where !ShapeTree::IsValid
(usage sketched after this entry).
3) Change the representation so that the root node holds the shape by value,
while non-root nodes don't hold a shape. This is "cleaner" than holding an
unused shape in non-root nodes, and also simplifies the implementation.
There is now a struct internal::ShapeTreeNode only used by ShapeTree.
4) Simplify constructors to delegate to a single implementation.
5) Fix bug in the ShapeTree(Shape shape, T init_value) constructor, which
caused it to work only for T=bool.
6) Fix documentation to indicate that a T data value is held for every node in
the tree, not just leaf nodes. That was the original behavior, which is
unchanged in this CL; it's just the documentation that was wrong.
Change: 153305130
/external/tensorflow/tensorflow/compiler/xla/shape_tree.h
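
A usage sketch of points 1) and 2) above. The FlatMap include path and
default hashing are assumptions; IsValid is taken from the entry's wording:

    #include "tensorflow/compiler/xla/shape_tree.h"
    #include "tensorflow/core/lib/gtl/flatmap.h"

    void Sketch(const xla::Shape& shape) {
      // 1) POD payloads are zero-initialized: every bool starts false.
      xla::ShapeTree<bool> visited(shape);

      // 2) The default constructor makes operator[] on a FlatMap work;
      //    the resulting tree reports !IsValid() until assigned to.
      tensorflow::gtl::FlatMap<int, xla::ShapeTree<bool>> per_id;
      xla::ShapeTree<bool>& t = per_id[42];
      (void)t;
      (void)visited;
    }
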
1e67c90e2caceeff82d09793d1ef5fa0300d219b 09-Jan-2017 Peter Hawkins <phawkins@google.com> Initial open-source release of XLA: Accelerated Linear Algebra.

XLA is a compiler-based linear algebra execution engine that targets CPUs, GPUs and custom accelerators.

XLA is still experimental; we are releasing it early to get the community involved.
Change: 143990941
/external/tensorflow/tensorflow/compiler/xla/shape_tree.h