History log of /external/tensorflow/tensorflow/python/ops/data_flow_grad.py
Revision Date Author Comments
f06f18e57ff17297e6e20c889d51f141f841496f 10-Aug-2017 A. Unique TensorFlower <gardener@tensorflow.org> Added parallel version of DynamicStitchOp (named ParallelDynamicStitchOp) with
slightly different semantics.
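As a rough illustration (a minimal pure-Python sketch, not the actual op implementation), DynamicStitch merges slices from several inputs into one output according to parallel index lists; in the serial op, later inputs win on duplicate indices, an ordering guarantee the parallel variant relaxes:

```python
def dynamic_stitch(indices, data):
    """Sketch of DynamicStitch semantics on flat Python lists.

    indices: list of lists of destination positions.
    data:    list of lists of values, parallel to indices.
    On duplicate indices, later inputs overwrite earlier ones,
    matching the serial op's documented behavior.
    """
    size = max(i for idx in indices for i in idx) + 1
    out = [None] * size
    for idx, vals in zip(indices, data):
        for i, v in zip(idx, vals):
            out[i] = v
    return out

# merged[i] comes from whichever input listed index i (last one wins)
merged = dynamic_stitch([[0, 2], [1, 3]], [[10, 30], [20, 40]])
# merged == [10, 20, 30, 40]
```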

PiperOrigin-RevId: 164796436
/external/tensorflow/tensorflow/python/ops/data_flow_grad.py
cdecf416365c85f8274393e097ecab163cbea7c3 09-Mar-2017 Shanqing Cai <cais@google.com> Enable the direct use of TensorHandles as feed values through ResourceHandles

This is motivated by, among other goals, the need to enhance memory efficiency during TFDBG's stepper operations. The stepper caches TensorHandles to already-continued-to tensors and uses them as feeds if later continue-to actions depend on those tensors as transitive inputs. Previously, however, the TensorHandles had to be converted to Numpy arrays by calling eval(), and the Numpy arrays were then fed back to subsequent Session.run() calls. This mode of operation involved at least two unnecessary copies (tensor-to-numpy and numpy-to-tensor).

This CL makes it possible to use the ResourceHandle representations of TensorHandles directly as feed values, eliminating the need for the aforementioned copying.

To this end, the following changes are made:
1) The underlying representation of TensorHandles is changed from string to ResourceHandle. A custom numpy struct type is created to allow ResourceHandles of the TensorHandle subtype to be fed during Session.run() calls.
2) Added GetSessionHandleOpV2, which deprecates GetSessionHandleOp. The V2 op outputs a DT_RESOURCE Tensor instead of the string Tensor of the deprecated version.
Change: 149672538
/external/tensorflow/tensorflow/python/ops/data_flow_grad.py
612bae7e9fbe6ebf09a5e90279897baf950e5f28 09-Sep-2016 Vijay Vasudevan <vrv@google.com> Rename NoGradient -> NotDifferentiable, to make it clearer
when it should be used.

Keep the old name around for temporary backwards compatibility.
Change: 132700646
/external/tensorflow/tensorflow/python/ops/data_flow_grad.py
a558c6e3b38846727873b5afbbc3ba309ae5dff5 14-Jun-2016 Olivia Nordquist <nolivia@google.com> Execute TODOs to move
client/graph_util.py,
ops/common_shapes.py,
ops/constant_op.py, and
ops/op_def_library.py to framework/.
Also moved 2 corresponding test files and fixed some linting errors
Change: 124885409
/external/tensorflow/tensorflow/python/ops/data_flow_grad.py
0cf9ed3a719c0782695154d5a0bca260001cec15 02-Jun-2016 A. Unique TensorFlower <nobody@tensorflow.org> Update copyright for 3p/tf/python.
Change: 123900456
/external/tensorflow/tensorflow/python/ops/data_flow_grad.py
b1a6216a82c78bec2ed9c881c51629eb1fa4a7ee 20-Apr-2016 Eugene Brevdo <ebrevdo@gmail.com> Add allow_small_batch attribute to QueueInterface, and a new op
called DequeueUpToOp for Queues. In python land, there is a new
Queue.dequeue_up_to method. No queues support this dequeue option
for now. If a user calls dequeue_up_to, an error is currently returned
at runtime.
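The difference between the strict and permissive dequeue behaviors described above can be sketched in plain Python (a hypothetical SketchQueue, only to contrast the two semantics; not the TensorFlow implementation):

```python
from collections import deque

class SketchQueue:
    """Minimal sketch contrasting dequeue_many with dequeue_up_to
    on a queue whose producer has finished (closed the queue)."""
    def __init__(self, items):
        self._items = deque(items)

    def dequeue_many(self, n):
        # Strict: fails if fewer than n elements remain.
        if len(self._items) < n:
            raise RuntimeError("insufficient elements for dequeue_many")
        return [self._items.popleft() for _ in range(n)]

    def dequeue_up_to(self, n):
        # Permissive: returns whatever is left, up to n elements.
        return [self._items.popleft()
                for _ in range(min(n, len(self._items)))]

q = SketchQueue([1, 2, 3])
batch = q.dequeue_up_to(5)   # returns [1, 2, 3] instead of failing
```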
Change: 120341224
/external/tensorflow/tensorflow/python/ops/data_flow_grad.py
098f930de4ef044021f3ef1d3cdd6848c23eddb0 10-Apr-2016 Yuan Yu <yuanbyu@google.com> This is another step to make TensorFlow more interactive and flexible to users. It allows a tensor produced by a run call to stay "in-place" so that a future run call can use it in-place. To achieve this, a run call can now return a handle of a tensor to the client, which can then be fed to a subsequent run call. This feature is complementary to partial run, though there is some overlap.

Here are a few properties of the current implementation:

1. Tensors are stored in the state of a session. The tensors are garbage collected if the client doesn't have a reference to the tensor or the session is closed.

2. There is no change to the current session API. We introduced two ops to manage the conversions between tensors and their handles. (There is a third op to garbage collect a tensor.) See the example below.

3. It fits quite well into the current feed-fetch design/implementation. It tries to reuse the graph (and caches) as much as possible so as to keep things efficient.

Below is a simple example. More examples can be found in session_ops_test.py.

# Return a handle (assumes sess is an active default tf.Session).
a = tf.constant(10)
b = tf.constant(5)
c = tf.mul(a, b)
h = tf.get_session_handle(c).eval()

# Feed a tensor handle.
f, x = tf.get_session_tensor(dtypes.int32)
y = tf.mul(x, 10)
result = sess.run(y, feed_dict={f: h.handle})
# result == 500
Change: 119481352
/external/tensorflow/tensorflow/python/ops/data_flow_grad.py
7760ce56fc3ab4ab8cdc408e29d8ad8b539c417e 11-Feb-2016 Josh Levenberg <josh11b@tensorflow.org> Get rid of some import cruft.
Change: 114374558
/external/tensorflow/tensorflow/python/ops/data_flow_grad.py
e59493941cf1f9ff82dc01499005bdfa28842f60 27-Jan-2016 Eugene Brevdo <ebrevdo@gmail.com> Factor TensorArray functionality into its own python files.
Change: 113202608
/external/tensorflow/tensorflow/python/ops/data_flow_grad.py
697084c97b880f6845a9158a348f12f4e0ed8d35 27-Jan-2016 Eugene Brevdo <ebrevdo@gmail.com> Fix bugs in TensorArray gradients + unit tests.

Two major bugs:

* Multiple calls to tf.gradients will create multiple TensorArrayWrites to the
same gradient slot from the exact same gradient source. This gets treated as
a write + an add, and as a result gradients of TensorArrayRead can be
double-counted. The solution is to create a separate TensorArray for each
call to tf.gradients. This can be done by:
1. looking at the name of the input gradient to e.g. TensorArrayRead
2. slicing off the prefix (e.g. "gradients", "gradients_1", etc)
3. passing this to the tensor_array_grad op which
4. uses this as a suffix to the original name when creating or looking up a new
TensorArray object for storing the gradients.
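The prefix-slicing in steps 1-2 can be sketched as follows (a hypothetical helper for illustration only; the real logic lives in the gradient registration):

```python
import re

def gradient_source_prefix(grad_tensor_name):
    """Return the 'gradients' / 'gradients_1' / ... prefix of a
    gradient tensor's name, used to key a separate TensorArray per
    tf.gradients call. Hypothetical helper mirroring steps 1-2 above."""
    m = re.match(r"(gradients(?:_\d+)?)/", grad_tensor_name)
    return m.group(1) if m else ""

# a second tf.gradients call produces names under "gradients_1/..."
gradient_source_prefix("gradients_1/while/TensorArrayRead_grad/x:0")  # -> "gradients_1"
```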

* The initial gradient TensorArrayWrite performed a shallow copy of the
write tensor. Since we support aggregation to the same slot from different
sources, modifying the PersistentTensor in place can affect the original
tensor elsewhere.

Instead, we now specifically disallow multiple reads / packs from a TensorArray.
This simplifies the code immensely and removes the need to support gradient
aggregation. It also makes the interface much more functional. Once a
Tensor has been read out of the TensorArray, it can be used in several places.
However, gradient aggregation can be performed outside the TensorArray.

Additional improvements:

* TensorArray constructor is now "read_once", which means after a Read or
Pack operation, the reference inside TensorArray to the Tensor is removed. This
frees up memory early.
Change: 113193579
/external/tensorflow/tensorflow/python/ops/data_flow_grad.py
e14090d217fc4e7e49ac04ccbc50acdba8b9f120 22-Jan-2016 Eugene Brevdo <ebrevdo@gmail.com> Add TensorArray data_flow_ops wrapper and TensorArray gradients.

These are undocumented features and the API can and will change.
Change: 112733605
/external/tensorflow/tensorflow/python/ops/data_flow_grad.py
2712ed6de036e16b2599fcab2071acd7bbf8b17a 19-Jan-2016 Eugene Brevdo <ebrevdo@gmail.com> Implement TensorArray forward ops.

Allows dynamic writing to and reading from an array of Tensors (the size of the array is determined at run time).

This is useful for, e.g., While loops. Each while iteration can write to the Array, and the final handle can be used with Concat to get all the outputs in one Tensor.
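The write-per-iteration, concat-at-the-end pattern can be sketched in plain Python (a minimal stand-in class, not the TensorFlow op):

```python
class SketchTensorArray:
    """Minimal sketch of a TensorArray: dynamically sized,
    write-by-index, concat at the end."""
    def __init__(self):
        self._elems = {}

    def write(self, index, value):
        self._elems[index] = value
        return self  # returning self mimics the flow-through handle

    def read(self, index):
        return self._elems[index]

    def concat(self):
        return [v for _, v in sorted(self._elems.items())]

# each "loop iteration" writes one element; concat gathers all outputs
ta = SketchTensorArray()
for i in range(3):
    ta = ta.write(i, i * i)
# ta.concat() == [0, 1, 4]
```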

No gradient support yet, this will be implemented in a future CL.
Change: 112493043
/external/tensorflow/tensorflow/python/ops/data_flow_grad.py
2b672c4a2f6aeaea8457fd4941f48f5a9e80d283 08-Jan-2016 A. Unique TensorFlower <nobody@tensorflow.org> Add gradients for DynamicPartition
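A pure-Python sketch of why this gradient works (illustrative only, not TensorFlow code): dynamic_partition routes data into partitions by index, so its gradient routes the per-partition upstream gradients back to the original positions, which is a stitch:

```python
def dynamic_partition(data, partitions, num_partitions):
    """Sketch: route data[i] into output partition partitions[i],
    remembering each element's original position."""
    outs = [[] for _ in range(num_partitions)]
    originals = [[] for _ in range(num_partitions)]
    for pos, (x, p) in enumerate(zip(data, partitions)):
        outs[p].append(x)
        originals[p].append(pos)
    return outs, originals

def partition_grad(originals, partition_grads, size):
    """Sketch of the DynamicPartition gradient: stitch per-partition
    gradients back to the original positions."""
    grad = [0] * size
    for idx, g in zip(originals, partition_grads):
        for i, v in zip(idx, g):
            grad[i] = v
    return grad

data = [10, 20, 30, 40]
parts = [0, 1, 0, 1]
outs, originals = dynamic_partition(data, parts, 2)
# outs == [[10, 30], [20, 40]]; the gradient of identity routes back:
grad = partition_grad(originals, outs, len(data))
# grad == [10, 20, 30, 40]
```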
Change: 111650709
/external/tensorflow/tensorflow/python/ops/data_flow_grad.py
854f49bd43588c062b046384f239f64a3d819702 25-Nov-2015 Manjunath Kudlur <keveman@gmail.com> TensorFlow: Upstream changes to git

Changes:
- Updates to docs
- Several changes for Python 3 compatibility
- Added license headers

Base CL: 108710566
/external/tensorflow/tensorflow/python/ops/data_flow_grad.py
9c3043ff3bf31a6a81810b4ce9e87ef936f1f529 20-Nov-2015 Manjunath Kudlur <keveman@gmail.com> TensorFlow: Improve performance of Alexnet

Changes:

* error message that refers to removed `DefaultSession` method.
* -Wnull-conversion warnings
* the "_start_time" attr for recvs when the flag "--brain_enable_scheduling_for_recvs" is set.
* typo in tutorial data download progress message.
* a typo ("however their installing"=>"however installing").
* typo, rename "TensorFlow Mechanics" to "How To" to be consistent with the website.
* a typo ("subtact"=>"subtract").
* protobuf examples in comments in tensorflow::Example.proto.
* formula formatting in MNIST beginner tutorial
* negative fraction-of-queue-full stats
* protobuf inclusion path so that Android demo will build under Blaze.
* small typo (moderatly > moderately)
* Session.run() to check that tensor arguments come from the session's graph.
* another six import
* seq2seq typo in bazel command

Base CL: 108349164
/external/tensorflow/tensorflow/python/ops/data_flow_grad.py
f2102f4e2c1c87f1d1bf9ab856a2849c54478760 12-Nov-2015 Vijay Vasudevan <vrv@google.com> TensorFlow: upstream changes from the afternoon.

Changes:

- futurize --stage2 changes for Python 3 compatibility by @girving.

- Small updates to documentation by @vrv, schuster and others

- Account for failure of std::thread::hardware_concurrency by @ebrevdo.

- More changes for backwards-compatibility tests by Josh

- Updates to python op doc generation by Josh

- Added support for using the best-fit allocator via ConfigProto by @vrv.

- Rename LocalSession to DirectSession, since local was a bad name for
it.

- Enable tf.nn.moments() to work with tensors of unknown shape by @mrry.
GITHUB_ISSUE: 139

- Changes for Android build by Andrew.

Base CL: 107645181
/external/tensorflow/tensorflow/python/ops/data_flow_grad.py
f41959ccb2d9d4c722fe8fc3351401d53bcf4900 07-Nov-2015 Manjunath Kudlur <keveman@gmail.com> TensorFlow: Initial commit of TensorFlow library.
TensorFlow is an open source software library for numerical computation
using data flow graphs.

Base CL: 107276108
/external/tensorflow/tensorflow/python/ops/data_flow_grad.py