f83693a2aa9cf6c90d0cc214eab7524817390c54 |
|
13-Feb-2018 |
Igor Saprykin <isaprykin@google.com> |
Allow other types of variables to act as a resource variable. Introduce resource_variable_ops.is_resource_variable() function that returns true if an _should_act_as_resource_variable attribute is set. PiperOrigin-RevId: 185559202
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
d90054e7c0f41f4bab81df0548577a73b939a87a |
|
07-Feb-2018 |
Michael Case <mikecase@google.com> |
Merge changes from github. PiperOrigin-RevId: 184897758
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
b9018073ec1afc7dfc302ab171db8bf5b177c2dd |
|
07-Feb-2018 |
Yifei Feng <yifeif@google.com> |
Add pylint check for W0611 unused-import in ci_sanity.sh and fix existing pylint errors. PiperOrigin-RevId: 184790548
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
351c0a533a111636333b4ebeede16485cf679ca9 |
|
25-Jan-2018 |
Yifei Feng <yifeif@google.com> |
Add C0330 bad-continuation check to pylint. PiperOrigin-RevId: 183270896
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
2968447d32bdfd0dd6fafabfcd1aafd6dc261803 |
|
23-Jan-2018 |
Anna R <annarev@google.com> |
Adding tf_export decorators/calls to TensorFlow functions and constants. PiperOrigin-RevId: 182862075
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
71896cc7e5bd3d1b8b5bb615eac7bebf86fa998c |
|
04-Jan-2018 |
Raghuraman Krishnamoorthi <raghuramank@google.com> |
Merge changes from github. PiperOrigin-RevId: 180746153
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
2d8206b6b5daf8f5bedd94f32c61eb2c00fd7c25 |
|
07-Dec-2017 |
Skye Wanderman-Milne <skyewm@google.com> |
Add Python checks to prevent mixing ops from different while loops. The executor can currently catch some errors like this by trying to reconstruct the while loop contexts by tracing the graph from enter nodes, but this doesn't catch everything and can cause hangs or other undesirable behavior. This change puts the check in Python and also provides better debugging information. In addition, this change refactors some logic from control_flow_ops.py to a new file, control_flow_util.py. This is so we can call CheckInputFromValidContext from ops.py without creating circular imports between ops.py and control_flow_ops.py. PiperOrigin-RevId: 178161679
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
e2a60582bf28fa29c871736d10edad06e660776d |
|
16-Nov-2017 |
James Keeling <jtkeeling@google.com> |
Correct markdown for code segments in gradients_impl PiperOrigin-RevId: 176015049
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
334c7bda17fb0ad1c437461a99a487d4610d310b |
|
02-Nov-2017 |
A. Unique TensorFlower <gardener@tensorflow.org> |
Change gradient colocation logic to avoid conflicting colocations/devices in cases where gradients are gated. PiperOrigin-RevId: 174263147
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
4723f8f6ed4e43632ea90456bd36a1f8e8b1aeb8 |
|
30-Oct-2017 |
RJ Ryan <rjryan@google.com> |
Support SymbolicGradient for functions with non-trainable arguments. The non-trainable arguments end up with None as their incoming out_grad, which is not a valid input to SymbolicGradient (inputs have to be convertible to Tensor, and None isn't). PiperOrigin-RevId: 173901727
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
bf1fad214febef6af5c101d8f953d0109c46dfbb |
|
24-Oct-2017 |
A. Unique TensorFlower <gardener@tensorflow.org> |
Fix NCCL rewrite bug when rerunning sessions (assigned device id is not stable). Fix collocate_gradients for initial losses. Remove NcclBroadcast gradient test for now. The generated AddN to accumulate the broadcast outputs before passing it to the gradient function is CPU only and cannot be collocated with NcclBroadcast on the GPU. PiperOrigin-RevId: 173306409
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
d6b616925657bc44de9bd6a5ddf3437e9c7ba88b |
|
12-Oct-2017 |
Eugene Brevdo <ebrevdo@google.com> |
Wrap grad_ys tensors passed to tf.gradients in the tf.gradients name scope. Fixes #13355. PiperOrigin-RevId: 171972633
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
32dc203f55a7462ddf780c68d619af574daedd46 |
|
05-Oct-2017 |
Eugene Brevdo <ebrevdo@google.com> |
Improve gradient shape validation errors. PiperOrigin-RevId: 171077826
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
549dd6fd66a9b176ee3fe5e7093e4a1654bcbdb1 |
|
04-Sep-2017 |
A. Unique TensorFlower <gardener@tensorflow.org> |
Add an argument `stop_gradients` to `tf.gradients` in order to hold specific tensors constant wrt `xs`. PiperOrigin-RevId: 167501127
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
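The `stop_gradients` semantics this entry introduces can be sketched without TensorFlow: a tensor listed in `stop_gradients` is treated as a constant, so the chain rule does not continue through it. A minimal forward-mode sketch (the `Dual` class and `stop_gradient` helper are hypothetical illustrations, not TensorFlow code):

```python
class Dual:
    """Minimal forward-mode dual number: value plus derivative wrt x."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        return Dual(self.val + other.val, self.dot + other.dot)
    def __mul__(self, other):
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def stop_gradient(t):
    # Treat t as a constant: keep its value, drop its derivative.
    return Dual(t.val, 0.0)

x = Dual(3.0, 1.0)               # seed dx/dx = 1
b = Dual(2.0) * x                # b = 2x, so db/dx = 2
y = x + b                        # dy/dx = 1 + 2 = 3
y_stopped = x + stop_gradient(b) # with b held constant, dy/dx = 1

assert y.val == 9.0 and y.dot == 3.0
assert y_stopped.val == 9.0 and y_stopped.dot == 1.0
```

In `tf.gradients` terms this mirrors passing `stop_gradients=[b]`: the gradient of `x + b` with respect to `x` becomes 1 rather than 3.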
|
e705629ea7bcdc257f567f55100e3f793f3d1372 |
|
28-Aug-2017 |
A. Unique TensorFlower <gardener@tensorflow.org> |
Fix device for ResourceVariables, and add tape.watch on reading them. Print status for errors in resource variable lookups. PiperOrigin-RevId: 166744927
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
90d6421c5e0898fb840197d9533c2f8ba1a7c651 |
|
11-Jul-2017 |
Shanqing Cai <cais@google.com> |
Merge changes from github. END_PUBLIC

Merged commits:
- d0f53f77f (Penghao Cen): Minor typo fix (#11323)
- 02fcf564e (Chris Song): Fix misspellings.
- 764c9b6b4 (Louis Tiao): Fixed typo in docstring.
- f8cd1283e (Shanqing Cai): Chaser.
- 01383b946 (Shanqing Cai): Adapt TensorFlowTestCase.setUp() to new reset_default_graph() semantics. Avoid calling reset_default_graph() directly to prevent exceptions in cases where test methods error out from within nested graph contexts, which can leave _default_graph_stack non-empty in certain Python versions.
- 0ffc37890 (Amit Patankar): Removing second declaration of functions.
- f9c9cacb0: Refactor ElementalIrEmitter's slice index finding code into IrArray::Index::SourceIndexOfSlice(). PiperOrigin-RevId: 161140653
- ba297aec9: Update ops-related pbtxt files. PiperOrigin-RevId: 161138258
- 68d666737 (Alexandre Passos): Fixes a reentrant lock issue with tensors using ndarray memory which uses tensor memory. PiperOrigin-RevId: 161137788
- a2ee8bca3: Add support for int8 x int8 -> int32 matrix multiplication via cublasGemmEx to stream_executor. PiperOrigin-RevId: 161137741
- 755fa7b50 (Mark Daoust): Block generate_test and docs generation from running in Python 3. Doc generation is currently unsupported in Python 3; both end in errors in Python 3.5.1+. PiperOrigin-RevId: 161137467
- 97cbcac45 (Peter Hawkins): [TF:XLA] Fix failure in functionalize_control_flow rewrite for Enter nodes that are unused. Make sure we ignore such nodes without producing an error. PiperOrigin-RevId: 161136545
- dabcb60bc: [XLA] Add reasonable error messages to Builder::Build for bad parameter numbers. PiperOrigin-RevId: 161136262
- 0cbd249e8: Add complex tensor support to `matrix_determinant`. PiperOrigin-RevId: 161132422
- 335f1f14d: Extend static shape inference for SparseTensors with dense_shapes constructed using slicing. PiperOrigin-RevId: 161132391
- 53604916e (Jianwei Xie): Fixed the missing labels test in TPUEstimator. PiperOrigin-RevId: 161131282
- 9f57dc8dd (Bruno Rosa): Use -mcpu instead of -march for ppc64le; -march is not supported by gcc on ppc64le.
- 7d5c74a9c (Skye Wanderman-Milne): Move duplicate detection logic from Graph to FunctionLibraryDefinition. Turns out this is more useful, since there are many function libraries that don't belong to a graph; this will be used in a future change. Note that this maintains the current behavior of Graph. In addition, updates FunctionDefsEqual() to handle unset attr entries. PiperOrigin-RevId: 161126628
- 2caec3af1 (Shanqing Cai): Disable more timeseries py tests failing in OSS PIP GPU builds. PiperOrigin-RevId: 161124799
- 0b5cce367 (Eugene Brevdo): Get TopK op working on GPU again; extend using cub's radix sort. 1. Undo rollback of Andreas Kirsch's initial implementation. 2. Use cub segmented radix sort instead of Andreas' heap-based impl for large k and small num_cols (thresholds of k=100, n=1000 determined empirically). 3. Use cub segmented radix sort if k == num_cols (this case is always faster). 4. Added benchmarks. Benchmarks show that the GPU implementation is up to 3x slower for small k but can be 10x faster for large num_cols and k.

TopK benchmarks (m = 128 throughout; wall time in s, throughput in GB/s):

  n       k       CPU wall   CPU thr   GPU wall   GPU thr
  10      5       0.000166   0.0077    0.000796   0.00161
  10      9       0.00017    0.00751   0.000796   0.00161
  10      10      0.00017    0.00753   0.000775   0.00165
  100     1       0.000155   0.0826    0.000796   0.0161
  100     50      0.000247   0.0519    0.0008     0.016
  100     99      0.000261   0.049     0.000794   0.0161
  100     100     0.000239   0.0536    0.000777   0.0165
  1000    1       0.000324   0.395     0.000916   0.14
  1000    10      0.00042    0.305     0.000902   0.142
  1000    500     0.0011     0.116     0.00097    0.132
  1000    990     0.00133    0.0962    0.000993   0.129
  1000    1000    0.00102    0.126     0.000964   0.133
  10000   10      0.002      0.64      0.00288    0.445
  10000   100     0.00233    0.549     0.00325    0.394
  10000   5000    0.0127     0.101     0.00381    0.336
  10000   9900    0.015      0.0853    0.00438    0.292
  10000   10000   0.0104     0.123     0.00427    0.3
  100000  100     0.0148     0.865     0.0262     0.488
  100000  1000    0.0201     0.636     0.0263     0.486
  100000  50000   0.214      0.0599    0.0322     0.398
  100000  99000   0.262      0.0489    0.0377     0.34
  100000  100000  0.118      0.108     0.0365     0.351

END_PUBLIC BEGIN_PUBLIC Automated g4 rollback of changelist 157169178 PiperOrigin-RevId: 161476569
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
50b999a8336d19400ab75aea66fe46eca2f5fe0b |
|
28-Jun-2017 |
A. Unique TensorFlower <gardener@tensorflow.org> |
Merge changes from github. PiperOrigin-RevId: 160344052
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
ee112cff56081fb9d0b74c987a8935acc360b05c |
|
11-May-2017 |
Benoit Steiner <bsteiner@google.com> |
Merge changes from github. PiperOrigin-RevId: 155709893
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
fb56fc90167c3919cb59f753f233ef2a41469cb2 |
|
31-Mar-2017 |
Yuan Yu <yuanbyu@google.com> |
This is to address a long-standing issue (probably from day 1 of TensorFlow) with gradients. When we have programs like this:

  y1 = F(x)
  y2 = G(y1)
  g = tf.gradients([y1, y2], x)

the current TF computes g = tf.gradients(y1, x), which breaks some intuitive mathematical properties. This CL makes the following mathematical properties hold:

  g = tf.gradients([tf.identity(y1), y2], x)
  g = tf.gradients([tf.identity(y1), tf.identity(y2)], x)
  g = tf.gradients(y1 + y2, x), if y1 and y2 can be added.

Change: 151783751
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
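The property enforced by the entry above (the gradient of a list of outputs behaves like the gradient of their sum) can be checked numerically without TensorFlow. This sketch uses hypothetical stand-ins F(x) = x^2 and G(y1) = 3*y1 and a central finite difference:

```python
def f(x):
    # y1 = F(x) = x^2
    return x * x

def g(y1):
    # y2 = G(y1) = 3*y1, so y2 = 3x^2 as a function of x
    return 3.0 * y1

def grad_sum(x, eps=1e-6):
    # d(y1 + y2)/dx via a central finite difference; this is the value
    # tf.gradients([y1, y2], x) should return after this change.
    def total(v):
        y1 = f(v)
        return y1 + g(y1)
    return (total(x + eps) - total(x - eps)) / (2 * eps)

# Analytically d(y1 + y2)/dx = 2x + 6x = 8x, so 16 at x = 2.
assert abs(grad_sum(2.0) - 16.0) < 1e-4
```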
|
a75b6df69e1b6965fcbac5df68e89dc3cbe9931e |
|
13-Mar-2017 |
A. Unique TensorFlower <gardener@tensorflow.org> |
[TF:XLA] Add separate_compiled_gradients to control gradient scopes. Change: 149973410
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
b03a72c804d2e6ececcbe4fe4cd603edc9f8049d |
|
08-Mar-2017 |
RJ Ryan <rjryan@google.com> |
Add tf.spectral, a module for spectral operations. * Move existing FFT ops to tf.spectral. * Add ops for computing 1D, 2D and 3D Fourier transforms of real signals. * Define a gradient for the 1D and 2D transforms. Change: 149504891
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
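The 1-D real-signal transform this entry adds can be illustrated with a pure-Python DFT (a hypothetical sketch of what an op like `tf.spectral.rfft` computes, not the TensorFlow kernel): a real N-point input needs only N//2 + 1 complex output bins, because the spectrum of a real signal is conjugate-symmetric.

```python
import cmath

def rfft(signal):
    """Naive DFT of a real signal, keeping only the N//2 + 1 unique bins."""
    n = len(signal)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, x in enumerate(signal))
            for k in range(n // 2 + 1)]

spectrum = rfft([1.0, 0.0, -1.0, 0.0])
assert len(spectrum) == 3                  # 4 real samples -> 3 complex bins
assert abs(spectrum[1] - 2.0) < 1e-9       # energy concentrated in bin 1
```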
|
2bfb700080d6b3aa5b4ec00c5928e87db0d56677 |
|
17-Feb-2017 |
A. Unique TensorFlower <gardener@tensorflow.org> |
Change gradients to be computed with respect to the variable ref rather than the snapshot. Variables may create another snapshot, or their ref may be exposed via the public API (e.g., var.op.outputs[0] or graph.as_graph_element(var), which happens fairly often inside libraries or during collection serialization). On the other hand, tf.gradients() uses convert_to_tensor(), which returns a snapshot, and gradients were computed with respect to this particular snapshot, which made the gradients incorrect. Change: 147800865
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
a1d2a4ab90bd9df7312408f3971a2236810a1074 |
|
13-Feb-2017 |
Alexandre Passos <apassos@google.com> |
Backprop through resource handles. Change: 147371282
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
46c8e8bd966286ad3f192948fa64f2e9ee8f0dcb |
|
11-Feb-2017 |
Eugene Brevdo <ebrevdo@google.com> |
Add _XlaScope attribute to jit_scope to avoid fusing separate adjacent fused blocks. Gradients get their own separate scope based on the scope of the forward op. Provide proper XlaScope for Defuns as well (each Defun gets its own scope; their gradients get their own scope). Also move jit scope gradient unit tests out of core gradients to contrib.compiler. This is just the python side that sets the attribute; the C++ changes will come in a separate CL. Change: 147216860
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
b67f844c78e7c47c91a4cd394885073dc049ba00 |
|
30-Jan-2017 |
Eugene Brevdo <ebrevdo@google.com> |
tf.gradients respects jit compiled forward ops and compiles their gradients. Change: 145999026
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
7f79424f63c5684a43d47de216ae152144ddeecf |
|
24-Jan-2017 |
Geoffrey Irving <geoffreyi@google.com> |
Disallow gradients of complex tensors unless grad_ys is set. Previously, the gradient of a complex tensor would silently be the gradient of its real part. This is sufficiently nonsensical to be declared a bug, not a feature change. Fixes #2818. Change: 145432311
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
56fc8834c736878af34f00caa95e7d4a57ab01d2 |
|
24-Jan-2017 |
Shanqing Cai <cais@google.com> |
Merge changes from github. Change: 145363673
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
0e226af7eed5e2764aa8acb825af4cd3e06d2452 |
|
11-Jan-2017 |
A. Unique TensorFlower <gardener@tensorflow.org> |
Switch tf.concat_v2 references in third_party/tensorflow to tf.concat. Change: 144153795
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
1243fbee608ac89299a69fd12fc338325116c219 |
|
30-Dec-2016 |
A. Unique TensorFlower <gardener@tensorflow.org> |
Deal with the case where _SwitchGrad() is not called the first time for a while loop (i.e., non-differentiable outputs). Change: 143264614
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
cb4acf5e47574deccf0c578d6d1d18d74f6117af |
|
20-Dec-2016 |
Andrew Selle <aselle@google.com> |
Rename usages of tf.mul, tf.neg, tf.sub that are used internally Change: 142595367
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
709ee95325f0814100e5b6378d4405020cf9d8b8 |
|
16-Dec-2016 |
A. Unique TensorFlower <gardener@tensorflow.org> |
Replace array_ops.unpack with array_ops.unstack in third_party/tensorflow. Change: 142248389
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
c5dc750ba9fab7e7f1f05ee0e0cdb04ae96e0e32 |
|
15-Dec-2016 |
A. Unique TensorFlower <gardener@tensorflow.org> |
Switch array_ops.pack/unpack to array_ops.stack/unstack. Also switch a few remaining references to tf.pack/unpack to tf.stack/unstack. Change: 142108785
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
d4eb834824d79c6a64a3c4a1c4a88b434b73e63e |
|
07-Dec-2016 |
A. Unique TensorFlower <gardener@tensorflow.org> |
Switch all tf.concat(concat_dim, value, name) calls in third_party/tensorflow to tf.concat_v2(value, axis, name). Change: 141255675
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
85eeec0d415a1478bbeffc3d4545c795bee64e9f |
|
19-Nov-2016 |
Jonathan Hseu <jhseu@google.com> |
Automated rollback of change 139400135 Change: 139632235
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
7e8728662120df0a80720bb7527613f96d58271e |
|
17-Nov-2016 |
Jonathan Hseu <jhseu@google.com> |
Rename `Tensor` to `Output` in all Python docs. Generated by running:

  $ find . -name '*.py' | xargs sed -i 's/a `Tensor`/an `Output`/g'
  $ find . -name '*.py' | xargs sed -i 's/A `Tensor`/An `Output`/g'
  $ find . -name '*.py' | xargs sed -i 's/`Tensor`/`Output`/g'
  $ find . -name '*.py' | xargs sed -i 's/`tf.Tensor`/`tf.Output`/g'
  $ find . -name '*.py' | xargs sed -i 's/`Tensors`/`Output`s/g'
  $ find . -name '*.py' | xargs sed -i 's/#Tensor)/#Output)/g'
  $ find . -name '*.py' | xargs sed -i 's/#Tensor\./#Output./g'

Manually fixed up lines that exceeded 80 characters after the change. Change: 139400135
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
833662d41ece9f68c678cdeff649b970030f6e00 |
|
07-Nov-2016 |
A. Unique TensorFlower <gardener@tensorflow.org> |
Better error message for "No gradients provided for any variable" case. Address a few pylint warnings. Change: 138407267
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
818993c7751601527d662d2417f220e4e856e4ef |
|
04-Nov-2016 |
Vijay Vasudevan <vrv@google.com> |
Merge changes from github. Change: 138143557
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|
36621ac3d54eb5dc6397db8a2d117e88ac90f4a7 |
|
03-Nov-2016 |
Patrick Nguyen <drpng@google.com> |
Seal gradients interface. Change: 138098284
/external/tensorflow/tensorflow/python/ops/gradients_impl.py
|