# TensorFlow evaluation metrics and summary statistics

## Evaluation metrics

Metrics are used in evaluation to assess the quality of a model. Most are
"streaming" ops, meaning they create variables to accumulate a running total,
and return both an update tensor to update these variables and a value tensor
to read the accumulated value. Example:

```python
value, update_op = metrics.streaming_mean_squared_error(
    predictions, targets, weight)
```

Most metric functions take a pair of tensors, `predictions` and ground truth
`targets` (`streaming_mean` is an exception: it takes a single value tensor,
usually a loss). Both tensors are assumed to have shape
`[batch_size, d1, ..., dN]`, where `batch_size` is the number of samples in
the batch and `d1, ..., dN` are the remaining dimensions.
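
As a concrete sketch of this accumulation behavior, each run of the update op
folds a new batch into the running totals. The placeholder shapes and values
below are illustrative, not part of the API:

```python
import tensorflow as tf
from tensorflow.contrib import metrics

# Illustrative `[batch_size]`-shaped inputs with batch_size = 2.
predictions = tf.placeholder(tf.float32, shape=[2])
targets = tf.placeholder(tf.float32, shape=[2])
value, update_op = metrics.streaming_mean_squared_error(predictions, targets)

with tf.Session() as sess:
  # Streaming metrics store their running totals in local variables.
  sess.run(tf.local_variables_initializer())
  sess.run(update_op, {predictions: [0., 0.], targets: [1., 1.]})
  print(sess.run(value))  # 1.0: mean of squared errors [1, 1]
  sess.run(update_op, {predictions: [0., 0.], targets: [3., 3.]})
  print(sess.run(value))  # 5.0: mean of squared errors [1, 1, 9, 9]
```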

The `weight` parameter can be used to adjust the relative weight of samples
within the batch. The result of each metric is then a scalar weighted average
of the per-sample values; samples with zero weight are ignored.
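
For instance, here is a minimal sketch of the weighting behavior using
`streaming_mean`; the values and weights are made up for illustration:

```python
import tensorflow as tf
from tensorflow.contrib import metrics

values = tf.constant([1.0, 2.0, 3.0, 4.0])
weights = tf.constant([1.0, 1.0, 2.0, 0.0])  # last sample is ignored
mean, update_op = metrics.streaming_mean(values, weights)

with tf.Session() as sess:
  sess.run(tf.local_variables_initializer())
  sess.run(update_op)
  print(sess.run(mean))  # (1*1 + 2*1 + 3*2) / (1 + 1 + 2) = 2.25
```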

The result is two tensors that should be used like the following for each eval
run:

```python
predictions = ...
labels = ...
value, update_op = some_metric(predictions, labels)

# Streaming metrics keep their running totals in local variables, which
# must be initialized before the first update.
tf.local_variables_initializer().run()

for step_num in range(max_steps):
  update_op.run()

print("evaluation score:", value.eval())
```
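
Putting the pieces together, a complete version of that loop might look like
the following; the data, batching, and choice of metric here are hypothetical
stand-ins:

```python
import numpy as np
import tensorflow as tf
from tensorflow.contrib import metrics

predictions = tf.placeholder(tf.float32, shape=[None])
labels = tf.placeholder(tf.float32, shape=[None])
value, update_op = metrics.streaming_mean_absolute_error(predictions, labels)

# Two hypothetical eval batches.
batches = [
    (np.array([0.5, 1.0]), np.array([1.0, 1.0])),
    (np.array([2.0, 0.0]), np.array([1.0, 1.0])),
]

with tf.Session() as sess:
  sess.run(tf.local_variables_initializer())
  for preds, labs in batches:
    sess.run(update_op, {predictions: preds, labels: labs})
  # Mean absolute error over all four samples: (0.5 + 0 + 1 + 1) / 4
  print("evaluation score:", sess.run(value))  # 0.625
```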