History log of /frameworks/ml/nn/common/operations/Activation.cpp
Revision    Date    Author    Comments
45bf79e5b9fee354fde7c1f64417d9ca4a1da7da 25-Sep-2017 Miao Wang <miaowang@google.com> Clarify the expectation of scale and zeroPoint for affected ops.

- Remove the OperandType constructor taking f_min and f_max as
quantization parameters, as it causes confusion for ops like
LOGISTIC and SOFTMAX.
- Update the documentation to clearly state the expected scale and
zeroPoint for LOGISTIC, SOFTMAX, and CONCATENATION (see the sketch
after this entry).
- Update the tests to directly input scale and zeroPoint.

Bug: 63905942
Test: mm
Test: NeuralNetworksTest pass
Change-Id: Ia450d6ce9509205d22e6383bd7e454afa0568cbb
/frameworks/ml/nn/common/operations/Activation.cpp
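
For context on the entry above: the NNAPI documentation pins the output
quantization of quantized LOGISTIC and SOFTMAX to scale = 1/256 and
zeroPoint = 0, so the full (0, 1) output range maps onto uint8. Below is
a minimal sketch of that expectation, assuming an illustrative Shape
struct and function name (not the literal runtime code):

    #include <cstdint>

    // Illustrative stand-in for the runtime's shape metadata.
    struct Shape {
        float scale;
        int32_t offset;  // zeroPoint
    };

    // Quantized LOGISTIC and SOFTMAX outputs must use scale = 1/256
    // and zeroPoint = 0 per the NNAPI spec.
    bool validateLogisticOrSoftmaxOutput(const Shape& output) {
        return output.scale == 1.f / 256 && output.offset == 0;
    }
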
874d039215516aebdaba2e242609199897fe80c0 23-Sep-2017 Miao Wang <miaowang@google.com> Fix sigmoid and softmax tests and implementation.

- The CPU executor now checks both the scale and offset of the
output Shape.
- The golden references and output range for the tests are updated.

Bug: 63905942
Test: mm
Test: NeuralNetworksTest pass
Change-Id: I9e892ae0de8ea17298dbb7edb96036e1d30c84fb
/frameworks/ml/nn/common/operations/Activation.cpp
be2b22578baf949d7be42ba002cee94304daf53c 22-Sep-2017 Miao Wang <miaowang@google.com> Use softer error reporting instead of CHECK*

- CHECK(x) verifies that condition x holds and calls LOG(FATAL) if it
does not, which results in a call to abort().
- This change uses nnOpsCheck instead, which logs the failing condition
and returns false to the runtime, allowing graceful failures (see the
sketch after this entry).

Bug: 63905942
Test: NeuralNetworkTests pass
Change-Id: I8b1217f777638f974c91fa429449e39d37218af6
/frameworks/ml/nn/common/operations/Activation.cpp
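
A minimal sketch of the soft-check pattern this entry describes. The
macro below is illustrative; the actual nnOpsCheck implementation in the
runtime may differ, but the idea is the same: log the failing condition
and make the enclosing prepare function return false instead of
aborting the process:

    #include <iostream>

    // Illustrative soft check: log the failing condition and return
    // false from the enclosing function, rather than aborting as
    // CHECK()/LOG(FATAL) would.
    #define NN_OPS_CHECK(v)                                        \
        do {                                                       \
            if (!(v)) {                                            \
                std::cerr << "Check failed: " #v << std::endl;     \
                return false;                                      \
            }                                                      \
        } while (0)

    bool genericActivationPrepare(int numInputs, int numOutputs) {
        NN_OPS_CHECK(numInputs == 1);   // fails gracefully on bad input
        NN_OPS_CHECK(numOutputs == 1);
        return true;
    }
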
1b69ceeb5920503f18b6c6c1233b1fa481b6e634 11-Sep-2017 Miao Wang <miaowang@google.com> Move all op preparation functions to OperationsUtils.

- All op preparation functions are moved to OperationsUtils.
- Add a helper function to derive the implicit padding scheme from
explicit paddings (see the sketch after this entry).
- Make all prepare functions return false on error.

Bug: 63905942
Test: NeuralNetworkTests
Change-Id: I16538dbd731a5ca1e6de5e0d0b269e9f386f4d29
/frameworks/ml/nn/common/operations/Activation.cpp
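
A sketch of what deriving an implicit padding scheme from explicit
padding values can look like. The enum values and signature below are
assumptions for illustration, not the actual OperationsUtils API:

    #include <cstdint>

    enum PaddingScheme { kPaddingUnknown = 0, kPaddingSame = 1, kPaddingValid = 2 };

    PaddingScheme derivePaddingScheme(uint32_t paddingLeft, uint32_t paddingRight) {
        if (paddingLeft == 0 && paddingRight == 0) {
            // No explicit padding corresponds to VALID padding.
            return kPaddingValid;
        }
        // Any non-zero explicit padding is treated as SAME here; a real
        // implementation would verify it matches the SAME-padding formula
        // for the given input size, filter size, and stride.
        return kPaddingSame;
    }
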
9f41362ea2d250b89e59c73cc0194c6a6720cc9d 21-Aug-2017 Miao Wang <miaowang@google.com> Implement quantized RELU, RELU1, RELU6 and SOFTMAX.

Bug: 63905942
Test: mm
Test: InceptionV3 quantized end to end test pass

Change-Id: Ibfdfa5d2476f6ee5f2f96aab78c522eeebcc5920
/frameworks/ml/nn/common/operations/Activation.cpp
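
A conceptual sketch (not the actual Activation.cpp kernels) of the
quantized ReLU family: with real = scale * (q - zeroPoint), each op
reduces to a per-element clamp whose bounds are the real-valued limits
mapped into the uint8 domain:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    // Clamp bounds in the real domain (e.g. [0, 6] for RELU6, [-1, 1]
    // for RELU1) are mapped into the quantized domain; plain RELU has
    // no upper bound, so its qMax simply stays at 255.
    void reluXQuant8(const uint8_t* in, uint8_t* out, int count,
                     float scale, int32_t zeroPoint,
                     float realMin, float realMax) {
        int32_t qMin = std::max<int32_t>(0, zeroPoint + std::lround(realMin / scale));
        int32_t qMax = std::min<int32_t>(255, zeroPoint + std::lround(realMax / scale));
        for (int i = 0; i < count; ++i) {
            int32_t v = in[i];
            out[i] = static_cast<uint8_t>(std::min(qMax, std::max(qMin, v)));
        }
    }
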
bbfd239e43526ff969699d3fc6110395edd2108b 26-Jul-2017 Miao Wang <miaowang@google.com> Implement Softmax, FullyConnected, and Concatenation.

- FLOAT32 and QUANT8 versions of FullyConnected and Concatenation are
covered.
- This change only covers the FLOAT32 version of Softmax.

Bug: 63905942
Test: mm
Test: end-to-end test with Inception V3 pass.
Test: end-to-end test with StripOCR quantized pass.

Change-Id: I9e001265cc31df406fdb3d685d10a1c61216c700
/frameworks/ml/nn/common/operations/Activation.cpp
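
A sketch of the FLOAT32 softmax introduced here (illustrative, not the
literal code). NNAPI's SOFTMAX takes a positive scaling factor beta, and
subtracting the running max before exponentiating keeps exp()
numerically stable:

    #include <algorithm>
    #include <cmath>

    // Softmax over one row of `count` elements; assumes count > 0.
    void softmaxFloat32(const float* in, float* out, int count, float beta) {
        float maxVal = *std::max_element(in, in + count);
        float sum = 0.f;
        for (int i = 0; i < count; ++i) {
            out[i] = std::exp((in[i] - maxVal) * beta);  // stable exponent
            sum += out[i];
        }
        for (int i = 0; i < count; ++i) {
            out[i] /= sum;  // normalize so the row sums to 1
        }
    }
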
27e9be3904b034e422ee9b6ab70b35ea994d2b39 03-Aug-2017 Miao Wang <miaowang@google.com> Initial implementation of the following quantized ops.

- CONV_QUANT8
- DEPTHWISE_CONV_QUANT8
- AVERAGE_POOL_QUANT8
- MAX_POOL_QUANT8
- LOGISTIC_QUANT8

Additionally, add functions to plumb through quantization
parameters.

Bug: 63905942
Test: mm
Test: end-to-end MobileNet quantized test pass

Change-Id: Ib2753c68bf2c51467ae1c158b45541bcfdf10789
/frameworks/ml/nn/common/operations/Activation.cpp
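
A conceptual sketch of LOGISTIC_QUANT8 (a production kernel would
typically use fixed-point arithmetic or a lookup table rather than float
math): dequantize with the plumbed-through input parameters, apply the
sigmoid, and requantize into the mandated output parameters
(scale = 1/256, zeroPoint = 0):

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    uint8_t logisticQuant8(uint8_t q, float inScale, int32_t inZeroPoint) {
        float x = inScale * (q - inZeroPoint);        // dequantize input
        float y = 1.f / (1.f + std::exp(-x));         // sigmoid in real domain
        int32_t out = std::lround(y * 256.f);         // requantize: scale 1/256, zeroPoint 0
        return static_cast<uint8_t>(std::min<int32_t>(255, std::max<int32_t>(0, out)));
    }
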
eb1f88846f147d1d80ee0d688fe4635b89a40ffa 26-Jul-2017 Miao Wang <miaowang@google.com> Implement the following operations for the Android NN runtime.

- CONV_FLOAT32
- DEPTHWISE_CONV_FLOAT32
- AVERAGE_POOL_FLOAT32
- L2_POOL_FLOAT32
- MAX_POOL_FLOAT32
- RELU_FLOAT32
- RELU6_FLOAT32
- TANH_FLOAT32
- LOGISTIC_FLOAT32

Bug: 63905942
Test: mm
Test: End-to-end test with MobileNet pass

Change-Id: I3eaa9794c7cdffd01792d26c5d6497c8d56d8605
/frameworks/ml/nn/common/operations/Activation.cpp
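
For reference, per-element sketches of the FLOAT32 activations listed
above (the real implementations operate on whole tensors, but the math
per element is this simple):

    #include <algorithm>
    #include <cmath>

    inline float relu(float x)     { return std::max(0.f, x); }
    inline float relu6(float x)    { return std::min(6.f, std::max(0.f, x)); }
    inline float logistic(float x) { return 1.f / (1.f + std::exp(-x)); }
    // TANH_FLOAT32 maps directly onto std::tanh from <cmath>.
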