Fix the NNAPI HAL documentation about ADD and MUL

- ADD and MUL have supported QUANT8_ASYMM since OMR1; it was a bug that
  they were missing from the HAL documentation.
  - Added the updated hash to current.txt for this ABI-preserving
  change.

Bug: 75459529
Test: mm
Merged-In: I492a7431c0dbb3dc5967c586d080eb134e380bf2
Change-Id: I492a7431c0dbb3dc5967c586d080eb134e380bf2
(cherry picked from commit f62984027c)
Author: Miao Wang
Date: 2018-03-23 14:50:59 -07:00
Committed by: Michael Butler
parent 7ed6135471
commit 9237ae8889
2 changed files with 3 additions and 1 deletion

current.txt

@@ -248,5 +248,5 @@ c8bc853546dd55584611def2a9fa1d99f657e3366c976d2f60fe6b8aa6d2cb87 android.hardwar
 # Future changes to HALs
 5804ca86611d72e5481f022b3a0c1b334217f2e4988dad25730c42af2d1f4d1c android.hardware.neuralnetworks@1.0::IDevice
-088b30a9c9ce27bc955b08a03c38c208f8f65b51133053c7656c875479801b99 android.hardware.neuralnetworks@1.0::types
+08ae9fc24f21f809e9b8501dfbc803662fcd6a8d8e1fb71d9dd7c0c4c6f5d556 android.hardware.neuralnetworks@1.0::types

neuralnetworks/1.0/types.hal

@@ -84,6 +84,7 @@ enum OperationType : int32_t {
 * output.dimension = {5, 4, 3, 2}
 *
 * Supported tensor types: {@link OperandType::TENSOR_FLOAT32}
+* {@link OperandType::TENSOR_QUANT8_ASYMM}
 * Supported tensor rank: up to 4
 *
 * Inputs:
@@ -645,6 +646,7 @@ enum OperationType : int32_t {
 * input operands. It starts with the trailing dimensions, and works its way forward.
 *
 * Supported tensor types: {@link OperandType::TENSOR_FLOAT32}
+* {@link OperandType::TENSOR_QUANT8_ASYMM}
 * Supported tensor rank: up to 4
 *
 * Inputs:
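
For context, a minimal sketch of what the corrected documentation describes: building an ADD
over two TENSOR_QUANT8_ASYMM tensors through the NNAPI NDK API in android/NeuralNetworks.h.
This is not part of the change itself; the shape, scale, and zero point are illustrative
assumptions, and error checking is omitted for brevity. The same pattern applies to MUL.

#include <android/NeuralNetworks.h>
#include <stddef.h>
#include <stdint.h>

// Builds a model containing a single quantized ADD (assumed example values).
void build_quantized_add(ANeuralNetworksModel** outModel) {
    ANeuralNetworksModel* model = NULL;
    ANeuralNetworksModel_create(&model);

    // Hypothetical shape and quantization parameters, for illustration only.
    uint32_t dims[4] = {1, 2, 2, 1};
    ANeuralNetworksOperandType quantTensor = {
        .type = ANEURALNETWORKS_TENSOR_QUANT8_ASYMM,
        .dimensionCount = 4,
        .dimensions = dims,
        .scale = 0.5f,
        .zeroPoint = 128,
    };
    ANeuralNetworksOperandType activationScalar = {
        .type = ANEURALNETWORKS_INT32,
        .dimensionCount = 0,
        .dimensions = NULL,
        .scale = 0.0f,
        .zeroPoint = 0,
    };

    // Operand indices: 0 and 1 are the quantized inputs, 2 is the fused
    // activation scalar, 3 is the quantized output.
    ANeuralNetworksModel_addOperand(model, &quantTensor);       // 0
    ANeuralNetworksModel_addOperand(model, &quantTensor);       // 1
    ANeuralNetworksModel_addOperand(model, &activationScalar);  // 2
    ANeuralNetworksModel_addOperand(model, &quantTensor);       // 3

    // ADD (like MUL) takes a FuseCode as its third input.
    int32_t fuseNone = ANEURALNETWORKS_FUSED_NONE;
    ANeuralNetworksModel_setOperandValue(model, 2, &fuseNone, sizeof(fuseNone));

    uint32_t addInputs[3] = {0, 1, 2};
    uint32_t addOutputs[1] = {3};
    ANeuralNetworksModel_addOperation(model, ANEURALNETWORKS_ADD,
                                      3, addInputs, 1, addOutputs);

    // Operands 0 and 1 are the model inputs; operand 3 is the model output.
    ANeuralNetworksModel_identifyInputsAndOutputs(model, 2, addInputs, 1, addOutputs);
    ANeuralNetworksModel_finish(model);
    *outModel = model;
}

The fused-activation scalar (operand 2) is set at build time here, so only the two quantized
tensors remain as model inputs to be supplied at execution.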