
Release 19.02

@TelmoARM released this 08 Mar 10:03 · 6 commits to branches/armnn_18_11 since this release

New Features:

  • Maximum operator support for CpuRef and CpuAcc backends.
  • Minimum operator support for CpuRef, CpuAcc and GpuAcc backends.
  • Maximum operator support for TensorFlow parser.
  • Pad operator support for TensorFlow parser.
  • ExpandDims operator support for TensorFlow parser.
  • Sub operator support for TensorFlow parser.
  • BatchToSpace operator support for GpuAcc backend.
  • StridedSlice operator support for CpuRef, GpuAcc and CpuAcc backends.
  • SpaceToBatchNd operator support for GpuAcc backend. Note: some padding configurations are currently not interpreted correctly.
  • Greater operator support for CpuRef, GpuAcc and CpuAcc backends.
  • Greater operator support for TensorFlow parser.
  • Equal operator support for CpuRef backend.
  • Equal operator support for TensorFlow parser.
  • AddN operator support for TensorFlow parser.
  • Split operator support for TensorFlow parser.
  • Reciprocal of square root (Rsqrt) operator support for CpuRef backend.
  • Mean operator support for TensorFlow parser.
  • ResizeBilinear operator support for CpuAcc backend.
  • Logistic support for TensorFlow Lite parser.
  • Logistic support for GpuAcc backend.
  • Gather operator support for CpuRef backend.
  • Gather operator support for TensorFlow parser.
  • TensorFlow Lite parser support for BatchToSpace operator.
  • TensorFlow Lite parser support for Maximum operator.
  • TensorFlow Lite parser support for Minimum operator.
  • TensorFlow Lite parser support for ResizeBilinear operator.
  • TensorFlow Lite parser support for SpaceToBatch operator.
  • TensorFlow Lite parser support for StridedSlice operator.
  • TensorFlow Lite parser support for Sub operator.
  • TensorFlow Lite parser support for concatenation on tensors with rank other than 4.
  • TensorFlow Lite parser support for Detection Post Process.
  • TensorFlow Lite parser support for Reciprocal of square root (Rsqrt).
  • Detection Post Process custom operator Reference implementation added.
  • Support for Serialization / Deserialization of the following ArmNN layers (see the example sketch after this list):
    • Activation
    • Addition
    • Constant
    • Convolution2d
    • DepthwiseConvolution2d
    • FullyConnected
    • Multiplication
    • Permute
    • Pooling2d
    • Reshape
    • Softmax
    • SpaceToBatchNd
  • New executable to convert networks from TensorFlow Protocol Buffers to the ArmNN format.
  • New C++ Quantization API, supported layers are:
    • Input
    • Output
    • Addition
    • Activation
    • BatchNormalization
    • FullyConnected
    • Convolution2d
    • DepthwiseConvolution2d
    • Softmax
    • Permute
    • Constant
    • StridedSlice
    • Splitter
    • Pooling2d
    • Reshape
    • Merger
    • SpaceToBatch
    • ResizeBilinear
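
The serialization support listed above can be exercised end-to-end roughly as in the following sketch. This is illustrative only and not code from the release: the armnnSerializer / armnnDeserializer component names, header paths and method signatures (Serialize, SaveSerializedToStream, CreateNetworkFromBinary) are assumptions and should be checked against this release's headers.

```cpp
// Sketch: serialize a small network to the ArmNN format and load it back.
// Component names and signatures are assumed; verify against the 19.02 headers.
#include <armnn/INetwork.hpp>
#include <armnn/Descriptors.hpp>
#include <armnnSerializer/ISerializer.hpp>
#include <armnnDeserializer/IDeserializer.hpp>

#include <cstdint>
#include <fstream>
#include <iterator>
#include <vector>

int main()
{
    // Build a trivial network: Input -> Softmax -> Output.
    armnn::INetworkPtr network = armnn::INetwork::Create();
    armnn::IConnectableLayer* input   = network->AddInputLayer(0);
    armnn::IConnectableLayer* softmax = network->AddSoftmaxLayer(armnn::SoftmaxDescriptor(), "softmax");
    armnn::IConnectableLayer* output  = network->AddOutputLayer(0);

    armnn::TensorInfo info({1, 10}, armnn::DataType::Float32);
    input->GetOutputSlot(0).SetTensorInfo(info);
    softmax->GetOutputSlot(0).SetTensorInfo(info);
    input->GetOutputSlot(0).Connect(softmax->GetInputSlot(0));
    softmax->GetOutputSlot(0).Connect(output->GetInputSlot(0));

    // Serialize the network to a file.
    auto serializer = armnnSerializer::ISerializer::Create();
    serializer->Serialize(*network);
    std::ofstream out("softmax.armnn", std::ios::binary);
    serializer->SaveSerializedToStream(out);
    out.close();

    // Deserialize the same file back into an INetwork.
    std::ifstream in("softmax.armnn", std::ios::binary);
    std::vector<uint8_t> blob(std::istreambuf_iterator<char>(in), {});
    auto deserializer = armnnDeserializer::IDeserializer::Create();
    armnn::INetworkPtr restored = deserializer->CreateNetworkFromBinary(blob);
    return restored ? 0 : 1;
}
```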

Public API Changes:

  • Support for the Boolean data type, specified as an 8-bit unsigned integer where zero (all bits off) represents false and any non-zero value (any bits on) represents true. See the Boolean tensor sketch after this list.
  • AddRsqrtLayer() method added to the graph builder API (see the graph-builder sketch after this list).
  • The profiling event now uses BackendId instead of Compute to identify the backend. BackendId is a wrapper class for the string that identifies a backend, and it is provided by the backend itself, rather than being statically enumerated like Compute. See the backend-selection sketch after this list.
  • Added a new method, OptimizeSubGraph, to the backend interface that allows backends to apply their specific optimizations to a given sub-graph.
  • The old mechanism, whereby backends provide a list of optimizations to the Optimizer through the GetOptimizations method, is still in place for backward compatibility, but it is now deprecated and will be removed in a future release.
  • Added the new interface class INetworkQuantizer for the Quantization API, exposing two methods (see the quantizer sketch after this list):
    • OverrideInputRange: allows the caller to replace the quantization range for a specific input layer.
    • ExportNetwork: returns the quantized version of the loaded network.
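
The sketch below illustrates how the new Boolean data type is stored. It is only an illustration of the "zero is false, non-zero is true" convention; the enum value armnn::DataType::Boolean is assumed here.

```cpp
// Illustrative sketch only: how a Boolean tensor's storage is interpreted.
// armnn::DataType::Boolean is assumed to be the enum value for this data type.
#include <armnn/Types.hpp>
#include <armnn/Tensor.hpp>

#include <cstdint>

int main()
{
    // A 1x4 Boolean tensor; each element is an 8-bit unsigned integer.
    armnn::TensorInfo boolInfo({1, 4}, armnn::DataType::Boolean);

    // Backing storage, e.g. the output of a Greater or Equal layer:
    // zero means false, any non-zero value means true.
    uint8_t data[4] = {0, 1, 255, 0};   // false, true, true, false
    armnn::ConstTensor boolTensor(boolInfo, data);

    return boolTensor.GetNumElements() == 4 ? 0 : 1;
}
```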
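
A minimal sketch of building a graph with the new AddRsqrtLayer() method. The exact signature (a single optional layer-name argument) is assumed and should be checked against armnn/INetwork.hpp.

```cpp
// Sketch: Input -> Rsqrt -> Output graph using the graph builder API.
#include <armnn/INetwork.hpp>

int main()
{
    armnn::INetworkPtr network = armnn::INetwork::Create();

    armnn::IConnectableLayer* input  = network->AddInputLayer(0, "input");
    armnn::IConnectableLayer* rsqrt  = network->AddRsqrtLayer("rsqrt");   // y = 1 / sqrt(x)
    armnn::IConnectableLayer* output = network->AddOutputLayer(0, "output");

    armnn::TensorInfo info({1, 16}, armnn::DataType::Float32);
    input->GetOutputSlot(0).SetTensorInfo(info);
    rsqrt->GetOutputSlot(0).SetTensorInfo(info);

    input->GetOutputSlot(0).Connect(rsqrt->GetInputSlot(0));
    rsqrt->GetOutputSlot(0).Connect(output->GetInputSlot(0));
    return 0;
}
```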
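
The following sketch shows BackendId used as a string-backed identifier when expressing backend preferences for armnn::Optimize. The backend names are the standard "CpuAcc", "GpuAcc" and "CpuRef" identifiers; the surrounding code is illustrative only.

```cpp
// Sketch: BackendId wraps a string provided by the backend itself, so new
// backends can be referenced without extending a static Compute enumeration.
#include <armnn/BackendId.hpp>
#include <armnn/INetwork.hpp>
#include <armnn/IRuntime.hpp>

#include <vector>

int main()
{
    // Backend preferences, in order; each string converts to a BackendId.
    std::vector<armnn::BackendId> preferences = {"CpuAcc", "CpuRef"};

    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(armnn::IRuntime::CreationOptions());

    // Trivial Input -> Output network, just to have something to optimize.
    armnn::INetworkPtr network = armnn::INetwork::Create();
    armnn::IConnectableLayer* in  = network->AddInputLayer(0);
    armnn::IConnectableLayer* out = network->AddOutputLayer(0);
    in->GetOutputSlot(0).SetTensorInfo(armnn::TensorInfo({1, 4}, armnn::DataType::Float32));
    in->GetOutputSlot(0).Connect(out->GetInputSlot(0));

    armnn::IOptimizedNetworkPtr optimized =
        armnn::Optimize(*network, preferences, runtime->GetDeviceSpec());
    return optimized ? 0 : 1;
}
```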
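
A rough sketch of how the new INetworkQuantizer interface might be used. Only the two method names come from the notes above; the header path, the Create() factory and the parameter list of OverrideInputRange (binding id, min, max) are assumptions made for illustration.

```cpp
// Sketch of the Quantization API; names other than OverrideInputRange and
// ExportNetwork are assumptions, verify against the release headers.
#include <armnn/INetwork.hpp>
#include <armnnQuantizer/INetworkQuantizer.hpp>   // assumed header path

int main()
{
    // Build (or load) a float32 network first.
    armnn::INetworkPtr network = armnn::INetwork::Create();
    armnn::IConnectableLayer* input  = network->AddInputLayer(0, "input");
    armnn::IConnectableLayer* output = network->AddOutputLayer(0, "output");
    input->GetOutputSlot(0).SetTensorInfo(armnn::TensorInfo({1, 8}, armnn::DataType::Float32));
    input->GetOutputSlot(0).Connect(output->GetInputSlot(0));

    // Create a quantizer for the network (assumed factory signature).
    armnn::INetworkQuantizerPtr quantizer = armnn::INetworkQuantizer::Create(network.get());

    // OverrideInputRange: replace the quantization range for input layer 0
    // (assumed parameters: binding id, min, max).
    quantizer->OverrideInputRange(0, -1.0f, 1.0f);

    // ExportNetwork: obtain the quantized version of the loaded network.
    armnn::INetworkPtr quantized = quantizer->ExportNetwork();
    return quantized ? 0 : 1;
}
```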

Known issues:

  • Large graphs with many branches and joins can take an excessively long time to load, or can cause a software hang while loading into ArmNN. This issue affects versions of ArmNN from 18.11 onwards. We are continuing to investigate and will fix the problem in a future release. Models known to be affected include Inception v4 and ResNet v2 101.

  • The Merger layer does not work on the GpuAcc or CpuAcc backends when used with 8-bit quantized data where the tensors to be merged have different quantization parameters. This is known to affect quantized Mobilenet-SSD models and some quantized Mobilenet v2 models.