
Advanced users • Re: PI5 and easy AI/CV/LLM

While checking the Intel Arc GPU for binary math support, I spotted OpenVINO, which says it has Raspberry Pi support.
https://docs.openvino.ai/2024/home.html
It took some time to compile even on this Pi5 - hint: close any memory-hogging browser ;)
Code:

./hello_query_device
[ INFO ] Build ................................. 2024.1.0-14826-b520763404f
[ INFO ]
[ INFO ] Available devices:
[ INFO ] CPU
[ INFO ] SUPPORTED_PROPERTIES:
[ INFO ] Immutable: AVAILABLE_DEVICES : ""
[ INFO ] Immutable: RANGE_FOR_ASYNC_INFER_REQUESTS : 1 1 1
[ INFO ] Immutable: RANGE_FOR_STREAMS : 1 4
[ INFO ] Immutable: EXECUTION_DEVICES : CPU
[ INFO ] Immutable: FULL_DEVICE_NAME : ARM CPU
[ INFO ] Immutable: OPTIMIZATION_CAPABILITIES : FP32 FP16 INT8 BIN EXPORT_IMPORT
[ INFO ] Immutable: DEVICE_TYPE : integrated
[ INFO ] Immutable: DEVICE_ARCHITECTURE : arm64
[ INFO ] Mutable: NUM_STREAMS : 1
[ INFO ] Mutable: AFFINITY : CORE
[ INFO ] Mutable: INFERENCE_NUM_THREADS : 0
[ INFO ] Mutable: PERF_COUNT : NO
[ INFO ] Mutable: INFERENCE_PRECISION_HINT : f16
[ INFO ] Mutable: PERFORMANCE_HINT : LATENCY
[ INFO ] Mutable: EXECUTION_MODE_HINT : PERFORMANCE
[ INFO ] Mutable: PERFORMANCE_HINT_NUM_REQUESTS : 0
[ INFO ] Mutable: ENABLE_CPU_PINNING : YES
[ INFO ] Mutable: SCHEDULING_CORE_TYPE : ANY_CORE
[ INFO ] Mutable: MODEL_DISTRIBUTION_POLICY : ""
[ INFO ] Mutable: ENABLE_HYPER_THREADING : YES
[ INFO ] Mutable: DEVICE_ID : ""
[ INFO ] Mutable: CPU_DENORMALS_OPTIMIZATION : NO
[ INFO ] Mutable: LOG_LEVEL : LOG_NONE
[ INFO ] Mutable: CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE : 1
[ INFO ] Mutable: DYNAMIC_QUANTIZATION_GROUP_SIZE : 0
[ INFO ] Mutable: KV_CACHE_PRECISION : f16
[ INFO ]
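The same query can be done from Python. A minimal sketch using the 2024 openvino package - these are the documented Core calls, though the device list will differ on other boards:

Code:

import openvino as ov

core = ov.Core()

# List the inference devices OpenVINO can see (just "CPU" on a Pi5)
print("Available devices:", core.available_devices)

for device in core.available_devices:
    # FULL_DEVICE_NAME and OPTIMIZATION_CAPABILITIES are the same
    # read-only properties that hello_query_device prints above
    name = core.get_property(device, "FULL_DEVICE_NAME")
    caps = core.get_property(device, "OPTIMIZATION_CAPABILITIES")
    print(device, "-", name, "- capabilities:", caps)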
The important info: you can speed up Pi5 ML apps by moving to Binary Neural Networks, which need bit-level math.

Code:

OPTIMIZATION_CAPABILITIES : FP32 FP16 INT8 BIN EXPORT_IMPORT
Most LLMs are compiled for FP16, but it has been shown that INT8, binary, and ternary math can be used for ML.
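For the curious, this is roughly why binary nets only need bit-level math: with weights and activations constrained to +1/-1 you can pack 64 of them into one machine word, and a dot product collapses into XNOR plus popcount. A toy sketch in plain Python - my own illustration, not OpenVINO code:

Code:

import random

N = 64  # vector length, fits one machine word

# Random +1/-1 vectors
a = [random.choice((-1, 1)) for _ in range(N)]
b = [random.choice((-1, 1)) for _ in range(N)]

# Reference dot product with ordinary multiplies
ref = sum(x * y for x, y in zip(a, b))

# Pack +1 -> bit 1, -1 -> bit 0
def pack(v):
    return sum(1 << i for i, x in enumerate(v) if x == 1)

pa, pb = pack(a), pack(b)

# XNOR marks the positions where the two vectors agree;
# dot = matches - mismatches = 2 * matches - N
matches = bin(~(pa ^ pb) & ((1 << N) - 1)).count("1")
print(2 * matches - N == ref)  # True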

There is still plenty of room to optimize code written for big, expensive GPUs, if you have the source.
Optimizing for ARM64 SIMD/NEON can be done too, by someone who knows how - not me yet :lol:

Checking your GPU/NPU for bit/INT8 capabilities is worth doing.
Some low-end edge SoCs are coming with NPUs etc. now.

This OpenVINO library package supports TensorFlow, TensorFlow Lite, ONNX, Paddle, PyTorch, and its own IR format.
Is it any faster than other libs - no idea.
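Loading a model from one of those formats is only a few lines. A sketch assuming a static-shape model.onnx on disk (the file name is just a placeholder), using the same INFERENCE_PRECISION_HINT that shows up in the device query above:

Code:

import numpy as np
import openvino as ov

core = ov.Core()

# Read an ONNX model directly; TF, TFLite, Paddle and IR load the same way
model = core.read_model("model.onnx")  # placeholder path

# Compile for the Pi5's ARM CPU plugin, hinting f16 as the log above shows
compiled = core.compile_model(model, "CPU",
                              {"INFERENCE_PRECISION_HINT": "f16"})

# Dummy input matching the model's first input shape
shape = list(compiled.input(0).shape)
data = np.random.rand(*shape).astype(np.float32)

result = compiled([data])[compiled.output(0)]
print(result.shape)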

Support for Python notebooks - an easy way to try it?
https://docs.openvino.ai/2024/learn-ope ... ython.html
Have not tried Jupyter yet ;)

Statistics: Posted by Gavinmc42 — Sun Mar 24, 2024 12:47 am


