Tizen Native API
7.0
Image Classification, Object Detection, Face and Facial landmark detection.
Required Header
#include <mv_inference.h>
Related Features
This API is related with the following features:
- http://tizen.org/feature/vision.inference
- http://tizen.org/feature/vision.inference.image
- http://tizen.org/feature/vision.inference.face
It is recommended to use features in your application for reliability.
You can check if the device supports the related features for this API by using System Information, and control your application's actions accordingly.
To ensure your application is only running on devices with specific features, please define the features in your manifest file using the manifest editor in the SDK.
More details on using features in your application can be found in Feature Element.
Overview
Media Vision Inference provides the mv_inference_h handle to perform Image Classification, Object Detection, Face and Facial Landmark detection. An inference handle should be created with mv_inference_create() and destroyed with mv_inference_destroy(). The mv_inference_h should be configured by calling mv_inference_configure(). After configuration, the mv_inference_h should be prepared by calling mv_inference_prepare(), which loads models and sets required parameters. After preparation, mv_inference_image_classify() can be called to classify images on an mv_source_h; the callback mv_inference_image_classified_cb() will be invoked to process the results. The module contains the mv_inference_object_detect() function to detect objects on an mv_source_h, and mv_inference_object_detected_cb() to process object detection results. The module also contains the mv_inference_face_detect() and mv_inference_facial_landmark_detect() functions to detect faces and their landmarks on an mv_source_h, and the callbacks mv_inference_face_detected_cb() and mv_inference_facial_landmark_detected_cb() to process detection results.
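A minimal sketch of this lifecycle for image classification is shown below. The model file paths are hypothetical, the source handle is assumed to be already filled with image data, and error handling is reduced to early bail-out:

```c
#include <stdio.h>
#include <mv_common.h>
#include <mv_inference.h>

/* Invoked once per mv_inference_image_classify() call with the results. */
static void _on_classified(mv_source_h source, int number_of_classes,
                           const int *indices, const char **names,
                           const float *confidences, void *user_data)
{
    for (int i = 0; i < number_of_classes; i++)
        printf("class %d (%s): %.3f\n", indices[i], names[i], confidences[i]);
}

int classify(mv_source_h source)
{
    mv_inference_h infer = NULL;
    mv_engine_config_h cfg = NULL;
    int ret = mv_inference_create(&infer);

    if (ret == MEDIA_VISION_ERROR_NONE)
        ret = mv_create_engine_config(&cfg);
    if (ret == MEDIA_VISION_ERROR_NONE)
        ret = mv_engine_config_set_string_attribute(cfg,
                  MV_INFERENCE_MODEL_WEIGHT_FILE_PATH,
                  "/usr/share/model/model.tflite"); /* hypothetical path */
    if (ret == MEDIA_VISION_ERROR_NONE)
        ret = mv_engine_config_set_string_attribute(cfg,
                  MV_INFERENCE_MODEL_META_FILE_PATH,
                  "/usr/share/model/meta.json");    /* hypothetical path */
    if (ret == MEDIA_VISION_ERROR_NONE)
        ret = mv_inference_configure(infer, cfg);
    if (ret == MEDIA_VISION_ERROR_NONE)
        ret = mv_inference_prepare(infer);          /* loads the model */
    if (ret == MEDIA_VISION_ERROR_NONE)
        ret = mv_inference_image_classify(source, infer, NULL,
                                          _on_classified, NULL);

    if (cfg)
        mv_destroy_engine_config(cfg);
    if (infer)
        mv_inference_destroy(infer);
    return ret;
}
```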
Functions | |
int | mv_inference_create (mv_inference_h *infer) |
Creates inference handle. | |
int | mv_inference_destroy (mv_inference_h infer) |
Destroys inference handle and releases all its resources. | |
int | mv_inference_configure (mv_inference_h infer, mv_engine_config_h engine_config) |
Configures the network of the inference. | |
int | mv_inference_prepare (mv_inference_h infer) |
Prepares inference. | |
int | mv_inference_foreach_supported_engine (mv_inference_h infer, mv_inference_supported_engine_cb callback, void *user_data) |
Traverses the list of supported engines for inference. | |
int | mv_inference_image_classify (mv_source_h source, mv_inference_h infer, mv_rectangle_s *roi, mv_inference_image_classified_cb classified_cb, void *user_data) |
Performs image classification on the source. | |
int | mv_inference_object_detect (mv_source_h source, mv_inference_h infer, mv_inference_object_detected_cb detected_cb, void *user_data) |
Performs object detection on the source. | |
int | mv_inference_face_detect (mv_source_h source, mv_inference_h infer, mv_inference_face_detected_cb detected_cb, void *user_data) |
Performs face detection on the source. | |
int | mv_inference_facial_landmark_detect (mv_source_h source, mv_inference_h infer, mv_rectangle_s *roi, mv_inference_facial_landmark_detected_cb detected_cb, void *user_data) |
Performs facial landmarks detection on the source. | |
int | mv_inference_pose_landmark_detect (mv_source_h source, mv_inference_h infer, mv_rectangle_s *roi, mv_inference_pose_landmark_detected_cb detected_cb, void *user_data) |
Performs pose landmarks detection on the source. | |
int | mv_inference_pose_get_number_of_poses (mv_inference_pose_result_h result, int *number_of_poses) |
Gets the number of poses. | |
int | mv_inference_pose_get_number_of_landmarks (mv_inference_pose_result_h result, int *number_of_landmarks) |
Gets the number of landmarks per pose. | |
int | mv_inference_pose_get_landmark (mv_inference_pose_result_h result, int pose_index, int pose_part, mv_point_s *location, float *score) |
Gets landmark location of a part of a pose. | |
int | mv_inference_pose_get_label (mv_inference_pose_result_h result, int pose_index, int *label) |
Gets a label of a pose. | |
int | mv_pose_create (mv_pose_h *pose) |
Creates pose handle. | |
int | mv_pose_destroy (mv_pose_h pose) |
Destroys pose handle and releases all its resources. | |
int | mv_pose_set_from_file (mv_pose_h pose, const char *motion_capture_file_path, const char *motion_mapping_file_path) |
Sets a motion capture file and its pose mapping file to the pose. | |
int | mv_pose_compare (mv_pose_h pose, mv_inference_pose_result_h action, int parts, float *score) |
Compares an action pose with the pose which is set by mv_pose_set_from_file(). | |
Typedefs | |
typedef bool(* | mv_inference_supported_engine_cb )(const char *engine, bool supported, void *user_data) |
Called to provide information for supported engines for inference. | |
typedef void(* | mv_inference_image_classified_cb )(mv_source_h source, int number_of_classes, const int *indices, const char **names, const float *confidences, void *user_data) |
Called when source is classified. | |
typedef void(* | mv_inference_object_detected_cb )(mv_source_h source, int number_of_objects, const int *indices, const char **names, const float *confidences, const mv_rectangle_s *locations, void *user_data) |
Called when objects in source are detected. | |
typedef void(* | mv_inference_face_detected_cb )(mv_source_h source, int number_of_faces, const float *confidences, const mv_rectangle_s *locations, void *user_data) |
Called when faces in source are detected. | |
typedef void(* | mv_inference_facial_landmark_detected_cb )(mv_source_h source, int number_of_landmarks, const mv_point_s *locations, void *user_data) |
Called when facial landmarks in source are detected. | |
typedef void(* | mv_inference_pose_landmark_detected_cb )(mv_source_h source, mv_inference_pose_result_h locations, void *user_data) |
Called when poses in source are detected. | |
typedef void * | mv_inference_h |
The inference handle. | |
typedef void * | mv_inference_pose_result_h |
The inference pose result handle. | |
typedef void * | mv_pose_h |
The pose handle. | |
Defines | |
#define | MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH "MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH" |
Defines MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH to set inference model's configuration file attribute of the engine configuration. | |
#define | MV_INFERENCE_MODEL_WEIGHT_FILE_PATH "MV_INFERENCE_MODEL_WEIGHT_FILE_PATH" |
Defines MV_INFERENCE_MODEL_WEIGHT_FILE_PATH to set inference model's weight file attribute of the engine configuration. | |
#define | MV_INFERENCE_MODEL_USER_FILE_PATH "MV_INFERENCE_MODEL_USER_FILE_PATH" |
Defines MV_INFERENCE_MODEL_USER_FILE_PATH to set inference model's category file attribute of the engine configuration. | |
#define | MV_INFERENCE_MODEL_META_FILE_PATH "MV_INFERENCE_MODEL_META_FILE_PATH" |
Defines MV_INFERENCE_MODEL_META_FILE_PATH to set inference model's metadata file attribute of the engine configuration. | |
#define | MV_INFERENCE_MODEL_MEAN_VALUE "MV_INFERENCE_MODEL_MEAN_VALUE" |
Defines MV_INFERENCE_MODEL_MEAN_VALUE to set inference model's mean attribute of the engine configuration. | |
#define | MV_INFERENCE_MODEL_STD_VALUE "MV_INFERENCE_MODEL_STD_VALUE" |
Defines MV_INFERENCE_MODEL_STD_VALUE to set an input image's standard deviation attribute of the engine configuration. | |
#define | MV_INFERENCE_BACKEND_TYPE "MV_INFERENCE_BACKEND_TYPE" |
Defines MV_INFERENCE_BACKEND_TYPE to set the type used for inference attribute of the engine configuration. | |
#define | MV_INFERENCE_TARGET_TYPE "MV_INFERENCE_TARGET_TYPE" |
Defines MV_INFERENCE_TARGET_TYPE to set the type used for device running attribute of the engine configuration. | |
#define | MV_INFERENCE_TARGET_DEVICE_TYPE "MV_INFERENCE_TARGET_DEVICE_TYPE" |
Defines MV_INFERENCE_TARGET_DEVICE_TYPE to set the type used for device running attribute of the engine configuration. | |
#define | MV_INFERENCE_INPUT_TENSOR_WIDTH "MV_INFERENCE_INPUT_TENSOR_WIDTH" |
Defines MV_INFERENCE_INPUT_TENSOR_WIDTH to set the width of input tensor. | |
#define | MV_INFERENCE_INPUT_TENSOR_HEIGHT "MV_INFERENCE_INPUT_TENSOR_HEIGHT" |
Defines MV_INFERENCE_INPUT_TENSOR_HEIGHT to set the height of input tensor. | |
#define | MV_INFERENCE_INPUT_TENSOR_CHANNELS "MV_INFERENCE_INPUT_TENSOR_CHANNELS" |
Defines MV_INFERENCE_INPUT_TENSOR_CHANNELS to set the number of channels of the input tensor, for example 3 for the RGB colorspace. | |
#define | MV_INFERENCE_INPUT_DATA_TYPE "MV_INFERENCE_INPUT_DATA_TYPE" |
Defines MV_INFERENCE_INPUT_DATA_TYPE to set data type of input tensor. | |
#define | MV_INFERENCE_INPUT_NODE_NAME "MV_INFERENCE_INPUT_NODE_NAME" |
Defines MV_INFERENCE_INPUT_NODE_NAME to set the input node name. | |
#define | MV_INFERENCE_OUTPUT_NODE_NAMES "MV_INFERENCE_OUTPUT_NODE_NAMES" |
Defines MV_INFERENCE_OUTPUT_NODE_NAMES to set the output node names. | |
#define | MV_INFERENCE_OUTPUT_MAX_NUMBER "MV_INFERENCE_OUTPUT_MAX_NUMBER" |
Defines MV_INFERENCE_OUTPUT_MAX_NUMBER to set the maximum number of output attributes of the engine configuration. | |
#define | MV_INFERENCE_CONFIDENCE_THRESHOLD "MV_INFERENCE_CONFIDENCE_THRESHOLD" |
Defines MV_INFERENCE_CONFIDENCE_THRESHOLD to set the threshold value for the confidence of inference results. |
Define Documentation
#define MV_INFERENCE_BACKEND_TYPE "MV_INFERENCE_BACKEND_TYPE" |
Defines MV_INFERENCE_BACKEND_TYPE to set the type used for inference attribute of the engine configuration.
Selects the engine used for neural network model inference. Possible values of the attribute are:
MV_INFERENCE_BACKEND_OPENCV,
MV_INFERENCE_BACKEND_TFLITE.
The default type is MV_INFERENCE_BACKEND_OPENCV.
- Since :
- 5.5
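The backend is an integer attribute on the engine configuration, so it can be set with mv_engine_config_set_int_attribute() before mv_inference_configure() is called. A minimal sketch:

```c
#include <mv_common.h>
#include <mv_inference.h>

/* Select the TensorFlow Lite backend on an existing engine configuration. */
static int select_tflite_backend(mv_engine_config_h cfg)
{
    return mv_engine_config_set_int_attribute(cfg,
               MV_INFERENCE_BACKEND_TYPE,
               MV_INFERENCE_BACKEND_TFLITE);
}
```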
#define MV_INFERENCE_CONFIDENCE_THRESHOLD "MV_INFERENCE_CONFIDENCE_THRESHOLD" |
Defines MV_INFERENCE_CONFIDENCE_THRESHOLD to set the threshold value for the confidence of inference results.
- Deprecated:
- Deprecated since 6.5. Use MV_INFERENCE_MODEL_META_FILE_PATH instead.
Default value is 0.6 and its range is between 0.0 and 1.0.
- Since :
- 5.5
#define MV_INFERENCE_INPUT_DATA_TYPE "MV_INFERENCE_INPUT_DATA_TYPE" |
Defines MV_INFERENCE_INPUT_DATA_TYPE to set data type of input tensor.
- Deprecated:
- Deprecated since 6.5. Use MV_INFERENCE_MODEL_META_FILE_PATH instead.
The data type of the input tensor can be changed according to a given weight file. Switches between Float32 and UInt8:
MV_INFERENCE_DATA_FLOAT32,
MV_INFERENCE_DATA_UINT8.
The default type is MV_INFERENCE_DATA_FLOAT32.
- Since :
- 6.0
#define MV_INFERENCE_INPUT_NODE_NAME "MV_INFERENCE_INPUT_NODE_NAME" |
Defines MV_INFERENCE_INPUT_NODE_NAME to set the input node name.
- Deprecated:
- Deprecated since 6.5. Use MV_INFERENCE_MODEL_META_FILE_PATH instead.
- Since :
- 5.5
#define MV_INFERENCE_INPUT_TENSOR_CHANNELS "MV_INFERENCE_INPUT_TENSOR_CHANNELS" |
Defines MV_INFERENCE_INPUT_TENSOR_CHANNELS to set the number of channels of the input tensor, for example 3 for the RGB colorspace.
- Deprecated:
- Deprecated since 6.5. Use MV_INFERENCE_MODEL_META_FILE_PATH instead.
- Since :
- 5.5
#define MV_INFERENCE_INPUT_TENSOR_HEIGHT "MV_INFERENCE_INPUT_TENSOR_HEIGHT" |
Defines MV_INFERENCE_INPUT_TENSOR_HEIGHT to set the height of input tensor.
- Deprecated:
- Deprecated since 6.5. Use MV_INFERENCE_MODEL_META_FILE_PATH instead.
- Since :
- 5.5
#define MV_INFERENCE_INPUT_TENSOR_WIDTH "MV_INFERENCE_INPUT_TENSOR_WIDTH" |
Defines MV_INFERENCE_INPUT_TENSOR_WIDTH to set the width of input tensor.
- Deprecated:
- Deprecated since 6.5. Use MV_INFERENCE_MODEL_META_FILE_PATH instead.
- Since :
- 5.5
#define MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH "MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH" |
Defines MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH to set inference model's configuration file attribute of the engine configuration.
Use this attribute to specify the path to the inference model's configuration file.
- Since :
- 5.5
#define MV_INFERENCE_MODEL_MEAN_VALUE "MV_INFERENCE_MODEL_MEAN_VALUE" |
Defines MV_INFERENCE_MODEL_MEAN_VALUE to set inference model's mean attribute of the engine configuration.
- Deprecated:
- Deprecated since 6.5. Use MV_INFERENCE_MODEL_META_FILE_PATH instead.
- Since :
- 5.5
#define MV_INFERENCE_MODEL_META_FILE_PATH "MV_INFERENCE_MODEL_META_FILE_PATH" |
Defines MV_INFERENCE_MODEL_META_FILE_PATH to set inference model's metadata file attribute of the engine configuration.
The file includes the inference model's metadata, such as input and output node names, the input tensor's width and height, and mean and standard deviation values for pre-processing.
- Since :
- 6.5
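Since 6.5, a single metadata file can stand in for the deprecated per-tensor attributes (node names, tensor dimensions, mean and standard deviation). A sketch, with a hypothetical file path:

```c
#include <mv_common.h>
#include <mv_inference.h>

/* Point the engine configuration at a model metadata file instead of
 * setting input/output node names and tensor dimensions one by one. */
static int use_meta_file(mv_engine_config_h cfg)
{
    return mv_engine_config_set_string_attribute(cfg,
               MV_INFERENCE_MODEL_META_FILE_PATH,
               "/usr/share/model/meta.json"); /* hypothetical path */
}
```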
#define MV_INFERENCE_MODEL_STD_VALUE "MV_INFERENCE_MODEL_STD_VALUE" |
Defines MV_INFERENCE_MODEL_STD_VALUE to set an input image's standard deviation attribute of the engine configuration.
- Deprecated:
- Deprecated since 6.5. Use MV_INFERENCE_MODEL_META_FILE_PATH instead.
- Since :
- 5.5
#define MV_INFERENCE_MODEL_USER_FILE_PATH "MV_INFERENCE_MODEL_USER_FILE_PATH" |
Defines MV_INFERENCE_MODEL_USER_FILE_PATH to set inference model's category file attribute of the engine configuration.
Use this attribute to specify the path to the inference model's category file.
- Since :
- 5.5
#define MV_INFERENCE_MODEL_WEIGHT_FILE_PATH "MV_INFERENCE_MODEL_WEIGHT_FILE_PATH" |
Defines MV_INFERENCE_MODEL_WEIGHT_FILE_PATH to set inference model's weight file attribute of the engine configuration.
Use this attribute to specify the path to the inference model's weight file.
- Since :
- 5.5
#define MV_INFERENCE_OUTPUT_MAX_NUMBER "MV_INFERENCE_OUTPUT_MAX_NUMBER" |
Defines MV_INFERENCE_OUTPUT_MAX_NUMBER to set the maximum number of output attributes of the engine configuration.
- Deprecated:
- Deprecated since 6.5. Use MV_INFERENCE_MODEL_META_FILE_PATH instead.
Default value is 5 and a value over 10 will be set to 10. A value under 1 will be set to 1.
- Since :
- 5.5
#define MV_INFERENCE_OUTPUT_NODE_NAMES "MV_INFERENCE_OUTPUT_NODE_NAMES" |
Defines MV_INFERENCE_OUTPUT_NODE_NAMES to set the output node names.
- Deprecated:
- Deprecated since 6.5. Use MV_INFERENCE_MODEL_META_FILE_PATH instead.
- Since :
- 5.5
#define MV_INFERENCE_TARGET_DEVICE_TYPE "MV_INFERENCE_TARGET_DEVICE_TYPE" |
Defines MV_INFERENCE_TARGET_DEVICE_TYPE to set the type used for device running attribute of the engine configuration.
Switches between CPU, GPU, or Custom:
MV_INFERENCE_TARGET_DEVICE_CPU,
MV_INFERENCE_TARGET_DEVICE_GPU,
MV_INFERENCE_TARGET_DEVICE_CUSTOM.
The default type is CPU.
- Since :
- 6.0
#define MV_INFERENCE_TARGET_TYPE "MV_INFERENCE_TARGET_TYPE" |
Defines MV_INFERENCE_TARGET_TYPE to set the type used for device running attribute of the engine configuration.
- Deprecated:
- Deprecated since 6.0. Use MV_INFERENCE_TARGET_DEVICE_TYPE instead.
Switches between CPU, GPU, or Custom:
MV_INFERENCE_TARGET_CPU (Deprecated),
MV_INFERENCE_TARGET_GPU (Deprecated),
MV_INFERENCE_TARGET_CUSTOM (Deprecated).
The default type is CPU.
- Since :
- 5.5
Typedef Documentation
typedef void(* mv_inference_face_detected_cb)(mv_source_h source, int number_of_faces, const float *confidences, const mv_rectangle_s *locations, void *user_data) |
Called when faces in source are detected.
This callback is invoked each time mv_inference_face_detect() is called to provide the results of face detection.
- Since :
- 5.5
- Remarks:
- The confidences and locations should not be released by the app. They can be used only in the callback. The number of elements in confidences and locations is equal to number_of_faces.
- Parameters:
-
[in] source The handle to the source of the media where faces were detected. source is the same object for which mv_inference_face_detect() was called. It should be released by calling mv_destroy_source() when it's not needed anymore. [in] number_of_faces The number of faces [in] confidences Confidences of the detected faces. [in] locations Locations of the detected faces. [in] user_data The user data passed from callback invoking code
- Precondition:
- Call mv_inference_face_detect() function to perform detection of the faces in source and to invoke this callback as a result
- See also:
- mv_inference_face_detect()
typedef void(* mv_inference_facial_landmark_detected_cb)(mv_source_h source, int number_of_landmarks, const mv_point_s *locations, void *user_data) |
Called when facial landmarks in source are detected.
This callback is invoked each time mv_inference_facial_landmark_detect() is called to provide the results of the landmark detection.
- Since :
- 5.5
- Remarks:
- The locations should not be released by the app. They can be used only in the callback. The number of elements in locations is equal to number_of_landmarks.
- Parameters:
-
[in] source The handle to the source of the media where landmarks were detected. source is the same object for which mv_inference_facial_landmark_detect() was called. It should be released by calling mv_destroy_source() when it's not needed anymore. [in] number_of_landmarks The number of landmarks [in] locations Locations of the detected facial landmarks. [in] user_data The user data passed from callback invoking code
- Precondition:
- Call mv_inference_facial_landmark_detect() function to perform detection of the facial landmarks in source and to invoke this callback as a result
- See also:
- mv_inference_facial_landmark_detect()
typedef void* mv_inference_h |
The inference handle.
- Since :
- 5.5
typedef void(* mv_inference_image_classified_cb)(mv_source_h source, int number_of_classes, const int *indices, const char **names, const float *confidences, void *user_data) |
Called when source is classified.
This callback is invoked each time mv_inference_image_classify() is called to provide the results of image classification.
- Since :
- 5.5
- Remarks:
- The indices, names, and confidences should not be released by the app. They can be used only in the callback. The number of elements in indices, names, and confidences is equal to number_of_classes.
- Parameters:
-
[in] source The handle to the source of the media where an image was classified. source is the same object for which mv_inference_image_classify() was called. It should be released by calling mv_destroy_source() when it's not needed anymore. [in] number_of_classes The number of classes [in] indices The indices of the classified image. [in] names Names corresponding to the indices. [in] confidences Each element is the confidence that the corresponding image belongs to the corresponding class. [in] user_data The user data passed from callback invoking code
- Precondition:
- Call mv_inference_image_classify() function to perform classification of the image and to invoke this callback as a result
- See also:
- mv_inference_image_classify()
typedef void(* mv_inference_object_detected_cb)(mv_source_h source, int number_of_objects, const int *indices, const char **names, const float *confidences, const mv_rectangle_s *locations, void *user_data) |
Called when objects in source are detected.
This callback is invoked each time mv_inference_object_detect() is called to provide the results of object detection.
- Since :
- 5.5
- Remarks:
- The indices, names, confidences, and locations should not be released by the app. They can be used only in the callback. The number of elements in indices, names, confidences, and locations is equal to number_of_objects.
- Parameters:
-
[in] source The handle to the source of the media where objects were detected. source is the same object for which mv_inference_object_detect() was called. It should be released by calling mv_destroy_source() when it's not needed anymore. [in] number_of_objects The number of objects [in] indices The indices of objects. [in] names Names corresponding to the indices. [in] confidences Confidences of the detected objects. [in] locations Locations of the detected objects. [in] user_data The user data passed from callback invoking code
- Precondition:
- Call mv_inference_object_detect() function to perform detection of the objects in source and to invoke this callback as a result
- See also:
- mv_inference_object_detect()
typedef void(* mv_inference_pose_landmark_detected_cb)(mv_source_h source, mv_inference_pose_result_h locations, void *user_data) |
Called when poses in source are detected.
This callback is invoked each time mv_inference_pose_landmark_detect() is called to provide the results of the pose landmark detection.
- Since :
- 6.0
- Remarks:
- The locations should not be released by the app. They can be used only in the callback.
- Parameters:
-
[in] source The handle to the source of the media where landmarks were detected. source is the same object for which mv_inference_pose_landmark_detect() was called. It should be released by calling mv_destroy_source() when it's not needed anymore. [in] locations Locations of the detected pose landmarks. [in] user_data The user data passed from callback invoking code
- See also:
- mv_inference_pose_landmark_detect()
typedef void* mv_inference_pose_result_h |
The inference pose result handle.
Contains information about the locations of detected landmarks for one or more poses.
- Since :
- 6.0
typedef bool(* mv_inference_supported_engine_cb)(const char *engine, bool supported, void *user_data) |
Called to provide information for supported engines for inference.
- Since :
- 5.5
- Parameters:
-
[in] engine The supported engine. The engine can be used only in the callback. To use outside, make a copy. [in] supported The flag whether the engine is supported or not [in] user_data The user data passed from mv_inference_foreach_supported_engine()
- Returns:
true to continue with the next iteration of the loop, otherwise false to break out of the loop
- Precondition:
- mv_inference_foreach_supported_engine()
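A minimal callback sketch that lists every engine and keeps iterating:

```c
#include <stdbool.h>
#include <stdio.h>
#include <mv_inference.h>

/* Prints each engine reported by mv_inference_foreach_supported_engine().
 * Returning true continues the iteration; false stops it. */
static bool _on_engine(const char *engine, bool supported, void *user_data)
{
    printf("%s: %s\n", engine, supported ? "supported" : "not supported");
    return true;
}

/* Usage: mv_inference_foreach_supported_engine(infer, _on_engine, NULL); */
```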
typedef void* mv_pose_h |
The pose handle.
- Since :
- 6.0
Enumeration Type Documentation
Enumeration for inference backend.
MV_INFERENCE_BACKEND_OPENCV: An open source computer vision and machine learning software library. (https://opencv.org/about/)
MV_INFERENCE_BACKEND_TFLITE: Google-introduced open source inference engine for embedded systems, which runs TensorFlow Lite models. (https://www.tensorflow.org/lite/guide/get_started)
MV_INFERENCE_BACKEND_ARMNN: Arm-introduced open source inference engine for CPUs, GPUs and NPUs, which enables efficient translation of existing neural network frameworks, such as TensorFlow, TensorFlow Lite and Caffe, allowing them to run efficiently without modification on embedded hardware. (https://developer.arm.com/ip-products/processors/machine-learning/arm-nn)
MV_INFERENCE_BACKEND_MLAPI: Samsung-introduced open source ML single API framework of NNStreamer, which runs various NN models via tensor filters of NNStreamer. (Deprecated since 7.0) (https://github.com/nnstreamer/nnstreamer)
MV_INFERENCE_BACKEND_ONE: Samsung-introduced open source inference engine called On-device Neural Engine, which performs inference of a given NN model on various devices, such as CPU, GPU, DSP and NPU. (https://github.com/Samsung/ONE)
- Since :
- 5.5
- See also:
- mv_inference_prepare()
- Enumerator:
MV_INFERENCE_BACKEND_NONE None
MV_INFERENCE_BACKEND_OPENCV OpenCV
MV_INFERENCE_BACKEND_TFLITE TensorFlow-Lite
MV_INFERENCE_BACKEND_ARMNN ARMNN (Since 6.0)
MV_INFERENCE_BACKEND_MLAPI ML Single API of NNStreamer (Deprecated since 7.0)
MV_INFERENCE_BACKEND_ONE On-device Neural Engine (Since 6.0)
MV_INFERENCE_BACKEND_NNTRAINER NNTrainer (Since 7.0)
MV_INFERENCE_BACKEND_SNPE SNPE Engine (Since 7.0)
MV_INFERENCE_BACKEND_MAX Backend MAX (Deprecated since 7.0)
Enumeration for human body parts.
- Since :
- 6.0
- Enumerator:
Enumeration for human pose landmark.
- Since :
- 6.0
- Enumerator:
Enumeration for inference target.
- Deprecated:
- Deprecated since 6.0. Use mv_inference_target_device_e instead.
- Since :
- 5.5
Function Documentation
int mv_inference_configure | ( | mv_inference_h | infer, |
mv_engine_config_h | engine_config | ||
) |
Configures the network of the inference.
Use this function to configure the inference network with the attributes set in engine_config.
- Since :
- 5.5
- Parameters:
-
[in] infer The handle to the inference [in] engine_config The handle to the engine configuration
- Returns:
0
on success, otherwise a negative error value
- Return values:
-
MEDIA_VISION_ERROR_NONE Successful MEDIA_VISION_ERROR_NOT_SUPPORTED Not supported MEDIA_VISION_ERROR_INVALID_PARAMETER Invalid parameter in engine_config MEDIA_VISION_ERROR_INVALID_PATH Invalid path of model data in engine_config
int mv_inference_create | ( | mv_inference_h * | infer | ) |
Creates inference handle.
Use this function to create an inference. After creation, the inference has to be prepared with the mv_inference_prepare() function, which prepares a network for the inference.
- Since :
- 5.5
- Remarks:
- If the app sets MV_INFERENCE_MODEL_CONFIGURATION_FILE_PATH, MV_INFERENCE_MODEL_WEIGHT_FILE_PATH, and MV_INFERENCE_MODEL_USER_FILE_PATH to media storage, then the media storage privilege http://tizen.org/privilege/mediastorage is needed.
If the app sets any of the paths mentioned in the previous sentence to external storage, then the external storage privilege http://tizen.org/privilege/externalstorage is needed.
If the required privileges aren't set properly, mv_inference_prepare() will return MEDIA_VISION_ERROR_PERMISSION_DENIED. - The infer should be released using mv_inference_destroy().
- Parameters:
-
[out] infer The handle to the inference to be created
- Returns:
0
on success, otherwise a negative error value
- Return values:
-
MEDIA_VISION_ERROR_NONE Successful MEDIA_VISION_ERROR_NOT_SUPPORTED Not supported MEDIA_VISION_ERROR_INVALID_PARAMETER Invalid parameter MEDIA_VISION_ERROR_OUT_OF_MEMORY Out of memory
int mv_inference_destroy | ( | mv_inference_h | infer | ) |
Destroys inference handle and releases all its resources.
- Since :
- 5.5
- Parameters:
-
[in] infer The handle to the inference to be destroyed
- Returns:
0
on success, otherwise a negative error value
- Return values:
-
MEDIA_VISION_ERROR_NONE Successful MEDIA_VISION_ERROR_NOT_SUPPORTED Not supported MEDIA_VISION_ERROR_INVALID_PARAMETER Invalid parameter
- Precondition:
- Create inference handle by using mv_inference_create()
- See also:
- mv_inference_create()
int mv_inference_face_detect | ( | mv_source_h | source, |
mv_inference_h | infer, | ||
mv_inference_face_detected_cb | detected_cb, | ||
void * | user_data | ||
) |
Performs face detection on the source.
Use this function to launch face detection. Each time mv_inference_face_detect() is called, detected_cb will receive a list of faces and their locations in the media source.
- Since :
- 5.5
- Remarks:
- This function is synchronous and may take considerable time to run.
- Parameters:
-
[in] source The handle to the source of the media [in] infer The handle to the inference [in] detected_cb The callback which will be called for detecting faces on media source. This callback will receive the detection results. [in] user_data The user data passed from the code where mv_inference_face_detect() is invoked. This data will be accessible in detected_cb callback.
- Returns:
0
on success, otherwise a negative error value
- Return values:
-
MEDIA_VISION_ERROR_NONE Successful MEDIA_VISION_ERROR_NOT_SUPPORTED Not supported MEDIA_VISION_ERROR_INVALID_PARAMETER Invalid parameter MEDIA_VISION_ERROR_INTERNAL Internal error MEDIA_VISION_ERROR_NOT_SUPPORTED_FORMAT Source colorspace isn't supported
- Precondition:
- Create a source handle by calling mv_create_source()
- Create an inference handle by calling mv_inference_create()
- Configure an inference handle by calling mv_inference_configure()
- Prepare an inference by calling mv_inference_prepare()
- Postcondition:
- detected_cb will be called to provide detection results
- See also:
- mv_inference_face_detected_cb()
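A sketch chaining face detection into facial landmark detection, using each detected face rectangle as the roi. It assumes landmark_infer is a second inference handle already configured and prepared with a facial landmark model; error handling is omitted:

```c
#include <stdio.h>
#include <mv_common.h>
#include <mv_inference.h>

/* Landmark callback: prints each landmark point of the analyzed face. */
static void _on_landmarks(mv_source_h source, int number_of_landmarks,
                          const mv_point_s *locations, void *user_data)
{
    for (int i = 0; i < number_of_landmarks; i++)
        printf("landmark %d: (%d, %d)\n", i, locations[i].x, locations[i].y);
}

/* Face callback: runs landmark detection inside each face rectangle.
 * The landmark-model inference handle is passed through user_data. */
static void _on_faces(mv_source_h source, int number_of_faces,
                      const float *confidences, const mv_rectangle_s *locations,
                      void *user_data)
{
    mv_inference_h landmark_infer = (mv_inference_h)user_data;

    for (int i = 0; i < number_of_faces; i++) {
        mv_rectangle_s roi = locations[i]; /* copy: locations is callback-only */
        mv_inference_facial_landmark_detect(source, landmark_infer, &roi,
                                            _on_landmarks, NULL);
    }
}

/* Usage: mv_inference_face_detect(source, face_infer, _on_faces, landmark_infer); */
```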
int mv_inference_facial_landmark_detect | ( | mv_source_h | source, |
mv_inference_h | infer, | ||
mv_rectangle_s * | roi, | ||
mv_inference_facial_landmark_detected_cb | detected_cb, | ||
void * | user_data | ||
) |
Performs facial landmarks detection on the source.
Use this function to launch facial landmark detection. Each time mv_inference_facial_landmark_detect() is called, detected_cb will receive a list of facial landmark locations in the media source.
- Since :
- 5.5
- Remarks:
- This function is synchronous and may take considerable time to run.
- Parameters:
-
[in] source The handle to the source of the media [in] infer The handle to the inference [in] roi Rectangular area including a face in source which will be analyzed. If NULL, then the whole source will be analyzed. [in] detected_cb The callback which will receive the detection results. [in] user_data The user data passed from the code where mv_inference_facial_landmark_detect() is invoked. This data will be accessible in detected_cb callback.
- Returns:
0
on success, otherwise a negative error value
- Return values:
-
MEDIA_VISION_ERROR_NONE Successful MEDIA_VISION_ERROR_NOT_SUPPORTED Not supported MEDIA_VISION_ERROR_INVALID_PARAMETER Invalid parameter MEDIA_VISION_ERROR_INTERNAL Internal error MEDIA_VISION_ERROR_NOT_SUPPORTED_FORMAT Source colorspace isn't supported
- Precondition:
- Create a source handle by calling mv_create_source()
- Create an inference handle by calling mv_inference_create()
- Configure an inference handle by calling mv_inference_configure()
- Prepare an inference by calling mv_inference_prepare()
- Postcondition:
- detected_cb will be called to provide detection results
int mv_inference_foreach_supported_engine | ( | mv_inference_h | infer, |
mv_inference_supported_engine_cb | callback, | ||
void * | user_data | ||
) |
Traverses the list of supported engines for inference.
Use this function to obtain the engines supported for inference. The names can be used with mv_engine_config_h related getters and setters to get/set the MV_INFERENCE_BACKEND_TYPE attribute value.
- Since :
- 5.5
- Parameters:
-
[in] infer The handle to the inference [in] callback The iteration callback function [in] user_data The user data to be passed to the callback function
- Returns:
0
on success, otherwise a negative error value
- Return values:
-
MEDIA_VISION_ERROR_NONE Successful MEDIA_VISION_ERROR_NOT_SUPPORTED Not supported MEDIA_VISION_ERROR_INVALID_PARAMETER Invalid parameter
- See also:
- mv_inference_supported_engine_cb()
int mv_inference_image_classify | ( | mv_source_h | source, |
mv_inference_h | infer, | ||
mv_rectangle_s * | roi, | ||
mv_inference_image_classified_cb | classified_cb, | ||
void * | user_data | ||
) |
Performs image classification on the source.
Use this function to launch image classification. Each time mv_inference_image_classify() is called, classified_cb will receive the classes which the media source may belong to.
- Since :
- 5.5
- Remarks:
- This function is synchronous and may take considerable time to run.
- Parameters:
-
[in] source The handle to the source of the media [in] infer The handle to the inference [in] roi Rectangular area in the source which will be analyzed. If NULL, then the whole source will be analyzed. [in] classified_cb The callback which will be called for classification on source. This callback will receive classification results. [in] user_data The user data passed from the code where mv_inference_image_classify() is invoked. This data will be accessible in classified_cb callback.
- Returns:
0
on success, otherwise a negative error value
- Return values:
-
MEDIA_VISION_ERROR_NONE Successful MEDIA_VISION_ERROR_NOT_SUPPORTED Not supported MEDIA_VISION_ERROR_INVALID_PARAMETER Invalid parameter MEDIA_VISION_ERROR_INVALID_OPERATION Invalid operation MEDIA_VISION_ERROR_INTERNAL Internal error MEDIA_VISION_ERROR_NOT_SUPPORTED_FORMAT Source colorspace isn't supported
- Precondition:
- Create a source handle by calling mv_create_source()
- Create an inference handle by calling mv_inference_create()
- Configure an inference handle by calling mv_inference_configure()
- Prepare an inference by calling mv_inference_prepare()
- Postcondition:
- classified_cb will be called to provide classification results
- See also:
- mv_inference_image_classified_cb()
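A minimal sketch of the classification flow described above (create, configure, prepare, classify). The helper name and the assumption that an engine configuration was populated elsewhere are illustrative, not part of this reference:

```c
#include <mv_inference.h>
#include <stdio.h>

/* Called synchronously by mv_inference_image_classify() with the
 * classes the media source may belong to. */
static void _classified_cb(mv_source_h source, int number_of_classes,
                           const int *indices, const char **names,
                           const float *confidences, void *user_data)
{
    for (int i = 0; i < number_of_classes; i++)
        printf("class %d: %s (%.2f)\n", indices[i], names[i], confidences[i]);
}

/* Hypothetical helper: classifies the whole source (roi == NULL). */
static int classify_image(mv_source_h source, mv_engine_config_h engine_cfg)
{
    mv_inference_h infer = NULL;
    int ret = mv_inference_create(&infer);
    if (ret != MEDIA_VISION_ERROR_NONE)
        return ret;

    ret = mv_inference_configure(infer, engine_cfg);
    if (ret == MEDIA_VISION_ERROR_NONE)
        ret = mv_inference_prepare(infer);  /* loads the configured model */
    if (ret == MEDIA_VISION_ERROR_NONE)
        ret = mv_inference_image_classify(source, infer, NULL,
                                          _classified_cb, NULL);

    mv_inference_destroy(infer);
    return ret;
}
```

Because the call is synchronous, the callback has completed by the time mv_inference_image_classify() returns, so the handle can be destroyed immediately afterwards.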
int mv_inference_object_detect(mv_source_h source, mv_inference_h infer, mv_inference_object_detected_cb detected_cb, void *user_data)
Performs object detection on the source.
Use this function to launch object detection. Each time mv_inference_object_detect() is called, detected_cb receives a list of objects and their locations in the media source.
- Since :
- 5.5
- Remarks:
- This function is synchronous and may take considerable time to run.
- Parameters:
- [in] source The handle to the source of the media
- [in] infer The handle to the inference
- [in] detected_cb The callback which will be called for detecting objects in the media source. This callback will receive the detection results.
- [in] user_data The user data passed from the code where mv_inference_object_detect() is invoked. This data will be accessible in the detected_cb callback.
- Returns:
- 0 on success, otherwise a negative error value
- Return values:
- MEDIA_VISION_ERROR_NONE Successful
- MEDIA_VISION_ERROR_NOT_SUPPORTED Not supported
- MEDIA_VISION_ERROR_INVALID_PARAMETER Invalid parameter
- MEDIA_VISION_ERROR_INTERNAL Internal error
- MEDIA_VISION_ERROR_NOT_SUPPORTED_FORMAT Source colorspace isn't supported
- Precondition:
- Create a source handle by calling mv_create_source()
- Create an inference handle by calling mv_inference_create()
- Configure an inference handle by calling mv_inference_configure()
- Prepare an inference by calling mv_inference_prepare()
- Postcondition:
- detected_cb will be called to provide detection results
- See also:
- mv_inference_object_detected_cb()
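The detection callback receives parallel arrays of class indices, names, confidences, and bounding boxes. A brief sketch, assuming a handle already configured and prepared as above (the printing logic is illustrative):

```c
#include <mv_inference.h>
#include <stdio.h>

/* Receives one entry per detected object; `locations` holds the
 * bounding boxes in source coordinates. */
static void _detected_cb(mv_source_h source, int number_of_objects,
                         const int *indices, const char **names,
                         const float *confidences,
                         const mv_rectangle_s *locations, void *user_data)
{
    for (int i = 0; i < number_of_objects; i++)
        printf("%s (%.2f) at (%d, %d) %dx%d\n",
               names[i], confidences[i],
               locations[i].point.x, locations[i].point.y,
               locations[i].width, locations[i].height);
}

/* With a prepared handle:
 *   int ret = mv_inference_object_detect(source, infer, _detected_cb, NULL);
 */
```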
int mv_inference_pose_get_label(mv_inference_pose_result_h result, int pose_index, int *label)
Gets a label of a pose.
- Since :
- 6.0
- Parameters:
- [in] result The handle to the inference result
- [in] pose_index The pose index, between 0 and the number of poses obtained from mv_inference_pose_get_number_of_poses()
- [out] label The label of the pose
- Returns:
- 0 on success, otherwise a negative error value
- Return values:
- MEDIA_VISION_ERROR_NONE Successful
- MEDIA_VISION_ERROR_NOT_SUPPORTED Not supported
- MEDIA_VISION_ERROR_INVALID_PARAMETER Invalid parameter
int mv_inference_pose_get_landmark(mv_inference_pose_result_h result, int pose_index, int pose_part, mv_point_s *location, float *score)
Gets landmark location of a part of a pose.
- Since :
- 6.0
- Parameters:
- [in] result The handle to the inference result
- [in] pose_index The pose index, between 0 and the number of poses obtained from mv_inference_pose_get_number_of_poses()
- [in] pose_part The landmark index, between 0 and the number of landmarks obtained from mv_inference_pose_get_number_of_landmarks()
- [out] location The location of the landmark
- [out] score The score of the landmark
- Returns:
- 0 on success, otherwise a negative error value
- Return values:
- MEDIA_VISION_ERROR_NONE Successful
- MEDIA_VISION_ERROR_NOT_SUPPORTED Not supported
- MEDIA_VISION_ERROR_INVALID_PARAMETER Invalid parameter
int mv_inference_pose_get_number_of_landmarks(mv_inference_pose_result_h result, int *number_of_landmarks)
Gets the number of landmarks per pose.
- Since :
- 6.0
- Parameters:
- [in] result The handle to the inference result
- [out] number_of_landmarks The pointer to the number of landmarks
- Returns:
- 0 on success, otherwise a negative error value
- Return values:
- MEDIA_VISION_ERROR_NONE Successful
- MEDIA_VISION_ERROR_NOT_SUPPORTED Not supported
- MEDIA_VISION_ERROR_INVALID_PARAMETER Invalid parameter
int mv_inference_pose_get_number_of_poses(mv_inference_pose_result_h result, int *number_of_poses)
Gets the number of poses.
- Since :
- 6.0
- Parameters:
- [in] result The handle to the inference result
- [out] number_of_poses The pointer to the number of poses
- Returns:
- 0 on success, otherwise a negative error value
- Return values:
- MEDIA_VISION_ERROR_NONE Successful
- MEDIA_VISION_ERROR_NOT_SUPPORTED Not supported
- MEDIA_VISION_ERROR_INVALID_PARAMETER Invalid parameter
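The four getters above can be combined to walk a pose result. A sketch, assuming `result` was delivered by the pose landmark detection callback:

```c
#include <mv_inference.h>
#include <stdio.h>

/* Prints every landmark of every pose held by `result`. */
static void print_poses(mv_inference_pose_result_h result)
{
    int poses = 0, landmarks = 0;
    if (mv_inference_pose_get_number_of_poses(result, &poses) !=
        MEDIA_VISION_ERROR_NONE)
        return;
    if (mv_inference_pose_get_number_of_landmarks(result, &landmarks) !=
        MEDIA_VISION_ERROR_NONE)
        return;

    for (int p = 0; p < poses; p++) {
        int label = 0;
        mv_inference_pose_get_label(result, p, &label);
        printf("pose %d, label %d\n", p, label);

        for (int l = 0; l < landmarks; l++) {
            mv_point_s location;
            float score = 0.0f;
            mv_inference_pose_get_landmark(result, p, l, &location, &score);
            printf("  landmark %d: (%d, %d) score %.2f\n",
                   l, location.x, location.y, score);
        }
    }
}
```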
int mv_inference_pose_landmark_detect(mv_source_h source, mv_inference_h infer, mv_rectangle_s *roi, mv_inference_pose_landmark_detected_cb detected_cb, void *user_data)
Performs pose landmarks detection on the source.
Use this function to launch pose landmark detection. Each time mv_inference_pose_landmark_detect() is called, detected_cb receives a list of pose landmark locations in the media source.
- Since :
- 6.0
- Remarks:
- This function is synchronous and may take considerable time to run.
- Parameters:
- [in] source The handle to the source of the media
- [in] infer The handle to the inference
- [in] roi Rectangular area including a face in the source which will be analyzed. If NULL, then the whole source will be analyzed.
- [in] detected_cb The callback which will receive the detection results.
- [in] user_data The user data passed from the code where mv_inference_pose_landmark_detect() is invoked. This data will be accessible in the detected_cb callback.
- Returns:
- 0 on success, otherwise a negative error value
- Return values:
- MEDIA_VISION_ERROR_NONE Successful
- MEDIA_VISION_ERROR_NOT_SUPPORTED Not supported
- MEDIA_VISION_ERROR_INVALID_PARAMETER Invalid parameter
- MEDIA_VISION_ERROR_INTERNAL Internal error
- MEDIA_VISION_ERROR_NOT_SUPPORTED_FORMAT Source colorspace isn't supported
- Precondition:
- Create a source handle by calling mv_create_source()
- Create an inference handle by calling mv_inference_create()
- Configure an inference handle by calling mv_inference_configure()
- Prepare an inference by calling mv_inference_prepare()
- Postcondition:
- detected_cb will be called to provide detection results
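A minimal sketch of the detection call and its callback, assuming a handle configured and prepared as in the preconditions above:

```c
#include <mv_inference.h>
#include <stdio.h>

/* Receives the detected landmarks as an mv_inference_pose_result_h,
 * which can then be walked with the mv_inference_pose_get_* getters. */
static void _pose_cb(mv_source_h source,
                     mv_inference_pose_result_h locations,
                     int label, void *user_data)
{
    int poses = 0;
    if (mv_inference_pose_get_number_of_poses(locations, &poses) ==
        MEDIA_VISION_ERROR_NONE)
        printf("label %d: %d pose(s) detected\n", label, poses);
}

/* With a prepared handle; a NULL roi analyzes the whole source:
 *   int ret = mv_inference_pose_landmark_detect(source, infer, NULL,
 *                                               _pose_cb, NULL);
 */
```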
int mv_inference_prepare(mv_inference_h infer)
Prepares inference.
Use this function to prepare inference based on the configured network.
- Since :
- 5.5
- Parameters:
- [in] infer The handle to the inference
- Returns:
- 0 on success, otherwise a negative error value
- Return values:
- MEDIA_VISION_ERROR_NONE Successful
- MEDIA_VISION_ERROR_NOT_SUPPORTED Not supported
- MEDIA_VISION_ERROR_PERMISSION_DENIED Permission denied
- MEDIA_VISION_ERROR_INVALID_PARAMETER Invalid parameter
- MEDIA_VISION_ERROR_INVALID_DATA Invalid model data
- MEDIA_VISION_ERROR_OUT_OF_MEMORY Out of memory
- MEDIA_VISION_ERROR_INVALID_OPERATION Invalid operation
- MEDIA_VISION_ERROR_NOT_SUPPORTED_FORMAT Not supported format
int mv_pose_compare(mv_pose_h pose, mv_inference_pose_result_h action, int parts, float *score)
Compares an action pose with the pose which is set by mv_pose_set_from_file().
Use this function to compare an action pose with the pose which is set by mv_pose_set_from_file(). The parts to be compared can be selected with mv_inference_human_body_part_e. Their similarity is given as a score between 0 and 1.
- Since :
- 6.0
- Remarks:
- If action contains multiple poses, the first pose is used for comparison.
- Parameters:
- [in] pose The handle to the pose
- [in] action The action pose
- [in] parts The parts to be compared
- [out] score The similarity score
- Returns:
- 0 on success, otherwise a negative error value
- Return values:
- MEDIA_VISION_ERROR_NONE Successful
- MEDIA_VISION_ERROR_NOT_SUPPORTED Not supported
- MEDIA_VISION_ERROR_INVALID_PARAMETER Invalid parameter
- MEDIA_VISION_ERROR_INVALID_OPERATION Invalid operation
- Precondition:
- Sets the pose by using mv_pose_set_from_file()
- Detects the pose by using mv_inference_pose_landmark_detect()
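A sketch of the comparison step run from the pose landmark detection callback. The body-part mask values are assumptions drawn from mv_inference_human_body_part_e and are illustrative:

```c
#include <mv_inference.h>
#include <stdio.h>

/* Compares the detected pose against a reference pose that was loaded
 * earlier with mv_pose_set_from_file() and passed here as user_data.
 * If multiple poses were detected, only the first is compared. */
static void _compare_cb(mv_source_h source,
                        mv_inference_pose_result_h locations,
                        int label, void *user_data)
{
    mv_pose_h pose = (mv_pose_h)user_data;
    float score = 0.0f;
    /* Illustrative part mask: compare both arms only. */
    int parts = MV_INFERENCE_HUMAN_BODY_PART_ARM_LEFT |
                MV_INFERENCE_HUMAN_BODY_PART_ARM_RIGHT;

    if (mv_pose_compare(pose, locations, parts, &score) ==
        MEDIA_VISION_ERROR_NONE)
        printf("similarity: %.2f\n", score); /* 0 = different, 1 = identical */
}
```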
int mv_pose_create(mv_pose_h *pose)
Creates pose handle.
Use this function to create a pose.
- Since :
- 6.0
- Remarks:
- The pose should be released using mv_pose_destroy().
- Parameters:
- [out] pose The handle to the pose to be created
- Returns:
- 0 on success, otherwise a negative error value
- Return values:
- MEDIA_VISION_ERROR_NONE Successful
- MEDIA_VISION_ERROR_NOT_SUPPORTED Not supported
- MEDIA_VISION_ERROR_INVALID_PARAMETER Invalid parameter
- MEDIA_VISION_ERROR_OUT_OF_MEMORY Out of memory
- See also:
- mv_pose_destroy()
int mv_pose_destroy(mv_pose_h pose)
Destroys pose handle and releases all its resources.
- Since :
- 6.0
- Parameters:
- [in] pose The handle to the pose to be destroyed
- Returns:
- 0 on success, otherwise a negative error value
- Return values:
- MEDIA_VISION_ERROR_NONE Successful
- MEDIA_VISION_ERROR_NOT_SUPPORTED Not supported
- MEDIA_VISION_ERROR_INVALID_PARAMETER Invalid parameter
- Precondition:
- Create pose handle by using mv_pose_create()
- See also:
- mv_pose_create()
int mv_pose_set_from_file(mv_pose_h pose, const char *motion_capture_file_path, const char *motion_mapping_file_path)
Sets a motion capture file and its pose mapping file to the pose.
Use this function to set a motion capture file and its pose mapping file. They are used by mv_pose_compare() to compare against a pose detected by mv_inference_pose_landmark_detect().
- Since :
- 6.0
- Remarks:
- If the app sets paths to media storage, then the media storage privilege http://tizen.org/privilege/mediastorage is needed.
If the app sets the paths to external storage, then the external storage privilege http://tizen.org/privilege/externalstorage is needed.
If the required privileges aren't set properly, mv_pose_set_from_file() will return MEDIA_VISION_ERROR_PERMISSION_DENIED.
- Parameters:
- [in] pose The handle to the pose
- [in] motion_capture_file_path The file path to the motion capture file
- [in] motion_mapping_file_path The file path to the motion mapping file
- Returns:
- 0 on success, otherwise a negative error value
- Return values:
- MEDIA_VISION_ERROR_NONE Successful
- MEDIA_VISION_ERROR_NOT_SUPPORTED Not supported
- MEDIA_VISION_ERROR_PERMISSION_DENIED Permission denied
- MEDIA_VISION_ERROR_INVALID_PARAMETER Invalid parameter
- MEDIA_VISION_ERROR_INVALID_PATH Invalid path of capture or mapping file
- MEDIA_VISION_ERROR_INTERNAL Internal error
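A sketch of loading a reference pose for later comparison. The file paths and file extensions are placeholders, not mandated by this API; the required storage privilege depends on where the files live:

```c
#include <mv_inference.h>
#include <stdio.h>

/* Creates a pose handle and loads a motion capture file plus its
 * mapping file into it; releases the handle again on failure. */
static int load_reference_pose(mv_pose_h *pose)
{
    int ret = mv_pose_create(pose);
    if (ret != MEDIA_VISION_ERROR_NONE)
        return ret;

    ret = mv_pose_set_from_file(*pose,
                                "/path/to/motion_capture.bvh",
                                "/path/to/motion_mapping.txt");
    if (ret == MEDIA_VISION_ERROR_PERMISSION_DENIED)
        printf("mediastorage/externalstorage privilege missing\n");
    if (ret != MEDIA_VISION_ERROR_NONE) {
        mv_pose_destroy(*pose);
        *pose = NULL;
    }
    return ret;
}
```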