{"id":541,"date":"2023-03-30T17:39:56","date_gmt":"2023-03-30T09:39:56","guid":{"rendered":"https:\/\/doc.orionstar.com\/en\/?post_type=lsvr_kba&#038;p=541"},"modified":"2024-01-23T11:16:43","modified_gmt":"2024-01-23T03:16:43","slug":"visual-ability","status":"publish","type":"lsvr_kba","link":"https:\/\/doc.orionstar.com\/en\/knowledge-base\/visual-ability\/","title":{"rendered":"Visual Ability"},"content":{"rendered":"\n<h4 id=\"api-reference-visual-ability-introduction\"><strong>Introduction<\/strong><a href=\"https:\/\/ainirobot.gatsbyjs.io\/docs\/apk\/apk-development\/#api-reference-visual-ability-introduction\"><\/a><\/h4>\n\n\n\n<p>Visual ability currently mainly refers to two modules of person detection and recognition. Api provides mainly in PersonApi and RobotApi.<\/p>\n\n\n\n<p>Personnel information detection is a local capability. When a person is standing in front of the robot (excluding poor light conditions), the robot can detect the person in front. When the person is standing far away, both the face and the human body can be detected, and when the person is standing closer Only face information can be detected. When the person id is greater than or equal to 0, it means that the person&#8217;s face information is complete, and the person&#8217;s face photo can be obtained for recognition.<\/p>\n\n\n\n<p>Person recognition requires the use of face photos for recognition. This ability is deprecated for legal reasons. 
You can use Google or Microsoft person recognition instead.<\/p>\n\n\n\n<h4 id=\"api-reference-visual-ability-person-information\"><strong>Person information<\/strong><a href=\"https:\/\/ainirobot.gatsbyjs.io\/docs\/apk\/apk-development\/#api-reference-visual-ability-person-information\"><\/a><\/h4>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">int id; \/\/face id\nint angle; \/\/face angle\ndouble distance; \/\/distance to the person\nint headSpeed; \/\/robot head speed\nlong latency; \/\/data latency\nint facewidth; \/\/face width\nint faceheight; \/\/face height\ndouble faceAngleX; \/\/X-axis angle of face\ndouble faceAngleY; \/\/Y-axis angle of face\ndouble angleInView; \/\/The angle of the person relative to the robot's head\nint faceX; \/\/X-axis coordinate of face\nint faceY; \/\/Y-axis coordinate of face\nint bodyX; \/\/X-axis coordinate of human body\nint bodyY; \/\/Y-axis coordinate of human body\n\/\/The following fields are deprecated for legal reasons\nString remoteFaceId; \/\/deprecated\nint age; \/\/deprecated\nString gender; \/\/deprecated\nint glasses;\/\/deprecated\nboolean with_body;\/\/whether a human body is detected\nboolean with_face;\/\/whether a face is detected<\/pre>\n\n\n\n<h4 id=\"api-reference-visual-ability-person-change-monitoring\"><strong>Person change monitoring<\/strong><a href=\"https:\/\/ainirobot.gatsbyjs.io\/docs\/apk\/apk-development\/#api-reference-visual-ability-person-change-monitoring\"><\/a><\/h4>\n\n\n\n<p>Method name: registerPersonListener \/ unregisterPersonListener<\/p>\n\n\n\n<p>stopGetAllPersonInfo<\/p>\n\n\n\n<p>Calling method:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" 
data-enlighter-group=\"\">PersonListener listener = new PersonListener() {    \n    @Override    \n    public void personChanged() {        \n        super.personChanged();\/\/when the detected people change, call getAllPersons() to get info on all people in the robot's field of view  \n    }\n};\n\/\/register\nPersonApi.getInstance().registerPersonListener(listener);\n\/\/unregister\nPersonApi.getInstance().unregisterPersonListener(listener);<\/pre>\n\n\n\n<p>Parameter Description:<\/p>\n\n\n\n<ul>\n<li>listener: callback invoked when person information changes<\/li>\n<\/ul>\n\n\n\n<p>Applicable Platform:<\/p>\n\n\n\n<figure class=\"wp-block-table is-style-regular\"><table><thead><tr><th>GreetBot<\/th><th>Mini<\/th><th>Lucki<\/th><th>DeliverBot<\/th><th>BigScreenBot<\/th><\/tr><\/thead><tbody><tr><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h4 id=\"api-reference-visual-ability-get-all-personnel-information\"><strong>Get all personnel information<\/strong><a href=\"https:\/\/ainirobot.gatsbyjs.io\/docs\/apk\/apk-development\/#api-reference-visual-ability-get-all-personnel-information\"><\/a><\/h4>\n\n\n\n<p>Method name: getAllPersons<\/p>\n\n\n\n<p>Calling method:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">\/\/get info on all people in the robot's field of view\nList&lt;Person&gt; personList = PersonApi.getInstance().getAllPersons();\n\/\/get info on all people within 1 m of the robot\nList&lt;Person&gt; nearbyList = PersonApi.getInstance().getAllPersons(1);<\/pre>\n\n\n\n<p>Parameter Description:<\/p>\n\n\n\n<ul>\n<li>maxDistance: maximum distance from the robot, in meters; only people within this range are returned<\/li>\n<\/ul>\n\n\n\n<p>Return:<\/p>\n\n\n\n<ul>\n<li>personList: list of person information<\/li>\n<\/ul>\n\n\n\n<p>Applicable 
Platform:<\/p>\n\n\n\n<figure class=\"wp-block-table is-style-regular\"><table><thead><tr><th>GreetBot<\/th><th>Mini<\/th><th>Lucki<\/th><th>DeliverBot<\/th><th>BigScreenBot<\/th><\/tr><\/thead><tbody><tr><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h4 id=\"api-reference-visual-ability-get-a-list-of-people-who-have-detected-a-human-body\"><strong>Get a list of people with a detected human body<\/strong><a href=\"https:\/\/ainirobot.gatsbyjs.io\/docs\/apk\/apk-development\/#api-reference-visual-ability-get-a-list-of-people-who-have-detected-a-human-body\"><\/a><\/h4>\n\n\n\n<p>Method name: getAllBodyList<\/p>\n\n\n\n<p>Calling method:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">\/\/get info on all detected bodies in the robot's field of view\nList&lt;Person&gt; personList = PersonApi.getInstance().getAllBodyList();\n\/\/get info on all detected bodies within 1 m of the robot\nList&lt;Person&gt; nearbyList = PersonApi.getInstance().getAllBodyList(1);<\/pre>\n\n\n\n<p>Parameter Description:<\/p>\n\n\n\n<ul>\n<li>maxDistance: maximum distance from the robot, in meters; only people within this range are returned<\/li>\n<\/ul>\n\n\n\n<p>Return:<\/p>\n\n\n\n<ul>\n<li>personList: list of person information<\/li>\n<\/ul>\n\n\n\n<p>Applicable Platform:<\/p>\n\n\n\n<figure class=\"wp-block-table is-style-regular\"><table><thead><tr><th>GreetBot<\/th><th>Mini<\/th><th>Lucki<\/th><th>DeliverBot<\/th><th>BigScreenBot<\/th><\/tr><\/thead><tbody><tr><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h4 id=\"api-reference-visual-ability-get-a-list-of-people-whose-faces-are-detected\"><strong>Get a list of people whose faces are detected<\/strong><a 
href=\"https:\/\/ainirobot.gatsbyjs.io\/docs\/apk\/apk-development\/#api-reference-visual-ability-get-a-list-of-people-whose-faces-are-detected\"><\/a><\/h4>\n\n\n\n<p>Method name: getAllFaceList<\/p>\n\n\n\n<p>Calling method:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">\/\/get all face infos in the robot's field of view\nList&lt;Person&gt; personList = PersonApi.getInstance().getAllFaceList();\n\/\/get all face infos within 1m of the robot's field of view\nList&lt;Person&gt; personList = PersonApi.getInstance().getAllFaceList(1);<\/pre>\n\n\n\n<p>Parameter Description:<\/p>\n\n\n\n<ul>\n<li>maxDistance: What is the field of view of the robot, in meters<\/li>\n<\/ul>\n\n\n\n<p>Return:<\/p>\n\n\n\n<ul>\n<li>personList: personnel information list<\/li>\n<\/ul>\n\n\n\n<p>Applicable Platform:<\/p>\n\n\n\n<figure class=\"wp-block-table is-style-regular\"><table><thead><tr><th>GreetBot<\/th><th>Mini<\/th><th>Lucki<\/th><th>DeliverBot<\/th><th>BigScreenBot<\/th><\/tr><\/thead><tbody><tr><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h4 id=\"api-reference-visual-ability-a-list-of-people-who-have-detected-complete-faces\"><strong>a list of people who have detected complete faces<\/strong><a href=\"https:\/\/ainirobot.gatsbyjs.io\/docs\/apk\/apk-development\/#api-reference-visual-ability-a-list-of-people-who-have-detected-complete-faces\"><\/a><\/h4>\n\n\n\n<p>Method name: getCompleteFaceList<\/p>\n\n\n\n<p>Calling method:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">List&lt;Person&gt; personList = 
PersonApi.getInstance().getCompleteFaceList();<\/pre>\n\n\n\n<p>Return:<\/p>\n\n\n\n<ul>\n<li>personList: list of person information<\/li>\n<\/ul>\n\n\n\n<p>Applicable Platform:<\/p>\n\n\n\n<figure class=\"wp-block-table is-style-regular\"><table><thead><tr><th>GreetBot<\/th><th>Mini<\/th><th>Lucki<\/th><th>DeliverBot<\/th><th>BigScreenBot<\/th><\/tr><\/thead><tbody><tr><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h4 id=\"api-reference-visual-ability-get-the-person-who-is-following-the-focus\"><strong>Get the person currently being focus-followed<\/strong><a href=\"https:\/\/ainirobot.gatsbyjs.io\/docs\/apk\/apk-development\/#api-reference-visual-ability-get-the-person-who-is-following-the-focus\"><\/a><\/h4>\n\n\n\n<p><em>(this method is only valid while focus follow is running)<\/em><\/p>\n\n\n\n<p>Method name: getFocusPerson<\/p>\n\n\n\n<p>Calling method:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">Person person = PersonApi.getInstance().getFocusPerson();<\/pre>\n\n\n\n<p>Return:<\/p>\n\n\n\n<ul>\n<li>person: information of the currently followed person<\/li>\n<\/ul>\n\n\n\n<p>Applicable Platform:<\/p>\n\n\n\n<figure class=\"wp-block-table is-style-regular\"><table><thead><tr><th>GreetBot<\/th><th>Mini<\/th><th>Lucki<\/th><th>DeliverBot<\/th><th>BigScreenBot<\/th><\/tr><\/thead><tbody><tr><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h4 id=\"api-reference-visual-ability-get-face-photos\"><strong>Get face photos<\/strong><a href=\"https:\/\/ainirobot.gatsbyjs.io\/docs\/apk\/apk-development\/#api-reference-visual-ability-get-face-photos\"><\/a><\/h4>\n\n\n\n<p>Method name: getPictureById<\/p>\n\n\n\n<p>Calling method:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" 
data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">RobotApi.getInstance().getPictureById(reqId, faceId, count, new CommandListener() {    \n    @Override    \n    public void onResult(int result, String message) {        \n        try {            \n            JSONObject json = new JSONObject(message);            \n            String status = json.optString(\"status\");            \n            \/\/RESPONSE_OK means the photos were retrieved successfully            \n            if (Definition.RESPONSE_OK.equals(status)) {                \n                JSONArray pictures = json.optJSONArray(\"pictures\");                \n                if (!TextUtils.isEmpty(pictures.optString(0))) {                    \n                    \/\/full local path where the photo is stored                    \n                    String picturePath = pictures.optString(0);                \n                }            \n            }        \n        } catch (JSONException | NullPointerException e) {            \n            e.printStackTrace();        \n        }    \n    }\n});<\/pre>\n\n\n\n<p>Parameter Description:<\/p>\n\n\n\n<ul>\n<li>faceId: face id, which can be obtained through local person detection (the Person id)<\/li>\n\n\n\n<li>count: the number of photos to obtain; this parameter is currently ignored, and only one photo is returned<\/li>\n<\/ul>\n\n\n\n<p><em>Note: Pictures saved by this interface must be deleted manually after use<\/em><\/p>\n\n\n\n<p>Applicable Platform:<\/p>\n\n\n\n<figure class=\"wp-block-table is-style-regular\"><table><thead><tr><th>GreetBot<\/th><th>Mini<\/th><th>Lucki<\/th><th>DeliverBot<\/th><th>BigScreenBot<\/th><\/tr><\/thead><tbody><tr><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h4 id=\"api-reference-visual-ability-start-focus-follow\"><strong>Start focus 
follow<\/strong><a href=\"https:\/\/ainirobot.gatsbyjs.io\/docs\/apk\/apk-development\/#api-reference-visual-ability-start-focus-follow\"><\/a><\/h4>\n\n\n\n<p>Method name: startFocusFollow<\/p>\n\n\n\n<p>Calling method:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">RobotApi.getInstance().startFocusFollow(reqId, faceId, lostTimeout, maxDistance, new ActionListener() {    \n    @Override    \n    public void onStatusUpdate(int status, String data) {        \n        switch (status) {            \n            case Definition.STATUS_TRACK_TARGET_SUCCEED:                \n                \/\/started following the target successfully                \n                break;            \n            case Definition.STATUS_GUEST_LOST:                \n                \/\/target lost                \n                break;            \n            case Definition.STATUS_GUEST_FARAWAY:                \n                \/\/target out of range                \n                break;            \n            case Definition.STATUS_GUEST_APPEAR:                \n                \/\/target back in range                \n                break;        \n        }    \n    }    \n    @Override \n    public void onError(int errorCode, String errorString) {        \n        switch (errorCode) {            \n            case Definition.ERROR_SET_TRACK_FAILED:            \n            case Definition.ERROR_TARGET_NOT_FOUND:                \n                \/\/target not found                \n                break;            \n            case Definition.ACTION_RESPONSE_ALREADY_RUN:                \n                \/\/a focus follow is already running; stop it and try again                \n                break;            \n            case Definition.ACTION_RESPONSE_REQUEST_RES_ERROR:               
 \n                \/\/another API that controls the chassis (such as guidance or navigation) is already running; stop it before calling again                \n                break;        \n        }    \n    }    \n    @Override \n    public void onResult(int status, String responseString) {        \n        Log.d(TAG, \"startTrackPerson onResult status: \" + status);        \n        switch (status) {            \n            case Definition.ACTION_RESPONSE_STOP_SUCCESS:                \n                \/\/the stopFocusFollow call succeeded                \n                break;        \n        }    \n    }\n});<\/pre>\n\n\n\n<p>Parameter Description:<\/p>\n\n\n\n<ul>\n<li>faceId: face id, which can be obtained through local person detection (the Person id).<\/li>\n\n\n\n<li>lostTimeout: how long the target may remain unrecognized before the target-lost status is reported.<\/li>\n\n\n\n<li>maxDistance: the distance beyond which the out-of-range status is reported, in meters.<\/li>\n<\/ul>\n\n\n\n<p><em>Note1: While this API is running, it occupies chassis resources. 
No chassis operation, including navigation, can be performed at the same time.<\/em><\/p>\n\n\n\n<p><em>Note2: Do not call this API repeatedly to track the same person; call startFocusFollow again only after following has been lost or stopped.<\/em><\/p>\n\n\n\n<p>Applicable Platform:<\/p>\n\n\n\n<figure class=\"wp-block-table is-style-regular\"><table><thead><tr><th>GreetBot<\/th><th>Mini<\/th><th>Lucki<\/th><th>DeliverBot<\/th><th>BigScreenBot<\/th><\/tr><\/thead><tbody><tr><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h4 id=\"api-reference-visual-ability-stop-focus-follow\"><strong>Stop focus follow<\/strong><a href=\"https:\/\/ainirobot.gatsbyjs.io\/docs\/apk\/apk-development\/#api-reference-visual-ability-stop-focus-follow\"><\/a><\/h4>\n\n\n\n<p>Method name: stopFocusFollow<\/p>\n\n\n\n<p>Calling method:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">RobotApi.getInstance().stopFocusFollow(reqId);<\/pre>\n\n\n\n<p>Applicable Platform:<\/p>\n\n\n\n<figure class=\"wp-block-table is-style-regular\"><table><thead><tr><th>GreetBot<\/th><th>Mini<\/th><th>Lucki<\/th><th>DeliverBot<\/th><th>BigScreenBot<\/th><\/tr><\/thead><tbody><tr><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><td>Yes<\/td><\/tr><\/tbody><\/table><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Introduction Visual ability currently mainly refers to two modules of person detection and recognition. Api provides mainly in PersonApi and RobotApi. Personnel information detection is a local capability. When a person is standing in front of the robot (excluding poor light conditions), the robot can detect the person in front. 
When the person is standing [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","template":"","format":"standard","meta":[],"lsvr_kba_cat":[6],"lsvr_kba_tag":[],"_links":{"self":[{"href":"https:\/\/doc.orionstar.com\/en\/wp-json\/wp\/v2\/lsvr_kba\/541"}],"collection":[{"href":"https:\/\/doc.orionstar.com\/en\/wp-json\/wp\/v2\/lsvr_kba"}],"about":[{"href":"https:\/\/doc.orionstar.com\/en\/wp-json\/wp\/v2\/types\/lsvr_kba"}],"author":[{"embeddable":true,"href":"https:\/\/doc.orionstar.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/doc.orionstar.com\/en\/wp-json\/wp\/v2\/comments?post=541"}],"version-history":[{"count":4,"href":"https:\/\/doc.orionstar.com\/en\/wp-json\/wp\/v2\/lsvr_kba\/541\/revisions"}],"predecessor-version":[{"id":746,"href":"https:\/\/doc.orionstar.com\/en\/wp-json\/wp\/v2\/lsvr_kba\/541\/revisions\/746"}],"wp:attachment":[{"href":"https:\/\/doc.orionstar.com\/en\/wp-json\/wp\/v2\/media?parent=541"}],"wp:term":[{"taxonomy":"lsvr_kba_cat","embeddable":true,"href":"https:\/\/doc.orionstar.com\/en\/wp-json\/wp\/v2\/lsvr_kba_cat?post=541"},{"taxonomy":"lsvr_kba_tag","embeddable":true,"href":"https:\/\/doc.orionstar.com\/en\/wp-json\/wp\/v2\/lsvr_kba_tag?post=541"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}