{"id":171438,"date":"2015-03-05T01:57:24","date_gmt":"2015-03-05T01:57:24","guid":{"rendered":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/project\/human-activity-detection-in-rgbd-videos\/"},"modified":"2016-04-06T23:12:20","modified_gmt":"2016-04-06T23:12:20","slug":"human-activity-detection-in-rgbd-videos","status":"publish","type":"msr-project","link":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/project\/human-activity-detection-in-rgbd-videos\/","title":{"rendered":"Human activity detection in RGBD videos"},"content":{"rendered":"<div class=\"asset-content\">The ability to detect human actions in real time is fundamental to several applications such as surveillance, gaming, and sign language detection. These applications demand accurate and robust localization of actions at low latencies, which remains a very challenging computer vision task. In this project we present efficient descriptors for action detection on RGBD sequences.<\/div>\n<p><!-- .asset-content --><\/p>\n<div id=\"en-usprojectsactiondetectionrgbddefault\" class=\"page-content\">\n<p>In this project we introduce a real-time system for action detection. The system uses a small set of robust features extracted from 3D skeleton data. The features are described based on the probability distribution of the skeleton data. The descriptor computes a pyramid of sample covariance matrices and mean vectors to encode the relationship between the features. To handle intra-class variations, such as differences in the temporal scale of actions, the descriptor is computed using different window scales for each action. Discriminative elements of the descriptor are mined using feature selection. The system achieves accurate detection results on difficult unsegmented sequences. Experiments on the MSRC-12 and G3D datasets show that the proposed system outperforms the state of the art in detection accuracy with very low latency.
To the best of our knowledge, we are the first to propose using multi-scale description in action detection from 3D skeleton data.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-213139 aligncenter\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2015\/03\/rgbd-300x279.jpg\" alt=\"rgbd\" width=\"300\" height=\"279\" srcset=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2015\/03\/rgbd-300x279.jpg 300w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2015\/03\/rgbd.jpg 467w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>The ability to detect human actions in real time is fundamental to several applications such as surveillance, gaming, and sign language detection. These applications demand accurate and robust localization of actions at low latencies, which remains a very challenging computer vision task. In this project we present efficient descriptors for action detection on RGBD sequences.
In [&hellip;]<\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","footnotes":""},"research-area":[13562],"msr-locale":[268875],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-171438","msr-project","type-msr-project","status-publish","hentry","msr-research-area-computer-vision","msr-locale-en_us","msr-archive-status-active"],"msr_project_start":"3\/5\/2015","related-publications":[167866],"related-downloads":[],"related-videos":[],"related-groups":[],"related-events":[],"related-opportunities":[],"related-posts":[],"related-articles":[],"tab-content":[],"slides":[],"related-researchers":[],"msr_research_lab":[],"msr_impact_theme":[],"_links":{"self":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/171438","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-project"}],"about":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-project"}],"version-history":[{"count":0,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/171438\/revisions"}],"wp:attachment":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/media?parent=171438"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=171438"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=171438"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=171438"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=171438"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}