{"id":279831,"date":"2010-08-30T20:28:08","date_gmt":"2010-08-31T03:28:08","guid":{"rendered":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/?post_type=msr-project&#038;p=279831"},"modified":"2016-11-19T02:24:56","modified_gmt":"2016-11-19T10:24:56","slug":"video-collage-2","status":"publish","type":"msr-project","link":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/project\/video-collage-2\/","title":{"rendered":"Video Collage"},"content":{"rendered":"<div id=\"dedM\" class=\"deM\">\n<p>Video Collage is a synthesized image that enables users to quickly browse video content. Given a video, Video Collage selects the most representative images, extracts salient regions of interest (ROIs) from these images, and seamlessly arranges the ROIs on a given canvas. Video Collage can be used for Windows Vista Explorer, Live Search Video, and MSN Soapbox.<\/p>\n<\/div>\n<div class=\"cl\"><\/div>\n<div class=\"conM \">\n<p><b>Publications:<\/b><\/p>\n<ul>\n<li>Tao Mei, Bo Yang, Shi-Qiang Yang, Xian-Sheng Hua. &#8220;Video Collage: Presenting a Video Sequence Using a Single Image,&#8221; The Visual Computer, Vol. 25, Issue 1, pp. 39-51, Jan. 2009. [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"http:\/\/www.springerlink.com\/content\/a588026736u22023\/\">PDF<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<\/li>\n<li>Yan Wang, Tao Mei, Jingdong Wang, Xian-Sheng Hua. &#8220;Dynamic Video Collage,&#8221; International Conference on MultiMedia Modeling (MMM), LNCS 5916, pp. 793-795, Chongqing, China, 2010. [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"http:\/\/www.springerlink.com\/index\/v6471018218m0346.pdf\">PDF<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<\/li>\n<li>Bo Yang, Tao Mei, Li-Feng Sun, Shi-Qiang Yang, Xian-Sheng Hua. 
&#8220;Free-Shaped Video Collage,&#8221; International Conference on Multi-Media Modeling (MMM), LNCS 4903, pp. 175-185, Kyoto, Japan, Jan. 2008. [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"http:\/\/www.springerlink.com\/content\/045586607423w004\/\">PDF<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<\/li>\n<li>Tang Wang, Tao Mei, Xian-Sheng Hua, Xue-Liang Liu, He-Qin Zhou. &#8220;Video Collage: A Novel Presentation of Video Sequence,&#8221; In Proceedings of IEEE International Conference on Multimedia &amp; Expo (ICME), pp. 1479-1482, Beijing, China, July 2007. [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"http:\/\/ieeexplore.ieee.org\/xpls\/abs_all.jsp?arnumber=4284941\">PDF<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Video Collage is a synthesized image that enables users to quickly browse video content. Given a video, Video Collage selects the most representative images, extracts salient regions of interest (ROIs) from these images, and seamlessly arranges the ROIs on a given canvas. 
Video Collage can be used for [&hellip;]<\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","footnotes":""},"research-area":[13551],"msr-locale":[268875],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-279831","msr-project","type-msr-project","status-publish","hentry","msr-research-area-graphics-and-multimedia","msr-locale-en_us","msr-archive-status-active"],"msr_project_start":"2006-08-01","related-publications":[],"related-downloads":[],"related-videos":[],"related-groups":[],"related-events":[],"related-opportunities":[],"related-posts":[],"related-articles":[],"tab-content":[],"slides":[],"related-researchers":[],"msr_research_lab":[199560],"msr_impact_theme":[],"_links":{"self":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/279831","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-project"}],"about":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-project"}],"version-history":[{"count":0,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/279831\/revisions"}],"wp:attachment":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/media?parent=279831"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=279831"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=279831"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=279831"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/cm-edgetun.pages.d
ev\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=279831"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}