{"id":613563,"date":"2019-10-08T03:09:50","date_gmt":"2019-10-08T10:09:50","guid":{"rendered":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/?post_type=msr-event&#038;p=613563"},"modified":"2025-08-06T11:53:42","modified_gmt":"2025-08-06T18:53:42","slug":"msra-academic-day-2019","status":"publish","type":"msr-event","link":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/event\/msra-academic-day-2019\/","title":{"rendered":"MSRA Academic Day 2019"},"content":{"rendered":"\n\n<p><strong>Venue:<\/strong> Microsoft Research Asia, Beijing<\/p>\n<p><strong>QR Code<\/strong>:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-615825\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/msra-academic-day-2019-qr-code.png\" alt=\"\" width=\"130\" height=\"130\" srcset=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/msra-academic-day-2019-qr-code.png 256w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/msra-academic-day-2019-qr-code-150x150.png 150w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/msra-academic-day-2019-qr-code-180x180.png 180w\" sizes=\"auto, (max-width: 130px) 100vw, 130px\" \/><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p>The Academic Day 2019 event brings together the intellectual power of researchers from across Microsoft Research Asia and the academic community to attain a shared understanding of the contemporary ideas and issues facing the field of tech. Together, we will advance the frontier of technology towards an ideal world of computing.<\/p>\n<p>Through our Microsoft Research Outreach Programs, Microsoft Research Asia has been actively collaborating with academic institutions to promote and progress further development in computer science and other technology domains. 
We have an ever-expanding partnership with leading universities across the Asia Pacific region to advance state-of-the-art research through various programs and initiatives.<\/p>\n<p>We are excited for \u201cMicrosoft Research Asia Academic Day 2019\u201d to facilitate comprehensive and insightful exchanges between Microsoft Research Asia and the academic community.<\/p>\n<h2>Program Chairs<\/h2>\n<ul class=\"msr-people-list stripped ms-row no-margin-bottom\">\n<li class=\"xs-col-12-24 s-col-8-24 m-col-6-24 l-col-8-24 margin-bottom-sp3\"><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/miran_lee.png\" alt=\"\" width=\"300\" height=\"300\" \/>\n<p class=\"body-alt no-margin-bottom\">Miran Lee<\/p>\n<p class=\"body-alt no-margin-bottom\">Outreach Director<\/p>\n<\/li>\n<li class=\"xs-col-12-24 s-col-8-24 m-col-6-24 l-col-8-24 margin-bottom-sp3\"><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/yongqiang_xiong.jpg\" alt=\"Portrait of Yongqiang Xiong\" width=\"150\" height=\"150\" \/>\n<p class=\"body-alt no-margin-bottom\">Yongqiang Xiong<\/p>\n<p class=\"body-alt no-margin-bottom\">Principal Research Manager<\/p>\n<\/li>\n<li class=\"xs-col-12-24 s-col-8-24 m-col-6-24 l-col-8-24 margin-bottom-sp3\"><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/07\/lyx-2019.png\" alt=\"\" width=\"150\" height=\"150\" \/>\n<p class=\"body-alt no-margin-bottom\">Yunxin Liu<\/p>\n<p class=\"body-alt no-margin-bottom\">Principal Research Manager<\/p>\n<\/li>\n<li class=\"xs-col-12-24 s-col-8-24 m-col-6-24 l-col-8-24 margin-bottom-sp3\"><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo 
msr-profile-image\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2016\/08\/avatar_user__1470987161-180x180.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\n<p class=\"body-alt no-margin-bottom\">Tao Qin<\/p>\n<p class=\"body-alt no-margin-bottom\">Senior Principal Research Manager<\/p>\n<\/li>\n<li class=\"xs-col-12-24 s-col-8-24 m-col-6-24 l-col-8-24 margin-bottom-sp3\"><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2016\/07\/avatar_user__1468038567-180x180.png\" alt=\"\" width=\"150\" height=\"150\" \/>\n<p class=\"body-alt no-margin-bottom\">Wenjun Zeng<\/p>\n<p class=\"body-alt no-margin-bottom\">Senior Principal Research Manager<\/p>\n<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<h2>November 7<\/h2>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-208\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-208\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-207\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tWorkshop on System and Networking for AI\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-207\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-208\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Abstract<\/strong>: We live in a world of connected entities including various systems (ranging from big cloud and edge systems to individual memory and disk systems) networked together. Innovations in systems and networking are key driving forces in the era of big data and artificial intelligence, to empower advanced intelligent algorithms with reliable, secure, scalable and efficient computing capacity to process huge volumes of data. We have witnessed the significant progress in cloud systems, and recently, edge computing, in particular AI on Edge, has attracted increasing attention from both academia and industry. 
This workshop aims to report and discuss the most recent progress and trends in the general systems and networking area, with a focus on infrastructure support for machine learning systems.<\/p>\n<p><strong>Event owners<\/strong>: <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/yunliu\/\">Yunxin Liu<\/a>, <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/yqx\/\">Yongqiang Xiong<\/a><\/p>\n<table class=\"msr-table-schedule\" style=\"border-spacing: inherit;border-collapse: collapse\" width=\"100%\">\n<thead class=\"thead\">\n<tr class=\"tr\">\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 20%\" width=\"20%\">Time (CST)<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 30%\" width=\"30%\">Workshops<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 35%\" width=\"35%\">Speaker<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 15%\" width=\"15%\">Location<\/th>\n<\/tr>\n<\/thead>\n<tbody class=\"tbody\">\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">2:00 PM\u20132:10 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Welcome and introductions<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Yunxin Liu & Yongqiang Xiong, Microsoft Research<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Dong Zhi Men, Microsoft Tower 1-1F<\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">2:10 PM\u20133:25 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Research at Microsoft (25 mins per talk x3)<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\n<ul>\n<li>Peng Cheng, Microsoft Research<\/li>\n<li>Ting 
Cao, Microsoft Research<\/li>\n<li>Quanlu Zhang, Microsoft Research<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">3:25 PM\u20134:40 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Research talks (25 mins per talk x3)<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\n<ul>\n<li>Chuan Wu, University of Hong Kong<\/li>\n<li>Xuanzhe Liu, Peking University<\/li>\n<li>Rajesh Krishna Balan, Singapore Management University<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">4:40 PM\u20135:20 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><p>Panel discussion<\/p>\n<p>Title: \u201cWhat\u2019s missing in system & networking for AI?\u201d<\/p>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\n<ul>\n<li>Yunxin Liu, Microsoft Research (Moderator)<\/li>\n<li>Yongqiang Xiong, Microsoft Research (Moderator)<\/li>\n<li>Chuan Wu, University of Hong Kong<\/li>\n<li>Xuanzhe Liu, Peking University<\/li>\n<li>Rajesh Krishna Balan, Singapore Management University<\/li>\n<li>Peng Cheng, Microsoft Research<\/li>\n<li>Ting Cao, Microsoft Research<\/li>\n<li>Quanlu Zhang, Microsoft Research<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">5:20 PM\u20135:30 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Wrap-up and closing<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px 
solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-210\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-210\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-209\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tWorkshop on Low-Resource Machine Learning\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-209\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-210\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Abstract<\/strong>: Deep learning has greatly driven this wave of AI. While deep learning has made many breakthroughs in recent years, its success heavily relies on big labeled data, big model, and big computing. As edge computing becomes the trend and more and more IoT devices become available, deep learning faces the low-resource challenge: how to learn from limited labeled data, with limited model size, and limited computation resources. The theme of this workshop is low-resource machine learning: learning from low-resource data, learning compact models, and learning with limited computational resources. 
This workshop aims to report the latest progress and discuss the trends and frontiers of research on low-resource machine learning.<\/p>\n<p><strong>Event owner<\/strong>: <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/taoqin\/\">Tao Qin<\/a><\/p>\n<table class=\"msr-table-schedule\" style=\"border-spacing: inherit;border-collapse: collapse\" width=\"100%\">\n<thead class=\"thead\">\n<tr class=\"tr\">\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 20%\" width=\"20%\">Time (CST)<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 30%\" width=\"30%\">Workshops<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 35%\" width=\"35%\">Speaker<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 15%\" width=\"15%\">Location<\/th>\n<\/tr>\n<\/thead>\n<tbody class=\"tbody\">\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">2:00 PM\u20132:10 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Welcome and introductions<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Tao Qin, Microsoft Research<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Xi Zhi Men, Microsoft Tower 1-1F<\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">2:10 PM\u20133:25 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Research at Microsoft (25 mins per talk x3)<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\n<ul>\n<li>Yingce Xia, Microsoft Research<\/li>\n<li>Xu Tan, Microsoft Research<\/li>\n<li>Guolin Ke, Microsoft Research<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid 
#000000\"><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">3:25 PM\u20134:40 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Research talks (25 mins per talk x3)<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\n<ul>\n<li>Jaegul Choo, Korea University<\/li>\n<li>Sinno Jialin Pan, Nanyang Technological University<\/li>\n<li>Sung Ju Hwang, KAIST<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">4:40 PM\u20135:20 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><p>Panel discussion<\/p>\n<p>Title: \u201cChallenges and Future of Low-Resource Machine Learning\u201d<\/p>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\n<ul>\n<li>Tao Qin, Microsoft Research (Moderator)<\/li>\n<li>Jaegul Choo, Korea University<\/li>\n<li>Sung Ju Hwang, KAIST<\/li>\n<li>Shujie Liu, Microsoft Research<\/li>\n<li>Dongdong Zhang, Microsoft Research<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">5:20 PM\u20135:30 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Wrap-up and closing<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" 
data-wp-context='{\"id\":\"accordion-content-212\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-212\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-211\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tWorkshop on Multimodal Representation Learning and Applications\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-211\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-212\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Abstract<\/strong>: We live in a world of multimedia (text, image, video, audio, sensor data, 3D, etc.). These modalities are integral components of real-world events and applications. A full understanding of multimedia relies heavily on feature learning, entity recognition, knowledge, reasoning, language representation, etc. Cross-modal learning, which requires joint feature learning and cross-modal relationship modeling, has attracted increasing attention from both academia and industry. 
This workshop aims to report and discuss the most recent progress and trends on multimodal representation learning for multimedia applications.<\/p>\n<p><strong>Event owners<\/strong>: <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<\/a>, <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/nanduan\/\">Nan Duan<\/a><\/p>\n<table class=\"msr-table-schedule\" style=\"border-spacing: inherit;border-collapse: collapse\" width=\"100%\">\n<thead class=\"thead\">\n<tr class=\"tr\">\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 20%\" width=\"20%\">Time (CST)<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 30%\" width=\"30%\">Workshops<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 35%\" width=\"35%\">Speaker<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 15%\" width=\"15%\">Location<\/th>\n<\/tr>\n<\/thead>\n<tbody class=\"tbody\">\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">2:00 PM\u20132:10 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Welcome and introductions<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Wenjun Zeng, Microsoft Research<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Tian An Men, Microsoft Tower 1-1F<\/td>\n<td><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">2:10 PM\u20133:25 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Research at Microsoft (25 mins per talk x3)<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\n<ul>\n<li>Nan Duan, Microsoft Research<\/li>\n<li>Yue Cao, Microsoft Research<\/li>\n<li>Chong Luo, Microsoft 
Research<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">3:25 PM\u20134:40 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Research talks (25 mins per talk x3)<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\n<ul>\n<li>Gunhee Kim, Seoul National University<\/li>\n<li>Winston Hsu, National Taiwan University<\/li>\n<li>Jiwen Lu, Tsinghua University<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">4:40 PM\u20135:20 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><p>Panel discussion<\/p>\n<p>Title: Opportunities and Challenges for Cross-Modal Learning<\/p>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\n<ul>\n<li>Wenjun Zeng, Microsoft Research (Moderator)<\/li>\n<li>Xilin Chen, Chinese Academy of Sciences<\/li>\n<li>Winston Hsu, National Taiwan University<\/li>\n<li>Gunhee Kim, Seoul National University<\/li>\n<li>Nan Duan, Microsoft Research<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">5:20 PM\u20135:30 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Wrap-up and closing<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><span id=\"label-external-link\" class=\"sr-only\" 
aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<h2>November 8<\/h2>\n<table class=\"msr-table-schedule\" style=\"border-spacing: inherit;border-collapse: collapse\" width=\"100%\">\n<thead class=\"thead\">\n<tr class=\"tr\">\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 20%\" width=\"20%\">Time (CST)<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 30%\" width=\"30%\">Workshops<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 35%\" width=\"35%\">Speaker<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 15%\" width=\"15%\">Location<\/th>\n<\/tr>\n<\/thead>\n<tbody class=\"tbody\">\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">09:00 \u2013 09:30<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Welcome & MSRA Overview<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Hsiao-Wuen Hon<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Gu Gong, Microsoft Tower 1-1F<\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">09:30 \u2013 09:40<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Fellowship Award Ceremony<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Presenter: Hsiao-Wuen Hon<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">09:40 \u2013 10:00<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Photo session & Break<\/td>\n<td 
style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">10:00 \u2013 10:40<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><p>Panel Discussion<\/p>\n<p>Title: \u201cHow to foster a computer scientist\u201d<\/p>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><p>Moderator: Tim Pan, Microsoft Research<\/p>\n<p>Panelists:<\/p>\n<ul>\n<li>Bohyung Han, Seoul National University<\/li>\n<li>Junichi Rekimoto, The University of Tokyo<\/li>\n<li>Winston Hsu, National Taiwan University<\/li>\n<li>Xin Tong, Microsoft Research<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">10:40 \u2013 11:55<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Technology Showcase by Microsoft Research Asia (5)<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\n<ul>\n<li>\u201cOneOCR For Digital Transformation\u201d by Qiang Huo<\/li>\n<li>\u201cNN grammar check\u201d by Tao Ge<\/li>\n<li>\u201cAutoSys: Learning based approach for system optimization\u201d by Mao Yang<\/li>\n<li>\u201cDual learning and its application in translation and speech from ML\u201d by Tao Qin (Yingce Xia and Xu Tan)<\/li>\n<li>\u201cSpreadsheet Intelligence for Ideas in Excel\u201d by Shi Han<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">12:00 \u2013 14:00<\/td>\n<td style=\"padding: 
8px;vertical-align: middle;border-bottom: 1px solid #000000\">Technology Showcase by Academic Collaborators<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Lunch, Microsoft Tower1-1F<\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">14:00 \u2013 17:30<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Breakout Sessions<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Language and Knowledge<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leader: Xing Xie<\/p>\n<p>Speakers: Seung-won Hwang, Min Zhang, Lei Chen, Masatoshi Yoshikawa, Shou-De Lin, Rui Yan, Hiroaki Yamane, Chenhui Chu, Tadashi Nomoto<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Zhong Guan Cun, Microsoft Tower 2-4F<\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">System and Networking<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leaders: Lidong Zhou, Yunxin Liu<\/p>\n<p>Speakers: Insik Shin, Wenfei Wu, Rajesh Krishna Balan, Youyou Lu, Chuck Yoo, Yu Zhang, Atsuko Miyaji, Jingwen Leng, Yao Guo, Heejo Lee, Cheng Li<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">San Li Tun, Microsoft Tower 2-4F<\/td>\n<\/tr>\n<tr 
class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Computer Vision<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leader: Wenjun Zeng<\/p>\n<p>Speakers: Gunhee Kim, Tianzhu Zhang, Yonggang Wen, Wen-Huang Cheng, Jiaying Liu, Bohyung Han, Wei-Shi Zheng, Jun Takamatsu, Xueming Qian<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Qian Men, Microsoft Tower 2-4F<\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Graphics<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leader: Xin Tong<\/p>\n<p>Speakers: Min H. Kim, Seungyong Lee, Sung-eui Yoon<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Di Tan, Microsoft Tower 2-4F<\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Multimedia<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leader: Yan Lu<\/p>\n<p>Speakers: Seung Ah Lee, Huanjing Yue, Hiroki Watanabe, Minsu Cho, Zhou Zhao, Seungmoon Choi<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Gu Lou, Microsoft Tower 2-4F<\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Healthcare<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leader: Eric Chang<\/p>\n<p>Speakers: Ryo Furukawa, Winston 
Hsu<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Dong Cheng, Microsoft Tower 2-4F<\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Data, Knowledge, and Intelligence<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leaders: Jian-Guang Lou, Qingwei Lin<\/p>\n<p>Speakers: Shixia Liu, Huamin Qu, Jong Kim, Yingcai Wu<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Xi Cheng, Microsoft Tower 2-4F<\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Machine Learning<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leader: Tao Qin<\/p>\n<p>Speakers: Hongzhi Wang, Seong-Whan Lee, Sinno Jialin Pan, Lijun Zhang, Jaegul Choo, Mingkui Tan, Liwei Wang<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Ri Tan, Microsoft Tower 2-4F<\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Speech<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leader: Frank Soong<\/p>\n<p>Speakers: Jun Du, Hong-Goo Kang<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Guo Zi Jian, Microsoft Tower 2-4F<\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">17:30-18:00<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Transition Break<\/td>\n<td 
style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">18:15 \u2013 20:30<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Banquet<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Ballroom located @ 3F, Tylfull Hotel<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<h2>Workshops<\/h2>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-214\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-214\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-213\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tAI Platform Acceleration with Programmable Hardware\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-213\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-214\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Peng Cheng, Microsoft Research<\/p>\n<p>Programmable hardware has been used to build high-throughput, low-latency real-time AI engines such as Brainwave. Rather than the AI engine itself, we focus on solving AI-platform bottlenecks, such as storage and networking I\/O, model distribution, synchronization, and data pre-processing in machine learning tasks, with acceleration from programmable hardware. Our proposed system enables direct hardware-assisted device-to-device interconnection with inline processing. 
We chose FPGA for our first prototype of a general platform for AI acceleration, since FPGAs have been widely deployed in Azure to deliver high performance at much lower cost. Our system can accelerate AI in many respects. It already enables GPUs to fetch training data directly from storage into GPU memory, bypassing costly CPU involvement. As an intelligent hub, it can also perform inline data pre-processing efficiently. More acceleration scenarios are under development, including in-network inference acceleration and a hardware parameter server for distributed machine learning.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-216\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-216\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-215\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tAudio captioning and knowledge-grounded conversation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-215\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-216\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Gunhee Kim, Seoul National University<\/p>\n<p>In this talk, I will introduce two recent works on NLP from the Vision and Learning Lab of Seoul National University. First, we present our work exploring the problem of audio captioning: generating natural-language descriptions for any kind of audio in the wild, a problem surprisingly unexplored in previous research. 
We not only contribute a large-scale dataset of about 46K audio clips paired with human-written text, collected via crowdsourcing, but also propose two novel components that improve the audio captioning performance of attention-based neural models. Second, I discuss our work on knowledge-grounded dialogue, in which we address the problem of better modeling knowledge selection in multi-turn knowledge-grounded dialogue. We propose a sequential latent variable model as the first approach to this problem. Our experimental results show that the proposed model improves knowledge selection accuracy and, in turn, the quality of utterance generation. <span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-218\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-218\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-217\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBuilding Large-Scale Decentralized Intelligent Software Systems\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-217\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-218\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker:<\/strong> Xuanzhe Liu, Peking University<\/p>\n<p>We are in a fast-growing flood of &#8220;data&#8221; and benefit significantly from the &#8220;intelligence&#8221; derived from it. Such intelligence heavily relies on a centralized paradigm, i.e., cloud-based systems and services. 
However, we are also at the dawn of an emerging &#8220;decentralized&#8221; fashion that makes intelligence more pervasive and even &#8220;handy&#8221; on smartphones, wearables, and IoT devices, along with collaborations among them and with the cloud. This talk discusses technical challenges and opportunities in building decentralized intelligence, mostly from a software-system perspective, covering programming abstraction, performance, privacy, energy, and interoperability. We also share our recent efforts in building such software systems, along with industrial experiences. <span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-220\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-220\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-219\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tColoring with Limited Data: Few-Shot Colorization via Memory-Augmented Networks\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-219\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-220\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker:<\/strong> Jaegul Choo, Korea University<\/p>\n<p>Despite recent advances, deep learning-based automatic colorization models are still limited when it comes to few-shot learning: existing models require a significant amount of training data. To tackle this issue, we present a novel memory-augmented colorization model that produces high-quality colorization with limited data. 
In particular, our model can capture rare instances and successfully colorize them. We also propose a novel threshold triplet loss that enables unsupervised training of memory networks without the need for class labels. Experiments show that our model achieves superior quality in both few-shot and one-shot colorization tasks.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-222\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-222\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-221\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFastSpeech: Fast, Robust and Controllable Text to Speech\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-221\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-222\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker:<\/strong> Xu Tan, Microsoft Research<\/p>\n<p>Neural network based end-to-end text to speech (TTS) has significantly improved the quality of synthesized speech. However, such end-to-end models suffer from slow inference speed, and the synthesized speech is usually not robust (i.e., some words are skipped or repeated) and lacks controllability (voice speed or prosody control). In this work, we propose a novel feed-forward network based on the Transformer to generate mel-spectrograms in parallel for TTS. 
Experiments show that our parallel model matches autoregressive models in terms of speech quality, nearly eliminates the problem of word skipping and repeating in particularly hard cases, and can adjust voice speed smoothly. Most importantly, compared with autoregressive Transformer TTS, our model speeds up mel-spectrogram generation by 270x and end-to-end speech synthesis by 38x.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-224\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-224\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-223\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tImproving the Performance of Video Analytics Using WiFi Signals\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-223\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-224\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker:<\/strong> Rajesh Krishna Balan, Singapore Management University<\/p>\n<p>Automatic analysis of the behaviour of large groups of people is a key requirement for a large class of applications such as crowd management, traffic control, and surveillance. For example, attributes such as the number of people, how they are distributed, which groups they belong to, and what trajectories they are taking can be used to optimize the layout of a mall to increase overall revenue. A common way to obtain these attributes is to use video camera feeds coupled with advanced video analytics solutions. 
However, relying solely on video feeds is challenging in high-density areas, such as a typical mall in Asia, as the high people density significantly reduces the effectiveness of video analytics due to factors such as occlusion. In this work, we propose to combine video feeds with WiFi data to better estimate the number of people in an area and the trajectories of those people. In particular, we believe that our approach will combine the strengths of the two different sensors, WiFi and video, while reducing the weaknesses of each. This work started fairly recently, and we will present our thinking and results to date.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-226\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-226\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-225\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tLearning Beyond 2D Images\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-225\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-226\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker:<\/strong> Winston Hsu, National Taiwan University<\/p>\n<p>We have observed super-human capabilities from current (2D) convolutional networks for images, for both discriminative and generative models. In this talk, we will show our recent attempts at visual cognitive computing beyond 2D images. 
We will first demonstrate the huge opportunities of augmenting learning with temporal cues, 3D (point cloud) data, raw data, audio, and more, across emerging domains such as entertainment, security, healthcare, and manufacturing. In an explainable manner, we will justify how to design neural networks that leverage these novel (and diverse) modalities, and we will demystify the pros and cons of these signals. We will showcase a few tangible applications, including video QA, robotic object referring, situation understanding, and autonomous driving. We will also review the lessons learned in designing advanced neural networks that accommodate multimodal signals in an end-to-end manner. <span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-228\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-228\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-227\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tLightGBM: A highly efficient gradient boosting machine\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-227\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-228\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Guolin Ke, Microsoft Research<\/p>\n<p>Gradient Boosting Decision Tree (GBDT) is a popular machine learning algorithm widely used in real-world applications. We open-sourced LightGBM, which contains many critical optimizations for efficient GBDT training and has become one of the most popular GBDT tools. 
During this talk, I will introduce the key technologies behind LightGBM.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-230\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-230\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-229\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMobiDL: Unleash the Mobile CPU Computing Power for Deep Learning Inference\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-229\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-230\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker:<\/strong> Ting Cao, Microsoft Research<\/p>\n<p>Deep learning (DL) models are increasingly deployed into real-world applications on mobile devices. However, current mobile DL frameworks neglect the CPU asymmetry, and the CPUs are seriously underutilized. We propose MobiDL for mobile DL inference, targeting improved CPU utilization and energy efficiency through novel designs for hardware asymmetry and appropriate frequency setting. It integrates four main techniques: 1) cost-model directed matrix block partition; 2) prearranged memory layout for model parameters; 3) asymmetry-aware task scheduling; and 4) data-reuse based CPU frequency setting. During the one-time initialization, the proper block partition, parameter layout, and efficient frequency for DL models can be configured by MobiDL. During inference, MobiDL scheduling balances tasks to fully utilize all the CPU cores. 
Evaluation shows that for CNN models, MobiDL achieves 85% performance and 72% energy-efficiency improvement on average compared to default TensorFlow. For RNN models, it achieves up to 17.51x performance and 8.26x energy-efficiency improvement. <span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-232\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-232\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-231\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMulti-agent dual learning\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-231\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-232\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Yingce Xia, Microsoft Research<\/p>\n<p>Dual learning is our recently proposed framework in which a primal task (e.g., Chinese-to-English translation) and a dual task (e.g., English-to-Chinese translation) are jointly optimized through a feedback signal. We extend standard dual learning to multi-agent dual learning, in which multiple models for the primal task and multiple models for the dual task are evolved. In this case, the feedback signal is enhanced and we obtain better performance. Experimental results in low-resource settings show that our method works well. 
At the WMT&#8217;19 machine translation competition, we won four top places using multi-agent dual learning.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-234\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-234\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-233\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMulti-view Deep Learning for Visual Content Understanding\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-233\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-234\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker:<\/strong> Jiwen Lu, Tsinghua University<\/p>\n<p>In this talk, I will give an overview of multi-view deep learning techniques and discuss how they are used to improve the performance of various visual content understanding tasks. Specifically, I will present three multi-view deep learning approaches: multi-view deep metric learning, multi-modal deep representation learning, and multi-agent deep reinforcement learning, and show how these methods are used for visual content understanding tasks. Lastly, I will discuss some open problems in multi-view deep learning and how more advanced multi-view methods for computer vision may be developed in the future. 
<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-236\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-236\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-235\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tNNI: An open source toolkit for neural architecture search and hyper-parameter tuning\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-235\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-236\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Quanlu Zhang, Microsoft Research<\/p>\n<p>Recent years have witnessed the great success of deep learning in a broad range of applications. Model tuning has become a key step in finding good models. To be effective in practice, a system is needed that facilitates this tuning procedure in terms of both programming effort and search efficiency. Thus, we open-sourced NNI (Neural Network Intelligence), a toolkit for neural architecture search and hyper-parameter tuning, which provides an easy-to-use interface and rich built-in AutoML algorithms. Moreover, it is highly extensible to support new tuning algorithms and requirements. 
With high scalability, many trials can run in parallel on various training platforms.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-238\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-238\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-237\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tPre-training for Video-Language Cross-Modal Tasks\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-237\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-238\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker:<\/strong> Chong Luo, Microsoft Research<\/p>\n<p>Video-language cross-modal tasks have received increasing interest in recent years, from video retrieval and video captioning to spatio-temporal localization in video by language query. In this talk, we will present research on and applications of some of these tasks. We will show how pre-trained single-modality models have made these tasks tractable and discuss the paradigm shift in deep neural network design brought by pre-trained models. In addition, we propose a universal cross-modality pre-training framework that may benefit a wide range of video-language tasks. We hope that our work will inspire other researchers to solve these interesting but challenging cross-modal tasks. 
<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-240\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-240\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-239\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tResource Scheduling for Distributed Deep Training\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-239\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-240\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker:<\/strong> Chuan Wu, University of Hong Kong<\/p>\n<p>More and more companies and institutions are running AI clouds and machine learning clusters with a variety of ML model training workloads to support AI-driven services. Efficient resource scheduling is the key to maximizing the performance of ML workloads, as well as the hardware efficiency of these very expensive ML clusters. There is large room for improving today\u2019s ML cluster schedulers, e.g., by including interference awareness in task placement and by scheduling not only computation but also communication. In this talk, I will share our recent work on designing deep learning job schedulers for ML clusters, aiming to expedite training and minimize training completion time. Our schedulers decide communication scheduling, the number of workers\/PSs, and the placement of workers\/PSs for jobs in the cluster, through both heuristics with theoretical support and reinforcement learning approaches. 
<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-242\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-242\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-241\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tTransferable Recursive Neural Networks for Fine-grained Sentiment Analysis\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-241\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-242\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker:<\/strong> Sinno Jialin Pan, Nanyang Technological University<\/p>\n<p>In fine-grained sentiment analysis, extracting aspect terms and opinion terms from user-generated text is the most fundamental task for generating structured opinion summaries. Existing studies have shown that the syntactic relations between aspect and opinion words play an important role in aspect and opinion term extraction. However, most prior works either relied on pre-defined rules or separated relation mining from feature learning. Moreover, these works focused only on single-domain extraction, which fails to adapt well to other domains of interest, where only unlabeled data is available. In real-world scenarios, annotated resources are extremely scarce for many domains and languages. In this talk, I am going to introduce our recent series of works on transfer learning for cross-domain and cross-language fine-grained sentiment analysis based on recursive neural networks. 
<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-244\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-244\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-243\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tVL-BERT: Pre-training of Generic Visual-Linguistic Representations\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-243\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-244\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker:<\/strong> Yue Cao, Microsoft Research<\/p>\n<p>We introduce a new pre-trainable generic representation for visual-linguistic tasks, called Visual-Linguistic BERT (VL-BERT for short). VL-BERT adopts the simple yet powerful Transformer model as the backbone and extends it to take both visual and linguistic embedded features as input, where each element of the input is either a word from the input sentence or a region-of-interest (RoI) from the input image. The model is designed to fit most visual-linguistic downstream tasks. To better exploit the generic representation, we pre-train VL-BERT on the massive-scale Conceptual Captions dataset together with a text-only corpus. Extensive empirical analysis demonstrates that the pre-training procedure better aligns visual-linguistic clues and benefits downstream tasks such as visual commonsense reasoning, visual question answering, and referring expression comprehension. 
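As a rough illustration of the input construction described above, the sketch below (plain NumPy, with made-up dimensions and random vectors standing in for real word embeddings and RoI features) concatenates linguistic and visual elements into a single Transformer input sequence; the segment ids marking each modality follow common BERT-style practice. This is an illustrative toy, not the actual VL-BERT code.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = 8  # hypothetical hidden size; the real model uses a much larger one

# Linguistic elements: one embedding per word in the input sentence.
word_embeddings = rng.normal(size=(5, hidden))  # 5 words
# Visual elements: one feature vector per region-of-interest (RoI).
roi_features = rng.normal(size=(3, hidden))     # 3 RoIs

# Each input element is either a word or an RoI; a segment id tells the
# Transformer backbone which modality each position belongs to.
sequence = np.concatenate([word_embeddings, roi_features], axis=0)
segment_ids = np.array([0] * 5 + [1] * 3)       # 0 = text, 1 = image

assert sequence.shape == (8, hidden)
```

The backbone then attends over all eight positions jointly, which is what lets pre-training align visual and linguistic clues.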
<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-246\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-246\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-245\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tWhen Language Meets Vision: Multi-modal NLP with Visual Contents\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-245\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-246\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<\/p>\n<p><strong>Speaker:<\/strong> Nan Duan, Microsoft Research<\/p>\n<p>In this talk, I will introduce our latest work on multi-modal NLP, including (i) multi-modal pre-training, which aims to learn the joint representations between language and visual contents; (ii) multi-modal reasoning, which aims to handle complex queries by manipulating knowledge extracted from language and visual contents; (iii) video-based QA\/summarization, which aims to make video contents readable and searchable. 
<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<h2>Breakout Sessions<\/h2>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-248\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-248\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-247\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tAdaptive Regret for Online Learning\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-247\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-248\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Lijun Zhang, Nanjing University<\/p>\n<p>To deal with changing environments, a new performance measure\u2014adaptive regret, defined as the maximum static regret over any interval, is proposed in online learning. Under the setting of online convex optimization, several algorithms have been developed to minimize the adaptive regret. However, existing algorithms are problem-independent and lack universality. In this talk, I will briefly introduce our two contributions in this direction. The first one is to establish problem-dependent bounds of adaptive regret by exploiting the smoothness condition. 
The second one is to design a universal algorithm that can handle multiple types of functions simultaneously.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-250\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-250\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-249\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tAdvances and Challenges on Human-Computer Conversational Systems\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-249\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-250\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Rui Yan, Peking University<\/p>\n<p>Nowadays, automatic human-computer conversational systems have attracted great attention from both industry and academia. Intelligent products such as XiaoIce (by Microsoft) have been released, while numerous Artificial Intelligence companies have been established. We see that the technology behind conversational systems is maturing and gradually becoming open to the public. Thanks to the efforts of researchers, conversational systems are no longer science fiction: they have become real. It is interesting to review the recent advances of human-computer conversational systems, especially the significant changes brought by deep learning techniques. 
It would also be exciting to anticipate the development and challenges in the future.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-252\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-252\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-251\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tAI and Data: A Closed Loop\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-251\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-252\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Hongzhi Wang, Harbin Institute of Technology<\/p>\n<p>Data is the foundation of modern Artificial Intelligence (AI). Efficient and effective AI requires the support of data acquisition, governance, management, analytics, and mining, which brings new challenges. From another perspective, advances in AI provide new opportunities to automate data processing. Thus, AI and data form a closed loop and promote each other. 
In this talk, the speaker will demonstrate the mutual promotion of AI and data with some examples and discuss further opportunities to promote both of these areas.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-254\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-254\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-253\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tArtificial Intelligence for Fashion\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-253\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-254\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Wen-Huang Cheng, National Chiao Tung University<\/p>\n<p>The fashion industry is one of the biggest in the world, representing over 2 percent of global GDP (2018). Artificial intelligence (AI) has been a predominant theme in the fashion industry and is impacting every part of it, at scales from personal to industrial and beyond. In recent years, my research group and I have been devoted to advanced AI research on helping revolutionize the fashion industry, enabling innovative applications and services with improved user experience. 
In this talk, I would like to give an overview of the major outcomes of our research and discuss what research subjects we can further work on together with Microsoft researchers to make new impact in the fashion domain.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-256\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-256\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-255\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBERT is not all you need\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-255\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-256\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Seung-won Hwang, Yonsei University<\/p>\n<p>This talk is inspired by a question raised at my talk at the MSRA faculty summit last year, where I presented NLP models in which injecting (diverse forms of) knowledge meaningfully enhances accuracy and robustness. Chin-Yew then asked: \u201cDo you think BERT implicitly contains all this information already?\u201d This talk is an extended investigation to support the short answer I gave at the talk. 
The title is a spoiler.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-258\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-258\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-257\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBig Data, AI and HI, What is Next?\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-257\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-258\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Lei Chen, Hong Kong University of Science and Technology<\/p>\n<p>Recently, AI has become quite popular and attractive, not only to academia but also to industry. The success stories of AI in various applications have raised significant public interest in AI. Meanwhile, human intelligence is turning out to be more sophisticated, and Big Data technology is everywhere, improving our quality of life. The question that we all want to ask is \u201cwhat is next?\u201d In this talk, I will discuss DHA, a new computing paradigm which combines big Data, Human intelligence, and AI. Specifically, I will first briefly explain the motivation for DHA. 
Then I will present the challenges, and after that I will highlight some possible solutions for building such a new paradigm.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-260\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-260\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-259\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCombinatorial Inference against Label Noise\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-259\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-260\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Bohyung Han, Seoul National University<\/p>\n<p>Label noise is one of the critical factors that significantly degrade the generalization performance of deep neural networks. To handle the label noise issue in a principled way, we propose a unique classification framework that constructs multiple models in heterogeneous coarse-grained meta-class spaces and makes joint inferences with the trained models for the final predictions in the original (base) class space. Our approach reduces the noise level by simply constructing meta-classes and improves accuracy via combinatorial inferences over multiple constituent classifiers. Since the proposed framework has distinct and complementary properties for the given problem, we can even incorporate additional off-the-shelf learning algorithms to improve accuracy further. 
We also introduce techniques to organize multiple heterogeneous meta-class sets using k-means clustering and to identify a desirable subset that leads to compact models. Our extensive experiments demonstrate outstanding performance in terms of accuracy and efficiency compared to state-of-the-art methods under various synthetic noise configurations and on a real-world noisy dataset.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-262\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-262\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-261\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCommunication-Efficient Geo-Distributed Multi-Task Learning\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-261\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-262\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Sinno Jialin Pan, Nanyang Technological University<\/p>\n<p>Multi-task learning aims to learn multiple tasks jointly by exploiting their relatedness to improve the generalization performance for each task. Traditionally, to perform multi-task learning, one needs to centralize data from all the tasks to a single machine. However, in many real-world applications, data of different tasks is owned by different organizations and geo-distributed over different local machines. 
Due to the heavy communication caused by transmitting the data, as well as data privacy and security concerns, it is impossible to send the data of different tasks to a master machine to perform multi-task learning. In this talk, we present our recent work on distributed multi-task learning, which jointly learns multiple tasks in the parameter server paradigm without sharing any training data, and has a theoretical guarantee on convergence to the solution obtained by the corresponding centralized multi-task learning algorithm.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-264\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-264\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-263\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCompact Snapshot Hyperspectral Imaging with Diffracted Rotation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-263\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-264\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Min H. Kim, KAIST<\/p>\n<p>Traditional snapshot hyperspectral imaging systems include various optical elements: a dispersive optical element (prism), a coded aperture, several relay lenses, and an imaging lens, resulting in an impractically large form factor. We seek an alternative, minimal form factor of snapshot spectral imaging based on recent advances in diffractive optical technology. 
We thereupon present a compact, diffraction-based snapshot hyperspectral imaging method, using only a novel diffractive optical element (DOE) in front of a conventional, bare image sensor. Our diffractive imaging method replaces the common optical elements in hyperspectral imaging with a single optical element. To this end, we tackle two main challenges: First, traditional diffractive lenses are not suitable for color imaging under incoherent illumination due to severe chromatic aberration, because the size of the point spread function (PSF) changes depending on the wavelength. By instead leveraging this wavelength-dependent property for hyperspectral imaging, we introduce a novel DOE design that generates an anisotropic shape of the spectrally-varying PSF. The PSF size remains virtually unchanged, but instead the PSF shape rotates as the wavelength of light changes. Second, since there is no dispersive element and no coded aperture mask, the ill-posedness of spectral reconstruction increases significantly. Thus, we propose an end-to-end network solution based on the unrolled architecture of an optimization procedure with a spatial-spectral prior, specifically designed for deconvolution-based spectral reconstruction. Finally, we demonstrate hyperspectral imaging with a fabricated DOE attached to a conventional DSLR sensor. 
Results show that our method compares well with other state-of-the-art hyperspectral imaging methods in terms of spectral accuracy and spatial resolution, while our compact, diffraction-based spectral imaging method uses only a single optical element on a bare image sensor.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-266\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-266\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-265\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tContextDM: Context-aware Permanent Data Management Framework for Android\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-265\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-266\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Jong Kim, Pohang University of Science and Technology (POSTECH)<\/p>\n<p>The data management practices of third-party apps have failed in terms of manageability and security, because modern systems cannot provide fine-grained data management and security due to a lack of understanding of stored data. As a result, users suffer from storage shortages, data stealing, and data tampering.<\/p>\n<p>To tackle the problem, we propose a novel and general data management framework, ContextDM, that sheds light on storage, helping system services and storage aid-apps gain a better understanding of permanent data. 
Specifically, the framework augments permanent data with metadata that includes contextual semantic information about the importance and sensitivity of the data. Further, we show the effectiveness of our framework by demonstrating ContextDM-based aid-tools that automatically identify important and useless data, as well as sensitive data that has been disclosed.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-268\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-268\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-267\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tControlling Deep Natural Language Generation Models\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-267\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-268\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Shou-De Lin, National Taiwan University<\/p>\n<p>Deep neural network based solutions have shown promising results in natural language generation recently. From autoencoders to Seq2Seq models to GAN-based solutions, deep learning models can already generate text that passes the Turing Test, making the outputs indistinguishable from human-generated ones. However, researchers have pointed out that the content generated from deep neural networks can be fairly unpredictable, meaning that it is non-trivial for humans to control the generated outputs. 
This talk discusses how to control the outputs of an NLG model and demonstrates some of our recent work along this line.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-270\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-270\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-269\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCross-lingual Visual Grounding and Multimodal Machine Translation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-269\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-270\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Chenhui Chu, Osaka University<\/p>\n<p>In this talk, we will introduce two of our recent works on multilingual and multimodal processing: cross-lingual visual grounding and multimodal machine translation. Visual grounding is a vision and language understanding task that aims to locate a region in an image according to a specific query phrase. We will present our work on cross-lingual visual grounding to expand the task to different languages. 
In addition, we will introduce our work on multimodal machine translation that incorporates semantic image regions with both visual and textual attention.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-272\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-272\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-271\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCryptography-based security solutions for internet of things\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-271\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-272\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Atsuko Miyaji, Osaka University<\/p>\n<p>The consequences of security failures in the era of the internet of things (IoT) can be catastrophic, as has been demonstrated by a rapidly growing list of IoT security incidents. As a result, people have begun to recognize the importance and value of bringing the highest level of security to IoT. Conventional wisdom has it that, though technologically superior, public-key cryptography (PKC) is too expensive to deploy in IoT devices and networks. 
In this talk, we present our cost-effective improvement of elliptic curve cryptography (ECC) in terms of memory and computational resources.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-274\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-274\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-273\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDeep Efficient Image (Video) Restoration\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-273\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-274\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Huanjing Yue, Tianjin University<\/p>\n<p>In this talk, I will introduce our team\u2019s work on image (video) denoising and demoir\u00e9ing.<\/p>\n<p>Realistic noise, which is introduced when capturing images under high ISO modes or low light conditions, is more complex than Gaussian noise and is therefore difficult to remove. By exploring the spatial, channel, and temporal correlations via deep CNNs, we can efficiently remove noise from images and videos. We construct two datasets to facilitate research on realistic noise removal for images and videos.<\/p>\n<p>Moir\u00e9 patterns, caused by aliasing between the grid of the display device and the camera sensor array, greatly degrade the visual quality of recaptured screen images. 
Considering that the recaptured screen image and the original screen content usually have a large difference in brightness, we construct a moir\u00e9 removal and brightness improvement (MRBI) database with moir\u00e9-free and moir\u00e9 image pairs to facilitate supervised learning and quantitative evaluation. Correspondingly, we propose a CNN-based moir\u00e9 removal and brightness improvement method. Our work provides a benchmark dataset and a good baseline method for the demoir\u00e9ing task.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-276\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-276\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-275\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDeep Reinforcement Learning for the Transfer from Simulation to the Real World with Uncertainties for AI Curling Robot System\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-275\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-276\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Seong-Whan Lee, Korea University<\/p>\n<p>Recently, deep reinforcement learning (DRL) has enabled real-world applications such as robotics. Here we teach a robot to succeed in curling (an Olympic discipline), which is a highly complex real-world application where a robot needs to carefully learn to play the game on the slippery ice sheet in order to compete well against human opponents. 
This scenario encompasses fundamental challenges: uncertainty, non-stationarity, infinite state spaces, and, most importantly, scarce data. One fundamental objective of this study is thus to better understand and model the transfer from simulation to real-world scenarios with uncertainty. We demonstrate our proposed framework and show videos, experiments, and statistics of Curly, our AI curling robot, being tested on a real curling ice sheet. Curly performed well both in classical game situations and when interacting with human opponents.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-278\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-278\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-277\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDevelopment of a 3D endoscopic system with abilities of multi-frame, wide-area scanning \t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-277\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-278\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Ryo Furukawa, Hiroshima City University<\/p>\n<p>For effective in situ endoscopic diagnosis and treatment, or robotic surgery, 3D endoscopic systems have been attracting many researchers. We have been developing a 3D endoscopic system based on an active stereo technique, which projects a special pattern wherein each feature is coded. We believe it is a promising approach because of its simplicity and high precision. 
However, previous works on this approach have problems. First, the quality of 3D reconstruction depended on the stability of feature extraction from the images captured by the endoscope camera. Second, due to the limited pattern projection area, the reconstructed region was relatively small. In this talk, we describe our work on a learning-based technique using CNNs to solve the first problem, and an extended bundle adjustment technique that integrates multiple shapes into a single consistent shape to address the second. The effectiveness of the proposed techniques compared to previous techniques was evaluated experimentally.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-280\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-280\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-279\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDifferential Privacy for Spatial and Temporal Data\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-279\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-280\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Masatoshi Yoshikawa, Kyoto University<\/p>\n<p>Differential Privacy (DP) has received increased attention as a rigorous privacy framework. In this talk, we introduce our recent studies on extending DP to spatio-temporal data. 
The topics include i) a DP mechanism under temporal correlations in the context of continuous data release; and ii) location privacy for location-based services over road networks.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-282\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-282\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-281\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDissecting and Accelerating Neural Network via Graph Instrumentation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-281\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-282\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Jingwen Leng, Shanghai Jiao Tong University<\/p>\n<p>Despite the enormous success of deep neural networks, there is still no solid understanding of their working mechanism. As such, one fundamental question arises &#8211; how should architects and system developers perform optimizations centered on DNNs? Treating them as a black box leads to efficiency and security issues: 1) DNN models require a fixed computation budget regardless of input; 2) a human-imperceptible perturbation to the input causes a DNN misclassification. This talk will present our efforts toward addressing those challenges. We recognize an increasing need to monitor and modify a DNN\u2019s runtime behavior, as evidenced by our recent work on effective path, and other researchers\u2019 work on network pruning and quantization. 
As such, we present our ongoing effort to build a graph instrumentation framework that makes these monitoring and modification capabilities convenient for programmers.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-284\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-284\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-283\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDynamic GPU Memory Management for DNNs\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-283\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-284\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Yu Zhang, University of Science & Technology of China<\/p>\n<p>While deep learning researchers are seeking deeper and wider nonlinear networks, there is an increasing challenge in deploying deep neural network applications on low-end GPU devices for mobile and edge computing due to the limited size of GPU DRAM. Existing deep learning frameworks lack effective GPU memory management for different reasons: frameworks built on dynamic computation graphs (e.g., PyTorch) cannot obtain the global computation graph, while frameworks built on static computation graphs (e.g., TensorFlow) can only impose limited dynamic GPU memory management strategies. 
In this talk, I will analyze state-of-the-art GPU memory management in existing DL frameworks, present the challenges of GPU memory management when running deep neural networks on low-end, resource-constrained devices, and finally give our thoughts.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-286\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-286\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-285\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tEmotional Speech Synthesis with Granularized Control\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-285\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-286\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker: <\/strong>Hong-Goo Kang, Yonsei University<\/p>\n<p>Tangible interaction allows a user to interact with a computer using ordinary physical objects. It substantially expands the interaction space owing to the natural affordances and metaphors provided by real objects. However, tangible interaction requires identifying the object held by the user or how the user is touching the object. In this talk, I will introduce two sensing techniques for tangible interaction, which exploit active sensing using mechanical vibration. 
A vibration is transmitted from an exciter worn on the user\u2019s hand or fingers, and the transmitted vibration is measured using a sensor. By comparing the input-output pair, we can recognize the object held between two fingers or the fingers touching the object. The mechanical vibrations also provide pleasant confirmation feedback to the user. Details will be shared in the talk.<\/p>\n<p>In end-to-end deep learning-based emotional text-to-speech (TTS) systems, such as those using Tacotron networks, it is very important to provide additional embedding vectors to flexibly control the distinct characteristics of the target emotion. This talk introduces a couple of methods to effectively estimate representative embedding vectors. Using the mean of the embedding vectors is a simple approach, but the expressiveness of the synthesized speech is not satisfactory. To enhance expressiveness, we need to consider the distribution of emotion embedding vectors. An inter-to-intra (I2I) distance ratio-based algorithm recently proposed by our research team shows much higher performance than the conventional mean-based one. The I2I algorithm is also useful for gradually changing the intensity of expressiveness. Listening test results verify that the emotional expressiveness and controllability of the I2I algorithm are superior to those of the mean-based one. 
<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-288\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-288\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-287\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFairness in Recommender Systems\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-287\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-288\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<\/p>\n<p><strong>Speaker<\/strong>: Min Zhang, Tsinghua University<\/p>\n<p>Recommender systems have played significant roles in our daily life, and are expected to be available to any user, regardless of their gender, age or other demographic factors. Recently, there has been a growing concern about the bias that can creep into personalization algorithms and produce unfairness issues. In this talk, I will introduce the trending topics and our recent research progresses at THUIR (Tsinghua University Information Retrieval) group on fairness issue in recommender systems, including the causes of unfairness and the approaches to handle it. 
This series of work provides new ideas for building fairness-aware recommender systems, and has been published at top-tier international conferences, including SIGIR 2018, WWW 2019, and SIGIR 2019.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-290\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-290\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-289\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFLUID: Flexible User Interface Distribution for Ubiquitous Multi-device Interaction\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-289\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-290\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Insik Shin, KAIST<\/p>\n<p>The growing trend of multi-device ownership creates a need and an opportunity to use applications across multiple devices. However, in general, current app development and usage still remain within the single-device paradigm, falling far short of user expectations. For example, it is currently not possible for a user to dynamically partition an existing live streaming app with chatting capabilities across different devices, such that she watches her favorite broadcast on her smart TV while real-time chatting on her smartphone. In this talk, we present FLUID, a new Android-based multi-device platform that enables innovative ways of using multiple devices. 
FLUID aims to i) allow users to migrate or replicate individual user interfaces (UIs) of a single app on multiple devices (high flexibility), ii) require no additional development effort to support unmodified, legacy applications (ease of development), and iii) support a wide range of apps that follow the trend of using custom-made UIs (wide applicability). FLUID meets these goals by carefully analyzing which UI states are necessary to correctly render UI objects, deploying only those states on different devices, supporting cross-device function calls transparently, and synchronizing the UI states of replicated UI objects across multiple devices. Our evaluation with 20 unmodified, real-world Android apps shows that FLUID can transparently support a wide range of apps and is fast enough for interactive use.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-292\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-292\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-291\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tGlobal Texture Mapping for Dynamic Objects\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-291\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-292\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Seungyong Lee, Pohang University of Science and Technology (POSTECH)<\/p>\n<p>In this talk, I will introduce a novel framework to generate a global texture atlas for a deforming geometry. 
Our approach is distinguished from prior art in two aspects. First, instead of generating a texture map for each timestamp to color a dynamic scene, our framework reconstructs a global texture atlas that can be consistently mapped to a deforming object. Second, our approach is based on a single RGB-D camera, without the need for a multi-camera setup surrounding the scene. In our framework, the input is a 3D template model with an RGB-D image sequence, and geometric warping fields are found using a state-of-the-art non-rigid registration method to align the template mesh to noisy and incomplete input depth images. With these warping fields, our multi-scale approach for texture coordinate optimization generates a sharp and clear texture atlas that is consistent with multiple color observations over time. Our approach provides a handy configuration to capture a dynamic geometry along with a clean texture atlas, and we demonstrate it with practical scenarios, particularly human performance capture.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-294\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-294\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-293\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tGradient Descent Finds Global Minima of Deep Neural Networks\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-293\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-294\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Liwei 
Wang, Peking University<\/p>\n<p>Gradient descent finds a global minimum in training deep neural networks despite the objective function being non-convex. The current paper proves gradient descent achieves zero training loss in polynomial time for a deep over-parameterized neural network with residual connections (ResNet). Our analysis relies on the particular structure of the Gram matrix induced by the neural network architecture. This structure allows us to show the Gram matrix is stable throughout the training process and this stability implies the global optimality of the gradient descent algorithm. We further extend our analysis to deep residual convolutional neural networks and obtain a similar convergence result.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-296\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-296\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-295\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tGraph-based Action Assessment\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-295\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-296\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Wei-Shi Zheng, Sun Yat-sen University<\/p>\n<p>We present a new model to assess the performance of actions visually from videos by graph-based joint relation modelling. 
Previous works mainly focused on the whole scene, including the performer&#8217;s body and background, yet they ignored the detailed joint interactions. This is insufficient for fine-grained and accurate action assessment, because the action quality of each joint is dependent on its neighboring joints. Therefore, we propose to learn the detailed joint motion based on the joint relations. We build trainable Joint Relation Graphs, and analyze joint motion on them. We propose two novel modules, namely the Joint Commonality Module and the Joint Difference Module, for joint motion learning. The Joint Commonality Module models the general motion for certain body parts, and the Joint Difference Module models the motion differences within body parts. We evaluate our method on six public Olympic actions for performance assessment. Our method outperforms previous approaches (+0.0912) and the whole-scene model (+0.0623) in terms of Spearman&#8217;s rank correlation. We also demonstrate our model&#8217;s ability to interpret the action assessment process.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-298\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-298\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-297\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tIntelligent Action Analytics with Multi-Modal Reasoning\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-297\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-298\"\n\t\t>\n\t\t\t<div 
class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Jiaying Liu, Peking University<\/p>\n<p>In this talk, we focus on intelligent action analytics in videos with multi-modal reasoning, which is important but remains under explored. We first present challenges in this problem by introducing PKU-MMD dataset collected by ourselves, i.e., multi-modal complementary feature learning, noise-robust feature learning, and dealing with tedious label annotation, etc. To tackle the above issues, we propose initial solutions with multi-modal reasoning. A modality compensation network is proposed to explicitly explore relationship of different modalities and further boost multi-modal feature learning. A noise-invariant network is developed to recognize human actions from noisy skeletons by referring denoised skeletons. To light up the community, we introduce possible future work in the end, such as self-supervised learning, language-guided reasoning.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-300\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-300\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-299\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tKafe: can OS kernel handle packets fast enough\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-299\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-300\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Chuck Yoo, Korea University<\/p>\n<p>It is widely believed that 
commodity operating systems cannot deliver high-speed packet processing, and a number of alternative approaches (including user-space network stacks) have been proposed. This talk revisits the inefficiency of packet processing inside the kernel and explores whether a redesign of kernel network stacks can close this gap. We present a case through a redesign: Kafe \u2013 a kernel-based advanced forwarding engine. Contrary to this belief, Kafe can process packets as fast as user-space network stacks. Kafe neither adds any new API nor depends on proprietary hardware features.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-302\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-302\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-301\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tLearning Multi-label Feature for Fine-Grained Food Recognition\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-301\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-302\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Xueming Qian, Xi&#8217;an Jiaotong University<\/p>\n<p>Fine-grained food recognition is a detailed classification task that provides more specialized and professional attribute information about food. It is fundamental work for realizing healthy diet recommendation and cooking instructions, nutrition intake management, and cafeteria self-checkout systems. 
Chinese food often lacks structured appearance information, so ingredient composition is an important consideration. We propose a new method for fine-grained food and ingredient recognition that includes an Attention Fusion Network (AFN) and Food-Ingredient Joint Learning. The AFN focuses on important regional attention features and generates the feature descriptor. In Food-Ingredient Joint Learning, we propose a balanced focal loss to address the imbalance in the multi-label ingredient data. Finally, a series of experiments shows significant improvements over existing methods.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-304\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-304\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-303\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tLearning to Appreciate: Transforming Multimedia Communications via Deep Video Analytics\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-303\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-304\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Yonggang Wen, Nanyang Technological University<\/p>\n<p>Media-rich applications will continue to dominate mobile data traffic with exponential growth, as predicted by the Cisco Video Index. Improved quality of experience (QoE) for video consumers plays an important role in shaping this growth. 
However, most of the existing approaches to improving video QoE are system-centric and model-based, in that they tend to derive insights from system parameters (e.g., bandwidth, buffer time, etc.) and propose various mathematical models to predict QoE scores (e.g., mean opinion score, etc.). In this talk, we will share our latest work in developing a unified and scalable framework to transform multimedia communications via deep video analytics. Specifically, our framework consists of two main components. One is a deep-learning-based QoE prediction algorithm that combines multi-modal data inputs to provide a more accurate assessment of QoE in a real-time manner. The other is a model-free QoE optimization paradigm built upon a deep reinforcement learning algorithm. Our preliminary results verify the effectiveness of the proposed framework. We believe that this hybrid approach of multimedia communications and computing will fundamentally transform how we optimize multimedia communication system design and operations.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-306\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-306\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-305\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tLensless Imaging for Biomedical Applications\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-305\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-306\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Seung 
Ah Lee, Yonsei University<\/p>\n<p>Miniaturization of microscopes can be a crucial stepping stone towards realizing compact, cost-effective, and portable platforms for biomedical research and healthcare. This talk reports on implementations of lensless microscopes and lensless cameras for a variety of biological imaging applications in the form of mass-producible semiconductor devices, which transform the fundamental design of optical imaging systems.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-308\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-308\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-307\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tLeveraging Generative Adversarial Networks for Data Augmentation by Disentangling Class-Independent Features\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-307\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-308\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Jaegul Choo, Korea University<\/p>\n<p>Given their success in generating high-quality, realistic data, generative adversarial networks (GANs) have the potential to be used for data augmentation to improve prediction accuracy in diverse problems where only a limited amount of training data is available. However, GANs themselves require a nontrivial amount of data for their training, so data augmentation via GANs does not often improve the accuracy in practice. 
This talk will briefly review the existing literature and our ongoing approach based on feature disentanglement. I will conclude the talk with further research issues that I would like to address in the future.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-310\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-310\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-309\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tManipulatable Auditory Perception in Wearable Computing\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-309\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-310\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Hiroki Watanabe, Hokkaido University<\/p>\n<p>Since auditory perception is a passive sense, we often fail to notice important information and acquire unimportant information. We focus on an earphone-type wearable computer (hearable device) that has not only speakers but also microphones. In a hearable computing environment, microphones and speakers are always attached to the ears. Therefore, we can manipulate our auditory perception using a hearable device. We manipulate the frequency of the input sound from the microphones and transmit the converted sound from the speakers. 
Thus, we can acquire sounds that are not audible to normal auditory perception and eliminate unwanted sounds according to the user\u2019s requirements.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-312\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-312\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-311\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tModel Centric DevOps for Network Functions\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-311\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-312\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Wenfei Wu, Tsinghua University<\/p>\n<p>Network Functions (NFs) play important roles in improving performance and enhancing security in modern computer networks. More and more NFs are being developed, integrated, and managed in production networks. However, the connection between development and operation for network functions has not yet drawn attention, which slows down the development and delivery of NFs and complicates NF network management.<\/p>\n<p>We propose that building a common abstraction layer for network functions would benefit both development and operation. 
For NF development, having a uniform abstraction layer to describe NF behaviors would make cross-platform development rapid and agile, accelerating NF delivery for NF vendors; we will introduce our recent NF development framework based on language and compiler technologies. For NF operation, having a behavior model would ease network reasoning, which can avoid runtime bugs, and, more crucially, the behavior model is guaranteed to reflect the actual implementation; we will introduce our NF verification work based on the NF modeling language. Around our model-centric NF development and operation, we also present other NF modeling work that lays the foundation of the NF modeling language and fills the semantic gap between legacy NFs and NF models.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-314\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-314\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-313\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tNAT: Neural Architecture Transformer for Accurate and Compact Architectures\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-313\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-314\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Mingkui Tan, South China University of Technology<\/p>\n<p>Architecture design is one of the key factors behind the success of deep neural networks. 
Existing deep architectures are either manually designed or automatically searched by Neural Architecture Search (NAS) methods. However, even a well-searched architecture may still contain many non-significant or redundant modules or operations (e.g., convolution or pooling), which not only incur substantial memory consumption and computational cost but may also degrade performance. Thus, it is necessary to optimize the operations inside an architecture to improve performance without introducing extra computational cost. However, such a constrained optimization problem is NP-hard and difficult to solve. To address this problem, we cast the optimization problem as a Markov decision process (MDP) and learn a Neural Architecture Transformer (NAT) to replace redundant operations with more computationally efficient ones (e.g., a skip connection or directly removing the connection). Within the MDP, we train NAT with reinforcement learning to obtain architecture optimization policies for different architectures. To verify the effectiveness of the proposed method, we apply NAT to both hand-crafted and NAS-based architectures. 
Extensive experiments on two benchmark datasets, i.e., CIFAR-10 and ImageNet, show that the transformed architecture significantly outperforms both the original architecture and the architectures optimized by the existing methods.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-316\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-316\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-315\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tNovelty-aware exploration in RL and Conditional GANs for diversity\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-315\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-316\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Gunhee Kim, Seoul National University<\/p>\n<p>In this talk, I will introduce two recent works on machine learning from Vision and Learning Lab of Seoul National University. First, we present our work in reinforcement learning. We introduce an information-theoretic exploration strategy named Curiosity-Bottleneck (CB) that distills task-relevant information from observation. In our experiments, we observe that the CB algorithm robustly measures the state novelty in distractive environments where state-of-the-art exploration methods often degenerate. Second, we propose novel training schemes with a new set of losses that can prevent conditional GANs from losing the diversity in their outputs. 
We perform thorough experiments on image-to-image translation, super-resolution, and image inpainting, and show that our methods achieve great diversity in outputs while retaining or even improving the visual fidelity of generated samples.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-318\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-318\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-317\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tNumerical\/quantitative system for common sense natural language processing\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-317\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-318\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Hiroaki Yamane, RIKEN AIP & The University of Tokyo<\/p>\n<p>Numerical common sense (e.g., \u201ca person with a height of 2 m is very tall\u201d) is essential when deploying artificial intelligence (AI) systems in society. We construct methods for converting contextual language to numerical variables for quantitative\/numerical common sense in natural language processing (NLP).<\/p>\n<p>We live in a world where common sense is needed. We use common sense when observing objects: a 165 cm human cannot be bigger than a 1 km bridge. The weight of the aforementioned human ranges from 40 kg to 90 kg. If one\u2019s weight is less than 50 kg, they are likely to be very thin. The same applies to money. 
If the latest Surface Pro costs $500, it is quite cheap. Future AI systems will need to account for such common sense.<\/p>\n<p>To address this problem, we first use a crowdsourcing service to obtain sufficient data for subjective agreement on numerical common sense. Second, to examine whether such common sense is captured by current word embeddings, we evaluate the performance of a regressor trained on the obtained data.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-320\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-320\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-319\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tParaphrasing and Simplification with Lean Vocabulary\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-319\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-320\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Tadashi Nomoto, The SOKENDAI Graduate School of Advanced Studies<\/p>\n<p>In this work, we examine whether it is possible to achieve state-of-the-art performance in paraphrase generation with a reduced vocabulary. Our approach consists of building a convolution-to-sequence model (Conv2Seq) partially guided by reinforcement learning, and training it on a sub-word representation of the input. 
The experiment on the Quora dataset, which contains over 140,000 pairs of sentences and corresponding paraphrases, found that with less than 1,000 token types, we were able to achieve performance that exceeded that of the current state of the art. We also report that the same architecture works equally well for text simplification, with little change.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-322\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-322\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-321\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tRay-SSL: Ray Tracing based Sound Source Localization considering Reflection and Diffraction\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-321\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-322\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Sung-eui Yoon, KAIST<\/p>\n<p>In this talk, we discuss a novel, ray tracing based technique for 3D sound source localization for indoor and outdoor environments. Unlike prior approaches, which are mainly based on continuous sound signals from a stationary source, our formulation is designed to localize the position instantaneously from signals within a single frame. We consider direct sound and indirect sound signals that reach the microphones after reflecting off surfaces such as ceilings or walls. 
We then generate and trace direct and reflected acoustic paths using backward acoustic ray tracing and utilize these paths with Monte Carlo localization to estimate a 3D sound source position. For complex cases with many objects, we also found that diffraction effects caused by the wave characteristics of sound become dominant. We propose to handle such non-trivial problems even with ray tracing, since directly applying wave simulation is prohibitively expensive.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-324\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-324\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-323\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tRecent Advances and Trends in Visual Tracking\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-323\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-324\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Tianzhu Zhang, University of Science and Technology of China<\/p>\n<p>Visual tracking is one of the most fundamental topics in computer vision with various applications in video surveillance, human computer interaction and vehicle navigation. Although great progress has been made in recent years, it remains a challenging problem due to factors such as illumination changes, geometric deformations, partial occlusions, fast motions and background clutters. 
In this talk, I will first review several recent models of visual tracking including particle filtering, classifier learning for tracking, sparse tracking, deep learning tracking, and correlation filter based tracking. Then, I will review several recent works of our group including correlation particle filter tracking, and graph convolutional tracking.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-326\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-326\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-325\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tRelational Knowledge Distillation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-325\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-326\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Minsu Cho, Pohang University of Science and Technology (POSTECH)<\/p>\n<p>Knowledge distillation aims at transferring knowledge acquired in one model (a teacher) to another model (a student) that is typically smaller. Previous approaches can be expressed as a form of training the student to mimic output activations of individual data examples represented by the teacher. We introduce a novel approach, dubbed relational knowledge distillation (RKD), that transfers mutual relations of data examples instead. For concrete realizations of RKD, we propose distance-wise and angle-wise distillation losses that penalize structural differences in relations. 
Experiments conducted on different tasks show that the proposed method improves the trained student models by a significant margin. In particular, for metric learning it allows student models to outperform their teachers&#8217; performance, achieving state-of-the-art results on standard benchmark datasets.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-328\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-328\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-327\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tRequirements of Computer Vision for Household Robots\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-327\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-328\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Jun Takamatsu, Nara Institute of Science and Technology<\/p>\n<p>For household robots that work in dynamic everyday environments, computer vision (CV) for recognizing those environments is essential. Unfortunately, CV issues in household robots sometimes cannot be solved by the methods usually proposed in the CV field. In this talk, I present two such examples and invite discussion of their solutions. The first example is CV in learning-from-observation, where it is not enough to recognize the names of actions, such as walking and jumping. The second example is the analysis of time use. 
This requires recognizing activities at the level of, for example, watching TV or pursuing one\u2019s hobby.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-330\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-330\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-329\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tSoftware and Hardware Co-design for Networked Memory\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-329\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-330\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Youyou Lu, Tsinghua University<\/p>\n<p>Non-volatile memory (NVM) and remote direct memory access (RDMA) provide extremely high performance in storage and network hardware. Comparatively, the software overhead of file systems becomes a non-negligible part of persistent memory storage systems. To achieve an efficient networked memory design, I will present the design choices in Octopus. Octopus is a distributed file system that redesigns file system internal mechanisms by closely coupling NVM and RDMA features. 
I will further discuss possible hardware enhancements for networked memory under research in my group.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-332\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-332\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-331\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tSystem support for designing efficient gradient compression algorithms for distributed DNN training\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-331\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-332\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Cheng Li, University of Science and Technology of China<\/p>\n<p>Training DNN models across a large number of connected devices or machines has become the norm. Studies suggest that the major bottleneck in scaling out training jobs is exchanging the huge volume of gradients per mini-batch. Thus, a few compression algorithms, such as Deep Gradient Compression and TernGrad, have been proposed and evaluated to demonstrate their benefits in reducing transmission cost. However, when re-implementing these algorithms and integrating them into mainstream frameworks such as MXNet, we found that they performed less efficiently than claimed in their original papers. The major gap is that the developers of those algorithms did not necessarily understand the internals of the deep learning frameworks. 
As a consequence, we believe there is a lack of system support for enabling algorithm developers to focus primarily on the innovations of their compression algorithms, rather than on efficient implementations that must take into account various levels of parallelism. To this end, we propose a domain-specific language that allows algorithm developers to sketch their compression algorithms, a translator that converts the high-level descriptions into highly optimized low-level GPU code, and a compiler that generates new computation DAGs that fuse the compression algorithms with the operators that produce gradients.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-334\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-334\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-333\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tTowards solving the cocktail party problem: from speech separation to speech recognition\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-333\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-334\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Jun Du, University of Science and Technology of China<\/p>\n<p>Solving the cocktail party problem is an ultimate goal for machines to achieve human-level auditory perception. Speech separation and recognition are two related key techniques. 
With the emergence of deep learning, new milestones have been achieved in both speech separation and recognition. In this talk, I will introduce our recent progress and future trends in these areas, in the context of the DIHARD and CHiME challenges.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-336\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-336\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-335\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tToward Ubiquitous Operating Systems: Challenges and Research Directions\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-335\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-336\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Yao Guo, Peking University<\/p>\n<p>In recent years, operating systems have expanded beyond traditional computing systems into the cloud, IoT devices, and other emerging technologies and will soon become ubiquitous. We call this new generation of OSs ubiquitous operating systems (UOSs). Despite the apparent differences among existing OSs, they all have in common so-called \u201csoftware-defined\u201d capabilities\u2014namely, resource virtualization and function programmability. 
In this talk, I will present our vision and some recent work toward the development of ubiquitous operating systems (UOSs).<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-338\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-338\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-337\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tVibration-Mediated Sensing Techniques for Tangible Interaction\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-337\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-338\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker: <\/strong>Seungmoon Choi, Pohang University of Science and Technology (POSTECH)<\/p>\n<p>Tangible interaction allows a user to interact with a computer using ordinary physical objects. It substantially expands the interaction space owing to the natural affordances and metaphors provided by real objects. However, tangible interaction requires identifying the object held by the user or how the user is touching the object. In this talk, I will introduce two sensing techniques for tangible interaction, which exploit active sensing using mechanical vibration. A vibration is transmitted from an exciter worn on the user\u2019s hand or fingers, and the transmitted vibration is measured using a sensor. By comparing the input-output pair, we can recognize the object held between two fingers or the fingers touching the object. 
The mechanical vibrations also provide pleasant confirmation feedback to the user. Details will be shared in the talk.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-340\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-340\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-339\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tVideo Analytics in Crowded Spaces\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-339\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-340\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Rajesh Krishna Balan, Singapore Management University<\/p>\n<p>I will describe the line of work I am starting on video analytics in crowded spaces. This includes malls, conference centres, and university campuses in Asia. 
The goal of this work is to use video analytics, combined with other sensors, to accurately count the number of people in these environments, track their movement trajectories, and discover their demographics and personas.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-342\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-342\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-341\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tVideo Dialog via Progressive Inference and Cross-Transformer\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-341\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-342\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Zhou Zhao, Zhejiang University<\/p>\n<p>Video dialog is a new and challenging task that requires an agent to answer questions by combining video information with dialog history. Unlike single-turn video question answering, the additional dialog history is important for video dialog, as it often includes contextual information for the question. Existing visual dialog methods mainly use RNNs to encode the dialog history as a single vector representation, which can be too coarse. Some more advanced methods utilize hierarchical structures, attention, and memory mechanisms, but still lack an explicit reasoning process. 
In this work, we introduce a novel progressive inference mechanism for video dialog, which progressively updates query information based on dialog history and video content until the agent deems the information sufficient and unambiguous. To tackle the multimodal fusion problem, we propose a cross-transformer module, which can learn more fine-grained and comprehensive interactions both within and between the modalities. Besides answer generation, we also consider question generation, which is more challenging but significant for a complete video dialog system. We evaluate our method on two large-scale datasets, and extensive experiments show the effectiveness of our method.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-344\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-344\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-343\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tVisual Analytics of Sports Data\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-343\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-344\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Yingcai Wu, Zhejiang University<\/p>\n<p>With the rapid development of sensing technologies and wearable devices, large volumes of sports data are acquired daily. The data usually contain a wide spectrum of information and rich knowledge about sports. 
Visual analytics, which facilitates analytical reasoning through interactive visual interfaces, has proven its value in solving various problems. In this talk, I will discuss our research experiences in visual analytics of sports data and introduce several of our group\u2019s recent studies on making sense of sports data through interactive visualization.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-346\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-346\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-345\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tVisual Analytics for Data Quality Improvement\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-345\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-346\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Shixia Liu, Tsinghua University<\/p>\n<p>The quality of training data is crucial to the success of supervised and semi-supervised learning. Errors in data have long been known to limit the performance of machine learning models. This talk presents the motivation for, and major challenges of, interactive data quality analysis and improvement. 
With that perspective, I will then discuss some of my recent efforts on 1) analyzing and correcting poor label quality, and 2) resolving the poor coverage of the training data caused by dataset bias.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-348\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-348\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-347\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tVIS+AI: Making AI more Explainable and VIS more Intelligent\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-347\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-348\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Huamin Qu, Hong Kong University of Science and Technology<\/p>\n<p>VIS for AI and AI for VIS have become hot research topics recently. On the one hand, visualization plays an important role in explainable AI. On the other hand, AI has been transforming the visualization field and automating the whole visualization system development pipeline. 
In this talk, I will introduce the emerging opportunities of combining AI and VIS to leverage both human and artificial intelligence to solve grand challenges facing both fields and society.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-350\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-350\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-349\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tWhat We Learned from Medical Image Learning\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-349\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-350\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Winston Hsu, National Taiwan University<\/p>\n<p>We have observed super-human capabilities from convolutional networks for image learning. It is a natural extension to advance these technologies towards healthcare applications such as medical image segmentation (CT, MRI), registration, detection, and prediction. In the past few years, working closely with university hospitals, we have found many exciting developments in this area. However, we have also learned a lot working in a cross-disciplinary setup, which requires strong devotion and deep expertise in both the medical and machine learning domains. We\u2019d like to take this opportunity to share where we failed and where we succeeded in our attempts to advance machine learning for medical applications. 
We will identify promising working models (and also the misunderstandings between these two disciplines) developed with the medical experts, and demonstrate the great opportunities to discover new treatment or diagnosis methods across numerous common diseases.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<h2>Workshops<\/h2>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Rajesh-Krishna-Balan.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Rajesh Krishna Balan<\/strong><\/p>\n<p>Singapore Management University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-352\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-352\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-351\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-351\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-352\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Prof. Balan is an ACM Distinguished Scientist and has worked in the area of mobile systems for over 18 years. He obtained his Ph.D. in Computer Science in 2006 from Carnegie Mellon University under the guidance of Professor Mahadev Satyanarayanan. He has been a general chair for both MobiSys 2016 and UbiComp 2018 and has served as a program chair for HotMobile 2012 and MobiSys 2019. In addition, he also organised a student workshop, called ASSET, that ran at MobiCom 2019, COMSNETS 2018, and MobiSys 2016. Prof. 
Balan has a strong interest in applied research and was a director for LiveLabs (http:\/\/www.livelabs.smu.edu.sg), a large research \/ startup lab that turned real-world environments (such as a university, a convention centre, and a resort island) into living testbeds for mobile systems experiments. He founded a startup to more effectively provide LiveLabs technologies to interested commercial clients. These experiences have given Prof Balan a great insight into how hard and meaningful it is to translate research into tangible systems that are tested and deployed in the real world.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Ting-Cao.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Ting Cao<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link 
m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-354\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-354\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-353\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-353\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-354\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Ting Cao is now a Researcher in the System Research Group of MSRA. Her research interests include HW\/SW co-design, high-level language implementation, software management of heterogeneous hardware, and big data and deep learning frameworks. She has published in reputable venues such as ISCA, ASPLOS, PLDI, and the Proceedings of the IEEE. She received her PhD from the Australian National University. 
Before joining MSRA, she was a senior software engineer in the Compiler and Computing Language Lab in Huawei Technologies.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Yue-Cao.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Yue Cao<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-356\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-356\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-355\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-355\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-356\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Yue Cao is now a researcher at Microsoft Research Asia. He received the B.E. degree in Computer Software in 2014 and the Ph.D. degree in Software Engineering in 2019, both from Tsinghua University, China. He was awarded the Top-grade Scholarship of Tsinghua University in 2018, and the Microsoft Research Asia PhD Fellowship in 2017. His research interests include computer vision and deep learning. 
He has published more than 20 papers in the top-tier conferences with more than 1,700 citations.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Xilin-Chen.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Xilin Chen<\/strong><\/p>\n<p>Chinese Academy of Sciences<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul 
class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-358\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-358\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-357\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-357\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-358\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Xilin Chen is a professor with the Institute of Computing Technology, Chinese Academy of Sciences (CAS). He has authored one book and more than 300 papers in refereed journals and proceedings in the areas of computer vision, pattern recognition, image processing, and multimodal interfaces. He is currently an associate editor of the IEEE Transactions on Multimedia, a Senior Editor of the Journal of Visual Communication and Image Representation, a leading editor of the Journal of Computer Science and Technology, and an associate editor-in-chief of the Chinese Journal of Computers and the Chinese Journal of Pattern Recognition and Artificial Intelligence. He served as an Organizing Committee member for many conferences, including general co-chair of FG13 \/ FG18 and program co-chair of ICMI 2010. He is \/ was an area chair of CVPR 2017 \/ 2019 \/ 2020, and ICCV 2019. 
He is a fellow of the IEEE, IAPR, and CCF.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Peng-Cheng.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Peng Cheng<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" 
data-wp-context='{\"id\":\"accordion-content-360\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-360\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-359\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-359\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-360\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Peng Cheng is a researcher in the Networking Research Group, MSRA. His research interests are computer networking and networked systems. His recent work focuses on hardware-based systems in data centers. He has publications in NSDI, CoNEXT, EuroSys, SIGCOMM, etc. He received his Ph.D. 
in Computer Science and Technology from Tsinghua University in 2015.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jaegul-Choo.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Jaegul Choo<\/strong><\/p>\n<p>Korea University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul 
class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-362\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-362\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-361\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-361\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-362\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Jaegul Choo (https:\/\/sites.google.com\/site\/jaegulchoo\/ ) is an associate professor in the Dept. of Computer Science and Engineering at Korea University. He was a research scientist at Georgia Tech from 2011 to 2015, where he also received his M.S. in 2009 and Ph.D. in 2013. His research areas include computer vision, natural language processing, data mining, and visual analytics, and his work has been published in premier venues such as KDD, WWW, WSDM, CVPR, ECCV, EMNLP, AAAI, IJCAI, ICDM, ICWSM, IEEE VIS, EuroVIS, CHI, TVCG, CFG, and CG&A. 
He earned the Best Student Paper Award at ICDM in 2016, the NAVER Young Faculty Award in 2015, the Outstanding Research Scientist Award at Georgia Tech in 2015, and the Best Poster Award at IEEE VAST (as part of IEEE VIS) in 2014.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Nan-Duan.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Nan Duan<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-364\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-364\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-363\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-363\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-364\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr. Nan Duan is a Principal Research Manager at Microsoft Research Asia. 
He is working on fundamental NLP tasks, especially on question answering, natural language understanding, language + vision, pre-training and reasoning.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Winston-HSU.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Winston Hsu<\/strong><\/p>\n<p>National Taiwan University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-366\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-366\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-365\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-365\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-366\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Prof. Winston Hsu is an active researcher dedicated to large-scale image\/video retrieval\/mining, visual recognition, and machine intelligence. He is a Professor in the Department of Computer Science and Information Engineering, National Taiwan University. He and his team have been recognized with technical awards in multimedia and computer vision research communities including IBM Research Pat Goldberg Memorial Best Paper Award (2018), Best Brave New Idea Paper Award in ACM Multimedia 2017, First Place for IARPA Disguised Faces in the Wild Competition (CVPR 2018), First Prize in ACM Multimedia Grand Challenge 2011, ACM Multimedia 2013\/2014 Grand Challenge Multimodal Award, etc. Prof. Hsu is keen on realizing advanced research as business deliverables via academia-industry collaborations and co-founded startups. He was a Visiting Scientist at Microsoft Research Redmond (2014) and spent a one-year sabbatical (2016-2017) at IBM TJ Watson Research Center. 
He served as the Associate Editor for IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) and IEEE Transactions on Multimedia, two premier journals, and was on the Editorial Board for IEEE Multimedia Magazine (2010 \u2013 2017).<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Sung-Ju-Hwang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Sung Ju Hwang<\/strong><\/p>\n<p>KAIST<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-368\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-368\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-367\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-367\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-368\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Sung Ju Hwang is an assistant professor in the Graduate School of Artificial Intelligence and School of Computing at KAIST. He received his Ph.D. degree in computer science from the University of Texas at Austin, under the supervision of Professor Kristen Grauman. Sung Ju Hwang&#8217;s research mainly focuses on developing machine learning models for tackling practical challenges in various application domains, including but not limited to visual recognition, natural language understanding, healthcare, and finance. 
He regularly presents papers at various top-tier AI conferences, such as NeurIPS, ICML, ICLR, CVPR, ICCV, AAAI and ACL.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Guolin-Ke.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Guolin Ke<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-370\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-370\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-369\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-369\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-370\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Guolin Ke is currently a Researcher in the Machine Learning Group at Microsoft Research Asia. His research interests mainly lie in machine learning algorithms.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Gunhee-Kim.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Gunhee Kim<\/strong><\/p>\n<p>Seoul National University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-372\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-372\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-371\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-371\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-372\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Gunhee Kim has been an associate professor in the Department of Computer Science and Engineering of Seoul National University since 2015. Before that, he was a postdoctoral researcher at Disney Research for one and a half years.
He received his Ph.D. in 2013 under the supervision of Eric P. Xing from the Computer Science Department of Carnegie Mellon University. Prior to starting his Ph.D. studies in 2009, he earned a master\u2019s degree under the supervision of Martial Hebert at the Robotics Institute, CMU. His research focuses on solving computer vision and web mining problems that emerge from big image data shared online, by developing scalable and effective machine learning and optimization techniques. He is a recipient of the 2014 ACM SIGKDD Doctoral Dissertation Award and the 2015 Naver New Faculty Award.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2016\/07\/avatar_user__1469100866-180x180.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Shujie Liu<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link
m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-374\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-374\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-373\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-373\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-374\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr. Shujie Liu is a Principal Researcher in the Natural Language Computing group at Microsoft Research Asia, Beijing, China. Shujie joined MSRA-NLC in Jul. 2012 after he received his Ph.D. in Jun. 2012 from the Department of Computer Science at Harbin Institute of Technology.<\/p>\n<p>Shujie\u2019s research interests include natural language processing and deep learning.
He is now working on fundamental NLP problems, models, algorithms and innovations.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Xuanzhe-Liu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Xuanzhe Liu<\/strong><\/p>\n<p>Peking University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul 
class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-376\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-376\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-375\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-375\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-376\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Prof. Xuanzhe Liu has been an associate professor with the Institute of Software, Peking University, since 2011. He leads the DAAS (Data, Analytics, Applications, and Systems) lab at Peking University. Prof. Liu\u2019s recent research interests focus on measuring, engineering, and operating large-scale service-based and intelligent software systems (such as mobility and the Web), mostly from a data-driven perspective. Prof. Liu has published more than 80 papers at premier conferences such as WWW, ICSE, OOPSLA, MobiCom, UbiComp, EuroSys, and IMC, and in impactful journals such as ACM TOIS\/TOIT and IEEE TSE\/TMC\/TSC. He won the Best Paper Award of WWW 2019. He has also been recognized with several academic awards, including the CCF-IEEE CS Young Scientist Award and the Honorable Young Faculty Award of the Yangtze River Scholar Program. Prof. Liu was a visiting researcher with Microsoft Research (through the &#8220;Star-Track Young Faculty Program&#8221;) from 2013 to 2014, and the winner of a Microsoft Ph.D.
Fellowship in 2007.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jiwen-Lu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Jiwen Lu<\/strong><\/p>\n<p>Tsinghua University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" 
data-wp-context='{\"id\":\"accordion-content-378\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-378\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-377\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-377\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-378\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Jiwen Lu is currently an Associate Professor with the Department of Automation, Tsinghua University, China. His current research interests include computer vision, machine learning, and intelligent robotics. He has authored\/co-authored over 200 scientific papers in these areas, of which over 70 are IEEE Transactions papers and over 50 are CVPR\/ICCV\/ECCV papers. He was a recipient of the National 1000 Young Talents Program of China in 2015, and the National Science Fund of China Award for Excellent Young Scholars in 2018. He serves as the Co-Editor-in-Chief of PR Letters and an Associate Editor of T-IP\/T-CSVT\/T-BIOM\/PR.
He is the Program Co-Chair of ICME\u20192020, AVSS\u20192020 and DICTA\u20192019, and an Area Chair for CVPR\u20192020, ICME\u20192017-2019, ICIP\u20192017-2019, and ICPR 2018.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Chong-Luo.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Chong Luo<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-380\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-380\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-379\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-379\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-380\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p> Dr. Chong Luo joined Microsoft Research Asia in 2003 and is now a Principal Researcher at the Intelligent Multimedia Group (IMG). She is an adjunct professor and a Ph.D. advisor at the University of Science and Technology of China (USTC), China. Her current research interests include computer vision, cross-modality multimedia analysis and processing, and multimedia communications. In particular, she is interested in visual object tracking, audio-visual and text-visual video analysis, and hybrid digital-analog transmission of wireless video. She is currently a member of the Multimedia Systems and Applications (MSA) Technical Committee (TC) of the IEEE Circuits and Systems (CAS) society. 
She is an IEEE senior member.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Sinno-Jialin-Pan.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Sinno Jialin Pan<\/strong><\/p>\n<p>Nanyang Technological University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" 
data-wp-context='{\"id\":\"accordion-content-382\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-382\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-381\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-381\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-382\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr. Sinno Jialin Pan is a Provost&#8217;s Chair Associate Professor with the School of Computer Science and Engineering, and Deputy Director of the Data Science and AI Research Centre at Nanyang Technological University (NTU), Singapore. He received his Ph.D. degree in computer science from the Hong Kong University of Science and Technology (HKUST) in 2011. Prior to joining NTU, he was a scientist and Lab Head of text analytics with the Data Analytics Department, Institute for Infocomm Research, Singapore from Nov. 2010 to Nov. 2014. He joined NTU as a Nanyang Assistant Professor (a university-named assistant professorship) in Nov. 2014. He was named to &#8220;AI 10 to Watch&#8221; by the IEEE Intelligent Systems magazine in 2018.
His research interests include transfer learning, and its applications to wireless-sensor-based data mining, text mining, sentiment analysis, and software engineering.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2018\/03\/Xu-Tan-Profile-Photo-360-x-360.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Xu Tan<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-384\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-384\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-383\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-383\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-384\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Xu Tan is currently a Senior Researcher in the Machine Learning Group at Microsoft Research Asia (MSRA). He graduated from Zhejiang University in March 2015.
His research interests mainly lie in machine learning, deep learning, low-resource learning, and their applications to natural language processing and speech processing, including neural machine translation and text-to-speech.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Chuan-Wu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Chuan Wu<\/strong><\/p>\n<p>University of Hong Kong<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-386\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-386\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-385\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-385\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-386\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Chuan Wu received her B.Engr. and M.Engr. degrees in 2000 and 2002 from the Department of Computer Science and Technology, Tsinghua University, China, and her Ph.D. degree in 2008 from the Department of Electrical and Computer Engineering, University of Toronto, Canada. Between 2002 and 2004, she worked in the information technology industry in Singapore. Since September 2008, Chuan Wu has been with the Department of Computer Science at the University of Hong Kong, where she is currently an Associate Professor. Her current research is in the areas of cloud computing, distributed machine learning\/big data analytics systems, and smart elderly care technologies\/systems.
She is a senior member of IEEE, a member of ACM, and an associate editor of IEEE Transactions on Cloud Computing, IEEE Transactions on Multimedia, IEEE Transactions on Circuits and Systems for Video Technology and ACM Transactions on Modeling and Performance Evaluation of Computing Systems. She was the co-recipient of the best paper awards of HotPOST 2012 and ACM e-Energy 2016.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Yingce-Xia.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Yingce Xia<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-388\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-388\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-387\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-387\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-388\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>I am currently a researcher in the Machine Learning Group at Microsoft Research Asia. I received my Ph.D. degree from the University of Science and Technology of China in 2018, supervised by Dr. Tie-Yan Liu and Prof. Nenghai Yu.
Prior to that, I obtained my bachelor\u2019s degree from the University of Science and Technology of China in 2013.<\/p>\n<p>My research revolves around dual learning (a new learning paradigm proposed by our group) and deep learning (with applications to neural machine translation and image processing).<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2016\/09\/avatar_user__1474853894-180x180.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Dongdong Zhang<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-390\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-390\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-389\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-389\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-390\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr. Dongdong Zhang is a researcher in the Natural Language Computing Group at Microsoft Research Asia, Beijing, China. He received his Ph.D. in December 2005 from the Department of Computer Science at Harbin Institute of Technology under the supervision of Prof. Jianzhong Li. Before that, he received his B.S. and M.S. degrees from the same department in 1999 and 2001, respectively.<\/p>\n<p>Dongdong\u2019s research interests include natural language processing, machine translation, and machine learning. 
He is now working on the research and development of advanced statistical machine translation (SMT) systems, as well as related fundamental NLP problems, models, algorithms, and innovations.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Quanlu-Zhang.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Quanlu Zhang<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-392\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-392\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-391\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-391\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-392\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Quanlu Zhang is a senior researcher at MSRA. He obtained his PhD in computer science from Peking University. His current focus is on AutoML systems, GPU cluster management, resource scheduling, and storage support for deep learning workloads. 
His work has been published at conferences such as OSDI, SoCC, and FAST.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<h2>Breakout Sessions<\/h2>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Rajesh-Krishna-Balan.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Rajesh Krishna Balan<\/strong><\/p>\n<p>Singapore Management University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-394\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-394\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-393\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-393\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-394\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Prof. Balan is an ACM Distinguished Scientist and has worked in the area of mobile systems for over 18 years. He obtained his Ph.D. in Computer Science in 2006 from Carnegie Mellon University under the guidance of Professor Mahadev Satyanarayanan. He has been a general chair for both MobiSys 2016 and UbiComp 2018 and has served as a program chair for HotMobile 2012 and MobiSys 2019. In addition, he organised a student workshop, called ASSET, that ran at MobiCom 2019, COMSNETS 2018, and MobiSys 2016. Prof. Balan has a strong interest in applied research and was a director for LiveLabs (http:\/\/www.livelabs.smu.edu.sg), a large research \/ startup lab that turned real-world environments (such as a university, a convention centre, and a resort island) into living testbeds for mobile systems experiments. He founded a startup to more effectively provide LiveLabs technologies to interested commercial clients. 
These experiences have given Prof. Balan great insight into how hard, and how meaningful, it is to translate research into tangible systems that are tested and deployed in the real world.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/lei-chen.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Lei Chen<\/strong><\/p>\n<p>Hong Kong University of Science and Technology<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-396\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-396\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-395\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-395\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-396\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Lei Chen received his BS degree in computer science and engineering from Tianjin University, Tianjin, China, his MA degree from the Asian Institute of Technology, Bangkok, Thailand, and his Ph.D. in computer science from the University of Waterloo, Canada. He is a professor in the Department of Computer Science and Engineering, Hong Kong University of Science and Technology (HKUST). Currently, Prof. Chen serves as the director of the Big Data Institute at HKUST, the director of the Master of Science program in Big Data Technology, and the director of the HKUST MOE\/MSRA Information Technology Key Laboratory. Prof. Chen\u2019s research includes human-powered machine learning, crowdsourcing, blockchain, social media analysis, probabilistic and uncertain databases, and privacy-preserved data publishing. 
Prof. Chen received the SIGMOD Test-of-Time Award in 2015. The system developed by Prof. Chen\u2019s team won the Excellent Demonstration Award at VLDB 2014. Currently, Prof. Chen serves as Editor-in-Chief of the VLDB Journal, Associate Editor-in-Chief of IEEE Transactions on Knowledge and Data Engineering, and Program Committee Co-Chair for VLDB 2019. He is an ACM Distinguished Member and an IEEE Senior Member.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Wen-Huang-Cheng.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Wen-Huang Cheng<\/strong><\/p>\n<p>National Chiao Tung University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-398\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-398\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-397\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-397\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-398\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Wen-Huang Cheng is a Professor in the Institute of Electronics, National Chiao Tung University (NCTU), Hsinchu, Taiwan, where he is the Founding Director of the Artificial Intelligence and Multimedia Laboratory (AIMMLab). Before joining NCTU, he led the Multimedia Computing Research Group at the Research Center for Information Technology Innovation (CITI), Academia Sinica, Taipei, Taiwan, from 2010 to 2018. His current research interests include multimedia, artificial intelligence, computer vision, machine learning, social media, and financial technology. 
He has actively participated in international events and has played important leadership roles in prestigious journals, conferences, and professional organizations, including Associate Editor for IEEE MultiMedia, General Co-Chair for ACM ICMR (2021), TPC Co-Chair for ICME (2020), Chair-Elect for the IEEE MSA-TC, and governing board member for IAPR. He has received numerous research and service awards, including the 2018 MSRA Collaborative Research Award, the 2017 Ta-Yu Wu Memorial Award from Taiwan\u2019s Ministry of Science and Technology (the highest national research honor for young Taiwanese researchers under age 42), the Top 10% Paper Award from the 2015 IEEE MMSP, the K. T. Li Young Researcher Award from the ACM Taipei\/Taiwan Chapter in 2014, the 2017 Significant Research Achievements of Academia Sinica, the 2016 Y. Z. Hsu Scientific Paper Award, the Outstanding Youth Electrical Engineer Award from the Chinese Institute of Electrical Engineering in 2015, and the Outstanding Reviewer Award of the 2018 IEEE ICME.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Minsu-Cho.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Minsu Cho<\/strong><\/p>\n<p>Pohang University of Science and Technology (POSTECH)<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-400\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-400\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-399\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-399\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-400\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Minsu Cho is an assistant professor at the Department of Computer Science and Engineering at POSTECH, South Korea, leading POSTECH Computer Vision Lab. 
Before joining POSTECH in the fall of 2016, he worked as a postdoctoral researcher and a starting researcher at Inria (the French national institute for computer science and applied mathematics) and ENS (\u00c9cole Normale Sup\u00e9rieure) in Paris, France. He completed his Ph.D. in 2012 at Seoul National University, Korea. His research lies in the areas of computer vision and machine learning, especially in the problems of object discovery, weakly-supervised learning, semantic correspondence, and graph matching. In general, he is interested in the relationship between correspondence and supervision in visual learning. He is an editorial board member of the International Journal of Computer Vision (IJCV) and has served as an area chair for top computer vision conferences, including CVPR 2018, ICCV 2019, and CVPR 2020.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Seungmoon-Choi.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Seungmoon Choi<\/strong><\/p>\n<p>Pohang University of Science and Technology (POSTECH)<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-402\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-402\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-401\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-401\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-402\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Seungmoon Choi, PhD, is a Professor of Computer Science and Engineering at POSTECH in Korea. He received the BS and MS degrees from Seoul National University and the PhD degree from Purdue University. 
His main research area is haptics, the science and technology for the sense of touch, as well as its application to various domains including robotics, virtual reality, human-computer interaction, and consumer electronics. He received a 2011 Early Career Award from the IEEE Technical Committee on Haptics.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jaegul-Choo.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Jaegul Choo<\/strong><\/p>\n<p>Korea University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-404\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-404\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-403\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-403\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-404\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Jaegul Choo (https:\/\/sites.google.com\/site\/jaegulchoo\/) is an associate professor in the Dept. of Computer Science and Engineering at Korea University. He was a research scientist at Georgia Tech from 2011 to 2015, where he also received his M.S. in 2009 and Ph.D. in 2013. His research areas include computer vision, natural language processing, data mining, and visual analytics, and his work has been published in premier venues such as KDD, WWW, WSDM, CVPR, ECCV, EMNLP, AAAI, IJCAI, ICDM, ICWSM, IEEE VIS, EuroVis, CHI, TVCG, CGF, and CG&A. 
He earned the Best Student Paper Award at ICDM in 2016, the NAVER Young Faculty Award in 2015, the Outstanding Research Scientist Award at Georgia Tech in 2015, and the Best Poster Award at IEEE VAST (as part of IEEE VIS) in 2014.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Chenhui-Chu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Chenhui Chu<\/strong><\/p>\n<p>Osaka University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-406\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-406\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-405\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-405\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-406\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Chenhui Chu received his B.S. in Software Engineering from Chongqing University in 2008, and his M.S. and Ph.D. in Informatics from Kyoto University in 2012 and 2015, respectively. He is currently a research assistant professor at Osaka University. His research won a 2019 MSRA Collaborative Research grant award, the 2018 AAMT Nagao Award, and the CICLing 2014 Best Student Paper Award. He is on the editorial boards of the Journal of Natural Language Processing and the Journal of Information Processing, and is a steering committee member of the Young Researcher Association for NLP Studies. 
His research interests center on natural language processing, particularly machine translation and language and vision understanding.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jun-Du.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Jun Du<\/strong><\/p>\n<p>University of Science and Technology of China<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-408\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-408\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-407\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-407\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-408\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Jun Du received the B.Eng. and Ph.D. degrees from the Department of Electronic Engineering and Information Science, University of Science and Technology of China (USTC), in 2004 and 2009, respectively. From July 2009 to June 2010, he was with iFlytek Research, leading a team that developed the ASR prototype system for the mobile app \u201ciFlytek Input\u201d. From July 2010 to January 2013, he was with MSRA as an Associate Researcher, working on handwriting recognition, OCR, and speech recognition. Since February 2013, he has been with the National Engineering Laboratory for Speech and Language Information Processing (NEL-SLIP), USTC. His main research interests include speech signal processing and pattern recognition applications. He has published more than 100 conference and journal papers, with more than 2,300 citations on Google Scholar. His team is one of the pioneers in the area of deep-learning-based speech enhancement, and has published two ESI highly cited papers. 
His IEEE-ACM TASLP paper \u201cA Regression Approach to Speech Enhancement Based on Deep Neural Networks\u201d, on which he was the corresponding author, received the 2018 IEEE Signal Processing Society Best Paper Award. Building on these speech-enhancement research achievements, he led a joint team of members from USTC and iFlytek Research that won all three tasks of the 2016 CHiME-4 challenge and all four tasks of the 2018 CHiME-5 challenge. He is currently an associate editor of IEEE-ACM TASLP, and one of the organizers of the 2018 and 2019 DIHARD Challenges.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Ryo-Furukawa.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Ryo Furukawa<\/strong><\/p>\n<p>Hiroshima City University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link 
m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-410\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-410\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-409\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-409\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-410\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Ryo Furukawa is an associate professor in the Faculty of Information Sciences at Hiroshima City University, Hiroshima, Japan. He received his Ph.D. from the Nara Institute of Science and Technology, Japan. His research areas include shape capturing, 3D modeling, image-based rendering, and medical image analysis. 
He has won academic awards including the ACCV Songde Ma Outstanding Paper Award (2007), the PSIVT Best Paper Award (2009), the IEVC 2014 Best Paper Award, the IEEE WACV Best Paper Honorable Mention (2017), and the KUKA Best Paper Award, 3rd place, at the MICCAI CARE workshop (2018).<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Yao-Guo.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Yao Guo<\/strong><\/p>\n<p>Peking University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-412\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-412\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-411\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-411\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-412\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Yao Guo is a professor and vice chair of the Department of Computer Science at Peking University. His recent research interests mainly focus on mobile app analysis, as well as the privacy and security of mobile systems. He has received multiple awards for his research and teaching, including the First Prize of the National Technology Invention Award, an Honorable Mention Award from UbiComp 2016, and a Teaching Excellence Award from Peking University. 
He received his PhD in computer engineering from University of Massachusetts, Amherst in 2007, and BS\/MS degrees in computer science from Peking University.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Bohyung-Han.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Bohyung Han<\/strong><\/p>\n<p>Seoul National University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-414\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-414\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-413\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-413\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-414\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Bohyung Han is an Associate Professor in the Department of Electrical and Computer Engineering at Seoul National University, Korea. Prior to the current position, he was an Associate Professor in the Department of Computer Science and Engineering at POSTECH, Korea and a visiting research scientist in Machine Intelligence Group at Google, Venice, CA, USA. He is currently visiting Snap Research, Venice, CA. He received the B.S. and M.S. degrees from Seoul National University, Korea, in 1997 and 2000, respectively, and the Ph.D. in Computer Science at the University of Maryland, College Park, MD, USA, in 2005. He served or will be serving as an Area Chair or Senior Program Committee member of major conferences in computer vision and machine learning including CVPR, ICCV, NIPS\/NeurIPS, IJCAI and ACCV, a Tutorial Chair in ICCV 2019, a General Chair in ACCV 2022, a Demo Chair in ECCV 2022, a Workshop Chair in ACCV 2020, and a Demo Chair in ACCV 2014. 
His research interest is computer vision and machine learning with emphasis on deep learning.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Winston-HSU.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Winston Hsu<\/strong><\/p>\n<p>National Taiwan University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul 
class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-416\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-416\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-415\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-415\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-416\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Prof. Winston Hsu is an active researcher dedicated to large-scale image\/video retrieval\/mining, visual recognition, and machine intelligence. He is a Professor in the Department of Computer Science and Information Engineering, National Taiwan University. He and his team have been recognized with technical awards in the multimedia and computer vision research communities, including the IBM Research Pat Goldberg Memorial Best Paper Award (2018), the Best Brave New Idea Paper Award at ACM Multimedia 2017, First Place in the IARPA Disguised Faces in the Wild Competition (CVPR 2018), First Prize in the ACM Multimedia Grand Challenge 2011, and the ACM Multimedia 2013\/2014 Grand Challenge Multimodal Award. Prof. Hsu is keen on translating advanced research into business deliverables through academia-industry collaborations and co-founded startups. He was a Visiting Scientist at Microsoft Research Redmond (2014) and spent a one-year sabbatical (2016-2017) at the IBM TJ Watson Research Center. 
He served as the Associate Editor for IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) and IEEE Transactions on Multimedia, two premier journals, and was on the Editorial Board for IEEE Multimedia Magazine (2010 \u2013 2017).<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Seung-won-Hwang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Seung-won Hwang<\/strong><\/p>\n<p>Yonsei University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-418\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-418\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-417\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-417\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-418\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Prof. Seung-won Hwang is a Professor of Computer Science at Yonsei University. Prior to joining Yonsei, she was an Associate Professor at POSTECH for 10 years, after receiving her PhD from UIUC. Her recent research interests have been machine intelligence from data, language, and knowledge, leading to 100+ publications at top-tier AI, DB\/DM, and NLP venues, including ACL, AAAI, EMNLP, IJCAI, KDD, SIGIR, SIGMOD, and VLDB. She has received a best paper runner-up award from WSDM and an outstanding collaboration award from Microsoft Research. 
Details can be found at http:\/\/dilab.yonsei.ac.kr\/~swhwang.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Hong-Gong-Kang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Hong-Goo Kang<\/strong><\/p>\n<p>Yonsei University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul 
class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-420\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-420\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-419\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-419\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-420\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Hong-Goo Kang received the B.S., M.S., and Ph.D. degrees from Yonsei University, Korea in 1989, 1991, and 1995, respectively. From 1996 to 2002, he was a senior technical staff member at AT&T Labs-Research, Florham Park, New Jersey. He was an associate editor of the IEEE Transactions on Audio, Speech, and Language Processing from 2005 to 2008, and has served on numerous conference and program committees. In 2008~2009 and 2015~2016, he worked at Broadcom (Irvine, CA) and Google (Mountain View, CA), respectively, as a visiting scholar, participating in various projects on speech signal processing. 
His research interests include speech\/audio signal processing, machine learning, and human computer interface.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Gunhee-Kim.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Gunhee Kim<\/strong><\/p>\n<p>Seoul National University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-422\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-422\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-421\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-421\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-422\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Gunhee Kim has been an associate professor in the Department of Computer Science and Engineering of Seoul National University since 2015. Before that, he was a postdoctoral researcher at Disney Research for one and a half years. He received his PhD in 2013 under the supervision of Eric P. Xing from the Computer Science Department of Carnegie Mellon University. Prior to starting his PhD study in 2009, he earned a master\u2019s degree under the supervision of Martial Hebert at the Robotics Institute, CMU. His research interest is in solving computer vision and web mining problems that emerge from big image data shared online, by developing scalable and effective machine learning and optimization techniques. 
He is a recipient of the 2014 ACM SIGKDD Doctoral Dissertation Award and the 2015 Naver New Faculty Award.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jong-Kim.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Jong Kim<\/strong><\/p>\n<p>Pohang University of Science and Technology (POSTECH)<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-424\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-424\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-423\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-423\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-424\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Jong Kim is a professor in the Department of Computer Science and Engineering at Pohang University of Science and Technology (POSTECH). He received his Ph.D. degree from Penn State University in 1991. From 1991 to 1992, he worked at the University of Michigan as a Research Fellow. His research interests include dependable computing, hardware security, mobile security, and machine learning security. He has published papers at top security and systems conferences, including S&P, NDSS, CCS, WWW, Micro, and RTSS.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Min-H.-Kim.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Min H. 
Kim<\/strong><\/p>\n<p>KAIST<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-426\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-426\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-425\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-425\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-426\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Min H. Kim is a KAIST-Endowed Chair Professor of Computer Science at KAIST, Korea, leading the Visual Computing Laboratory (VCLAB). Before coming to KAIST, he had been a postdoctoral researcher at Yale University, working on hyperspectral 3D imaging. He received his Ph.D. in computer science from University College London (UCL) in 2010, with a focus on HDR color reproduction for high-fidelity computer graphics. In addition to serving on international program committees, e.g., ACM SIGGRAPH Asia, Eurographics (EG), Pacific Graphics (PG), CVPR, and ICCV, he has worked as an associate editor of ACM Transactions on Graphics (TOG), ACM Transactions on Applied Perception (TAP), and Elsevier Computers and Graphics (CAG). 
His recent research interests span computational imaging, including computational photography, hyperspectral imaging, BRDF acquisition, and 3D imaging.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Heejo-Lee.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Heejo Lee<\/strong><\/p>\n<p>Korea University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-428\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-428\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-427\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-427\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-428\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Heejo Lee is a Professor in the Department of Computer Science and Engineering, Korea University (KU), Seoul, Korea, and the director of CSSA (Center for Software Security and Assurance). Before joining KU, he was CTO of AhnLab, Inc., the leading security company in Korea, from 2001 to 2003. He received his BS, MS, and PhD from POSTECH, and worked at Purdue University and CMU.
He is a recipient of the (ISC)² ISLA Award and, in 2016, received its most prestigious recognition, the Asia-Pacific Community Service Star.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Seong-Whan-Lee.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Seong-Whan Lee<\/strong><\/p>\n<p>Korea University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-430\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-430\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-429\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-429\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-430\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Seong-Whan Lee is a full professor at Korea University, where he is the head of the Department of Artificial Intelligence and the Department of Brain and Cognitive Engineering.<\/p>\n<p>A Fellow of the IAPR (1998), IEEE (2009), and the Korean Academy of Science and Technology (2009), he has served several professional societies as chairman or governing board member. He was the founding Co-Editor-in-Chief of the International Journal of Document Analysis and Recognition and has been an Associate Editor of several international journals: Pattern Recognition, ACM Trans. on Applied Perception, IEEE Trans.
on Affective Computing, Image and Vision Computing, International Journal of Pattern Recognition and Artificial Intelligence, and International Journal of Image and Graphics.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Seung-Ah-Lee.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Seung Ah Lee<\/strong><\/p>\n<p>Yonsei University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-432\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-432\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-431\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-431\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-432\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Seung Ah Lee is an assistant professor at the Department of Electrical and Electronic Engineering at Yonsei University. Seung Ah joined Yonsei University in Fall 2018, currently leading the Optical Imaging Systems Laboratory. Prior to Yonsei, she was at Verily Life Sciences, a former Google [x] team, between 2015-2018 as a scientist. She received her PhD in Electrical Engineering at Caltech (2014) and a postdoctoral training at Stanford Bioengineering (2014-2015). 
She completed her BS (2007) and MS (2009) degree in Electrical Engineering at Seoul National University.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Seungyong-Lee.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Seungyong Lee<\/strong><\/p>\n<p>Pohang University of Science and Technology (POSTECH)<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-434\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-434\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-433\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-433\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-434\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Seungyong Lee is a professor of computer science and engineering at Pohang University of Science and Technology (POSTECH), Korea. He received a PhD degree in computer science from Korea Advanced Institute of Science and Technology (KAIST) in 1995. From 1995 to 1996, he worked at City College of New York as a postdoctoral researcher. Since 1996, he has been a faculty member of POSTECH, where he leads the Computer Graphics Group. During his sabbatical years, he worked at MPI Informatik (2003-2004) and the Creative Technologies Lab at Adobe Systems (2010-2011). His technologies for image deblurring and photo upright adjustment have been transferred to Adobe Creative Cloud and Adobe Photoshop Lightroom.
His current research interests include image and video processing, deep learning based computational photography, and 3D scene reconstruction.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jingwen-Leng.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Jingwen Leng<\/strong><\/p>\n<p>Shanghai Jiao Tong University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-436\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-436\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-435\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-435\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-436\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Jingwen Leng is an Assistant Professor in the John Hopcroft Computer Science Center and Computer Science & Engineering Department at Shanghai Jiao Tong University. His research focuses on building efficient and resilient architectures for deep learning. He received his Ph.D. 
from the University of Texas at Austin, where he worked on improving the efficiency and resiliency of general-purpose GPUs.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Cheng-Li.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Cheng Li<\/strong><\/p>\n<p>University of Science and Technology of China<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-438\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-438\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-437\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-437\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-438\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Cheng Li is a research professor at the School of Computer Science and Technology, University of Science and Technology of China (USTC). His research interests lie in various topics related to improving the performance, consistency, fault tolerance, and availability of distributed systems. Prior to joining USTC, he was an associate researcher at INESC-ID, Portugal, and a senior member of technical staff at Oracle Labs Switzerland. He received his PhD degree from the Max Planck Institute for Software Systems (MPI-SWS) in 2016, and his bachelor's degree from Nankai University in 2009. His work has been published in premier peer-reviewed systems research venues such as OSDI, USENIX ATC, EuroSys, and TPDS. He is a member of the ACM Future of Computing Academy.
He was a co-chair of the program committee for the ACM SOSP 2017 Poster Session and of the ACM TURC 2018 SIGOPS\/ChinaSys workshop.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Shou-De-Lin.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Shou-De Lin<\/strong><\/p>\n<p>National Taiwan University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-440\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-440\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-439\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-439\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-440\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Shou-de Lin is currently a full professor in the CSIE department of National Taiwan University. He holds a BS degree in Electrical Engineering from National Taiwan University, an MS-EE degree from the University of Michigan, and an MS degree in Computational Linguistics and a PhD in Computer Science, both from the University of Southern California. He leads the Machine Discovery and Social Network Mining Lab at NTU. Before joining NTU, he was a post-doctoral research fellow at the Los Alamos National Lab. Prof. Lin&#8217;s research covers machine learning and data mining, social network analysis, and natural language processing. His international recognition includes the best paper award at the IEEE Web Intelligence Conference 2003, a Google Research Award in 2007, Microsoft research awards in 2008, 2015, and 2016, merit paper awards at TAAI 2010, 2014, and 2016, the best paper award at ASONAM 2011, and US Aerospace AFOSR\/AOARD research awards for 5 years. He is an all-time winner of the ACM KDD Cup, leading or co-leading the NTU team to 5 championships. He also led a team to win the WSDM Cup in 2016.
He has served as a senior PC member for SIGKDD and an area chair for ACL. He is also a co-founder and the chief scientist of the start-up The OmniEyes.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jiaying-Liu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Jiaying Liu<\/strong><\/p>\n<p>Peking University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-442\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-442\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-441\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-441\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-442\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Jiaying Liu is currently an Associate Professor with the Institute of Computer Science and Technology, Peking University. She received the Ph.D. degree (Hons.) in computer science from Peking University, Beijing, China, in 2010. She has authored over 100 technical articles in refereed journals and proceedings, and holds 42 granted patents. Her current research interests include multimedia signal processing, compression, and computer vision.<\/p>\n<p>Dr. Liu is a Senior Member of IEEE, CSIG, and CCF. She was a Visiting Scholar with the University of Southern California, Los Angeles, from 2007 to 2008. She was a Visiting Researcher with Microsoft Research Asia in 2015, supported by the Star Track Young Faculties Award. She has served as a member of the Multimedia Systems & Applications Technical Committee (MSA TC), the Visual Signal Processing and Communications Technical Committee (VSPC TC), and the Education and Outreach Technical Committee (EO TC) in the IEEE Circuits and Systems Society, and as a member of the Image, Video, and Multimedia (IVM) Technical Committee in APSIPA.
She has also served as the Technical Program Chair of IEEE VCIP-2019\/ACM ICMR-2021, the Publicity Chair of IEEE ICIP-2019\/VCIP-2018\/MIPR 2020, the Grand Challenge Chair of IEEE ICME-2019, and the Area Chair of ICCV-2019. She was the APSIPA Distinguished Lecturer (2016-2017).<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Shixia-Liu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Shixia Liu<\/strong><\/p>\n<p>Tsinghua University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-444\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-444\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-443\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-443\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-444\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Shixia Liu is a tenured associate professor at Tsinghua University. Her research interests include explainble machine learning, interative data quality improvement, and visual text analytics. Shixia is an associate Editor-in-Chief of IEEE Transactions on Visualization and Computer Graphics, IEEE Transactions on Big Data, and ACM Transactions on Interactive Intelligent Systems . 
She was a papers co-chair of IEEE VAST 2016\/2017 and a program co-chair of PacificVis 2014.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Youyou-Lu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Youyou Lu<\/strong><\/p>\n<p>Tsinghua University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul 
class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-446\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-446\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-445\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-445\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-446\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Youyou Lu is an assistant professor in the Department of Computer Science and Technology at Tsinghua University. He obtained his B.S. degree from Nanjing University in 2009 and his Ph.D. degree from Tsinghua University in 2015, both in Computer Science, and was a postdoctoral fellow at Tsinghua from 2015 to 2017. His current research interests include file and storage systems, spanning from the architectural level to the system level. His work has been published at a number of top-tier conferences, including FAST, USENIX ATC, SC, and EuroSys. His research won the Best Paper Award at NVMSA 2014 and was selected as one of the Best Papers at MSST 2015. 
He was selected for the Young Elite Scientists Sponsorship Program by CAST (China Association for Science and Technology) in 2015 and received the CCF Outstanding Doctoral Dissertation Award in 2016.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Atsuko-Miyaji.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Atsuko Miyaji<\/strong><\/p>\n<p>Osaka University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-448\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-448\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-447\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-447\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-448\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>She received the Dr. Sci. degree in mathematics from Osaka University, Osaka, Japan in 1997. She was with Panasonic Co., Ltd. from 1990 to 1998, and became an associate professor at the Japan Advanced Institute of Science and Technology (JAIST) in 1998. She was at UC Davis from 2002 to 2003. She became a professor at JAIST in 2007, a professor at Osaka University in 2015, and an auditor of the Information-technology Promotion Agency, Japan in 2016. 
She has been an editor of ISO\/IEC since 2000.<\/p>\n<p>She received the Young Paper Award of SCIS&#8217;93 in 1993, the Notable Invention Award of the Science and Technology Agency in 1997, the IPSJ Sakai Special Researcher Award in 2002, the Standardization Contribution Award in 2003, Engineering Sciences Society: Certificate of Appreciation in 2005, the AWARD for the contribution to CULTURE of SECURITY in 2007, IPSJ\/ITSCJ Project Editor Award in 2007, 2008, 2009, 2010, 2012, 2016, and the Director-General of Industrial Science and Technology Policy and Environment Bureau Award in 2007, DoCoMo Mobile Science Awards in 2008, ADMA 2010 Best Paper Award, Prizes for Science and Technology, The Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology, ATIS 2016 Best Paper Award, the IEEE TrustCom 2017 Best Paper Award, and IEICE milestone certification in 2017.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Tadashi-Nomoto.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Tadashi Nomoto<\/strong><\/p>\n<p>The SOKENDAI Graduate School of Advanced Studies<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-450\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-450\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-449\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-449\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-450\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Tadashi Nomoto is currently an associate professor at the Graduate University for Advanced Studies (SOKENDAI), with a joint appointment at the National Institute of Japanese Literature. 
He has been actively engaged in natural language processing and information retrieval for more than a decade, both in academia and in industry. His research interests include computational linguistics, digital libraries, data mining, machine translation, and quantitative media analysis. He has published extensively at major international conferences (such as SIGIR, ACL, ICML, and CIKM). He holds an MA in Linguistics from Sophia University, Japan, and a PhD in Computer Science from the Nara Institute of Science and Technology, also in Japan.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Sinno-Jialin-Pan.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Sinno Jialin Pan<\/strong><\/p>\n<p>Nanyang Technological University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link 
m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-452\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-452\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-451\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-451\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-452\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr Sinno Jialin Pan is a Provost&#8217;s Chair Associate Professor with the School of Computer Science and Engineering, and Deputy Director of the Data Science and AI Research Centre at Nanyang Technological University (NTU), Singapore. He received his Ph.D. degree in computer science from the Hong Kong University of Science and Technology (HKUST) in 2011. Prior to joining NTU, he was a scientist and Lab Head of text analytics with the Data Analytics Department, Institute for Infocomm Research, Singapore from Nov. 2010 to Nov. 2014. He joined NTU as a Nanyang Assistant Professor (university named assistant professor) in Nov. 2014. 
He was named to &#8220;AI 10 to Watch&#8221; by the IEEE Intelligent Systems magazine in 2018. His research interests include transfer learning, and its applications to wireless-sensor-based data mining, text mining, sentiment analysis, and software engineering.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/1998\/02\/asia-slt-tim-pan-1910.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Tim Pan<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-454\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-454\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-453\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-453\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-454\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr. Tim Pan is the senior director of Outreach of Microsoft Research Asia, responsible for the lab\u2019s academic collaboration in the Asia-Pacific region. 
He establishes strategies and directions, identifies business opportunities, and designs various programs and projects that strengthen the partnership between Microsoft Research and academia.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Xueming-Qian.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Xueming Qian<\/strong><\/p>\n<p>Xi&#8217;an Jiaotong University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-456\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-456\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-455\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-455\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-456\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Xueming Qian received the B.S. and M.S. degrees from Xi&#8217;an University of Technology, Xi&#8217;an, China, in 1999 and 2004, respectively, and the Ph.D. degree from the School of Electronics and Information Engineering, Xi&#8217;an Jiaotong University, Xi&#8217;an, China, in 2008. He was awarded a Microsoft Fellowship in 2006, and the outstanding doctoral dissertation awards of Xi&#8217;an Jiaotong University and Shaanxi Province in 2010 and 2011, respectively. He is the director of SMILES LAB. He was a visiting scholar at Microsoft Research Asia from August 2010 to March 2011. 
His research interests include social and mobile multimedia mining, learning, and search.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Huamin-Qu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Huamin Qu<\/strong><\/p>\n<p>Hong Kong University of Science and Technology<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-458\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-458\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-457\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-457\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-458\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Huamin Qu is a full professor in the Department of Computer Science and Engineering (CSE) at the Hong Kong University of Science and Technology (HKUST). His main research interests are in data visualization and human-computer interaction, with a focus on explainable AI, urban informatics, social media analysis, E-learning, and text visualization. He has served as a papers co-chair for IEEE VIS\u201914, VIS\u201915, and VIS\u201918, and as an associate editor of IEEE Transactions on Visualization and Computer Graphics (TVCG). 
He received a BS in Mathematics from Xi\u2019an Jiaotong University and a PhD in Computer Science from Stony Brook University.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Junichi-Rekimoto.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Junichi Rekimoto<\/strong><\/p>\n<p>The University of Tokyo<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-460\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-460\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-459\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-459\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-460\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Jun Rekimoto received his B.A.Sc., M.Sc., and Ph.D. in Information Science from Tokyo Institute of Technology in 1984, 1986, and 1996, respectively. From 1986 to 1994, he worked for the Software Laboratory of NEC. During 1992-1993, he worked in the Computer Graphics Laboratory at the University of Alberta, Canada, as a visiting scientist. Since 1994 he has worked for Sony Computer Science Laboratories (Sony CSL). In 1999 he formed, and has since directed, the Interaction Laboratory within Sony CSL.<\/p>\n<p>Rekimoto&#8217;s research interests include computer augmented environments, mobile\/wearable computing, virtual reality, and information visualization. He has authored dozens of refereed publications in the area of human-computer interaction, including at ACM CHI and UIST. One of his publications was recognized with the 30th commemorative papers award from the Information Processing Society Japan (IPSJ) in 1992. He also received the Multi-Media Grand Prix Technology Award from the Multi-Media Contents Association Japan in 1998, the Yamashita Memorial Research Award from IPSJ in 1999, and the Japan Inter-Design Award in 2003. 
In 2007, he was elected to the ACM SIGCHI Academy.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Insik-Shin.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Insik Shin<\/strong><\/p>\n<p>KAIST<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" 
data-wp-context='{\"id\":\"accordion-content-462\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-462\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-461\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-461\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-462\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Insik Shin is a professor in the School of Computing and the Chief Professor of the Graduate School of Information Security at KAIST, Korea. He received a Ph.D. degree from the University of Pennsylvania. His research interests include real-time embedded systems, systems security, mobile computing, and cyber-physical systems. He serves on program committees of top international conferences, including RTSS, RTAS, and ECRTS. 
He is a recipient of several best (student) paper awards, including at MobiCom \u201919, RTSS \u201912, RTAS \u201912, and RTSS \u201903, as well as the KAIST Excellence Award and the Naver Young Faculty Award.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jun-Takamatsu.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Jun Takamatsu<\/strong><\/p>\n<p>Nara Institute of Science and Technology<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-464\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-464\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-463\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-463\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-464\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Jun Takamatsu received a Ph.D. degree in Computer Science from the University of Tokyo, Japan, in 2004. From 2004 to 2008, he was with the Institute of Industrial Science, the University of Tokyo. In 2007, he was a visiting researcher at Microsoft Research Asia. Since 2008, he has been an associate professor at Nara Institute of Science and Technology, Japan. He was also a visitor at Carnegie Mellon University in 2012 and 2013 and a visiting scientist at Microsoft in 2018. 
His research interests are in robotics including learning-from-observation, task\/motion planning, and feasible motion analysis, 3D shape modeling and analysis, and physics-based vision.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Mingkui-Tan.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Mingkui Tan<\/strong><\/p>\n<p>South China University of Technology<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-466\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-466\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-465\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-465\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-466\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr. Mingkui Tan is currently a professor with the School of Software Engineering at South China University of Technology, China. He received his Bachelor Degree in Environmental Science and Engineering in 2006 and Master degree in Control Science and Engineering in 2009, both from Hunan University in Changsha, China. He received the PhD degree in Computer Science from Nanyang Technological University, Singapore, in 2014. From 2014-2016, he worked as a Senior Research Associate on machine learning and computer vision in the School of Computer Science, University of Adelaide, Australia. His research interests include machine learning, sparse analysis, deep learning and large-scale optimization. 
He has published about 70 research papers in top-tier conferences such as NeurIPS, ICML and KDD and international peer-reviewed journals such as TNNLS, JMLR and TIP.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2016\/03\/avatar_user__1459357947-177x180.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Xin Tong<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-468\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-468\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-467\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-467\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-468\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>I am now a principal researcher in the Internet Graphics Group of Microsoft Research Asia. I obtained my Ph.D. degree in Computer Graphics from Tsinghua University in 1999; my Ph.D. thesis was on hardware-assisted volume rendering. I received my B.S. and M.S. degrees in Computer Science from Zhejiang University in 1993 and 1996, respectively.<\/p>\n<p>My research interests include appearance modeling and rendering, texture synthesis, and image-based modeling and rendering. Specifically, my research concentrates on studying the underlying principles of material-light interaction and light transport, and on developing efficient methods for appearance modeling and rendering. 
I am also interested in performance capturing and facial animation.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Hongzhi-Wang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Hongzhi Wang<\/strong><\/p>\n<p>Harbin Institute of Technology<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul 
class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-470\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-470\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-469\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-469\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-470\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Hongzhi Wang is a professor and Ph.D. supervisor at Harbin Institute of Technology, Vice Dean of its Honors School, secretary general of ACM SIGMOD China, a CCF outstanding member, and a member of the CCF databases and big data committee. His research fields include big data management and analysis, databases, and data quality. He was a \u201cstarring track\u201d visiting professor at MSRA. He has been PI of more than 10 projects, including an NSFC key project and other NSFC projects, and he serves as a member of the ACM Data Science Task Force. His publications include over 200 papers, among them VLDB, SIGMOD, and SIGIR papers, as well as 4 books. His papers have been cited more than 1000 times. 
His personal website is http:\/\/homepage.hit.edu.cn\/wang.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Liwei-Wang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Liwei Wang<\/strong><\/p>\n<p>Peking University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li 
class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-472\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-472\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-471\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-471\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-472\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Liwei Wang is a professor in the School of Electronics Engineering and Computer Science, Peking University, a researcher at the Beijing Institute of Big Data Research, and an adjunct professor in the Institute for Interdisciplinary Information Science, Tsinghua University. He was recognized by IEEE Intelligent Systems as one of AI\u2019s 10 to Watch in 2010, the first Asian scholar to receive the award since its establishment. He received the NSFC excellent young researcher grant in 2012. 
He was also supported by the Program for New Century Excellent Talents in University of the Ministry of Education.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Hiroki-Watanabe.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Hiroki Watanabe<\/strong><\/p>\n<p>Hokkaido University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-474\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-474\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-473\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-473\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-474\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Hiroki Watanabe is an assistant professor at the Graduate School of Information Science and Technology, Hokkaido University, Japan. He received his B.Eng., M.Eng., and Ph.D. degrees from Kobe University in 2012, 2014, and 2017, respectively. 
He is working on wearable computing and ubiquitous computing.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Yonggang-Wen.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Yonggang Wen<\/strong><\/p>\n<p>Nanyang Technological University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul 
class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-476\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-476\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-475\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-475\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-476\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr. Yonggang Wen is a Professor in the School of Computer Science and Engineering (SCSE) at Nanyang Technological University (NTU), Singapore. He also serves as the Associate Dean (Research) at the College of Engineering, and the Director of the Nanyang Technopreneurship Centre at NTU. He received his PhD degree in Electrical Engineering and Computer Science (minor in Western Literature) from Massachusetts Institute of Technology (MIT), Cambridge, USA, in 2007.<\/p>\n<p>Dr. Wen has worked extensively in learning-based system prototyping and performance optimization for large-scale networked computer systems. In particular, his work on Multi-Screen Cloud Social TV has been featured by global media (more than 1600 news articles from over 29 countries) and received the 2013 ASEAN ICT Award (Gold Medal). His work on Cloud3DView, the only entry from academia, won the 2016 ASEAN ICT Award (Gold Medal) and a 2015 Datacentre Dynamics Award \u2013 APAC (the \u2018Oscar\u2019 of the data centre industry). 
He is a co-recipient of 2015 IEEE Multimedia Best Paper Award, and a co-recipient of Best Paper Awards at 2016 IEEE Globecom, 2016 IEEE Infocom MuSIC Workshop, 2015 EAI\/ICST Chinacom, 2014 IEEE WCSP, 2013 IEEE Globecom and 2012 IEEE EUC. He was the sole winner of 2016 Nanyang Awards in Entrepreneurship and Innovation at NTU, and received 2016 IEEE ComSoc MMTC Distinguished Leadership Award. He serves on editorial boards for ACM Transactions Multimedia Computing, Communications and Applications, IEEE Transactions on Circuits and Systems for Video Technology, IEEE Wireless Communication Magazine, IEEE Communications Survey & Tutorials, IEEE Transactions on Multimedia, IEEE Transactions on Signal and Information Processing over Networks, IEEE Access Journal and Elsevier Ad Hoc Networks, and was elected as the Chair for IEEE ComSoc Multimedia Communication Technical Committee (2014-2016). His research interests include cloud computing, blockchain, green data centre, distributed machine learning, big data analytics, multimedia network and mobile computing.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Wenfei-Wu-New.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Wenfei Wu<\/strong><\/p>\n<p>Tsinghua University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-478\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-478\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-477\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-477\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-478\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Wenfei Wu is an assistant professor in the Institute for Interdisciplinary Information Sciences (IIIS) at Tsinghua University. Wenfei Wu obtained his Ph.D. from the CS department at the University of Wisconsin-Madison in 2015. 
Dr. Wu&#8217;s research interests are in networked systems, including architecture design, data plane optimization, and network management optimization. He was awarded the best student paper in SoCC&#8217;13. Currently, Dr. Wu is working on model-centric DevOps for network functions, in-network computation for distributed systems (including distributed neural networks and big data systems), and secure network protocol design.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Yingcai-Wu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Yingcai Wu<\/strong><\/p>\n<p>Zhejiang University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-480\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-480\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-479\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-479\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-480\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Yingcai Wu is a National Youth-1000 scholar and a ZJU100 Young Professor at the State Key Lab of CAD & CG, College of Computer Science and Technology, Zhejiang University. He obtained his Ph.D. degree in Computer Science from the Hong Kong University of Science and Technology (HKUST). Prior to his current position, he was a researcher at Microsoft Research Asia, Beijing, China, from 2012 to 2015, and a postdoctoral researcher at the University of California, Davis, from 2010 to 2012. He was a paper co-chair of IEEE Pacific Visualization 2017 and ChinaVis 2016-2017. His main research interests are in visual analytics and human-computer interaction, focusing on sports analytics, urban computing, and social media analysis. 
He has published more than 50 refereed papers, including 25 in IEEE Transactions on Visualization and Computer Graphics (TVCG). Three of his papers received Honorable Mentions, at IEEE VIS (SciVis) 2009, IEEE VIS (VAST) 2014, and IEEE PacificVis 2016. For more information, visit www.ycwu.org.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Hiroaki-Yamane.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Hiroaki Yamane<\/strong><\/p>\n<p>RIKEN AIP & The University of Tokyo<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-482\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-482\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-481\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-481\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-482\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Hiroaki Yamane is a post-doctoral researcher at RIKEN AIP and a visiting researcher at the University of Tokyo. He completed his PhD at Keio University, where he proposed slogan-generating systems. After his PhD, he worked on brain decoding, and he is currently building machine intelligence for medical engineering at RIKEN AIP. Because he has a strong interest in human intelligence, sensitivity, and health, his research interests include word embeddings for commonsense knowledge, sentiment analysis, sentence generation, and domain adaptation. 
He is more broadly interested in multidisciplinary areas natural language processing, computer vision, cognitive & neuroscience, and AI applications to medical.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Rui-Yan.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Rui Yan<\/strong><\/p>\n<p>Peking University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-484\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-484\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-483\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-483\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-484\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr. Rui Yan is an assistant professor at Peking University and an adjunct professor at Central China Normal University and the Central University of Finance and Economics, and was previously a Senior Researcher at Baidu Inc. He has investigated several open-domain conversational systems as well as dialogue systems in vertical domains. To date, he has published more than 100 peer-reviewed papers in highly competitive venues. 
He serves as a (senior) program committee member at several top-tier venues, including KDD, SIGIR, ACL, WWW, IJCAI, AAAI, CIKM, and EMNLP.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Chuck-Yoo.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Chuck Yoo<\/strong><\/p>\n<p>Korea University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-486\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-486\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-485\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-485\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-486\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Chuck Yoo received his B.S. degree from Seoul National University in 1982, and his M.S. and Ph.D. degrees from the University of Michigan, Ann Arbor, in 1986 and 1990, respectively. From 1990 to 1995, he was with Sun Microsystems, Mountain View, California, working on Sun\u2019s operating systems. In 1995, he joined the computer science department of Korea University, where he served as dean of the College of Informatics for five years, until January 2018.<\/p>\n<p>He has been working on virtualization, starting with a hypervisor for mobile phones and continuing with a virtualized automotive platform, integrated SLAs (service level agreements) for clouds, and network virtualization, including virtual routers and SDN. He hosted Xen Summit in Seoul in 2011 and has served on the program committees of various conferences. 
In addition to producing a substantial body of publications, his research has influenced global industry leaders such as Samsung and LG, inspiring enhancements to their products.<\/p>\n<p>More recently, he has been working with the College of Medicine on precision medicine, and with the College of Law on new and revised legislative bills for the fourth industrial revolution.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Sung-eui-Yoon.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Sung-eui Yoon<\/strong><\/p>\n<p>KAIST<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-488\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-488\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-487\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-487\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-488\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Sung-Eui Yoon is a professor at Korea Advanced Institute of Science and Technology (KAIST). He received the B.S. and M.S. degrees in computer science from Seoul National University in 1999 and 2001, respectively. He received his Ph.D. degree in computer science from the University of North Carolina at Chapel Hill in 2005. He was a postdoctoral scholar at Lawrence Livermore National Laboratory, USA. His research interests include graphics, vision, and robotics. He has published about 100 technical papers, and gave numerous tutorials on ray tracing, collision detection, and image search in premier conferences like ACM SIGGRAPH, IEEE Visualization, CVPR, ICRA, etc. He served as conf. co-chair and paper co-chair for ACM I3D 2012 and 2013 respectively. 
In 2008, he published a monograph on real-time massive model rendering with three co-authors, and in 2018 he published an online book on rendering. Some of his papers received a test-of-time award, a distinguished paper award, and a few invitations to IEEE Transactions on Visualization and Computer Graphics. He is currently a senior member of both IEEE and ACM.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Masatoshi-Yoshikawa.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Masatoshi Yoshikawa<\/strong><\/p>\n<p>Kyoto University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-490\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-490\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-489\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-489\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-490\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Masatoshi Yoshikawa received the B.E., M.E., and Ph.D. degrees from the Department of Information Science, Kyoto University, in 1980, 1982, and 1985, respectively. In 1985, he joined The Institute for Computer Sciences, Kyoto Sangyo University, as an Assistant Professor. From April 1989 to March 1990, he was a Visiting Scientist at the Computer Science Department of the University of Southern California (USC). In 1993, he joined the Nara Institute of Science and Technology as an Associate Professor in the Graduate School of Information Science. From April 1996 to January 1997, he was a Visiting Associate Professor in the Department of Computer Science, University of Waterloo. From June 2002 to March 2006, he served as a professor at Nagoya University. 
Since April 2006, he has been a professor in the Graduate School of Informatics, Kyoto University.<\/p>\n<p>One of his current research topics is the theory and practice of privacy protection. As basic research, he has investigated the potential privacy loss of a traditional Differential Privacy (DP) mechanism under temporal correlations. He is also interested in the personal data market; in particular, he is studying a mechanism for pricing and selling personal data perturbed by DP.<\/p>\n<p>He was a General Co-Chair of the 6th IEEE International Conference on Big Data and Smart Computing (BigComp 2019) and is a Steering Committee member of the BigComp conference series. He is serving as a PC member of VLDB 2020 and ICDE 2020. He is a member of the IEEE ICDE Steering Committee, the Science Council of Japan (SCJ), ACM, IPSJ, and IEICE.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Huanjing-Yue.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Huanjing Yue<\/strong><\/p>\n<p>Tianjin University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-492\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-492\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-491\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-491\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-492\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Huanjing Yue received the B.S. and Ph.D. degrees from Tianjin University, Tianjin, China, in 2010 and 2015, respectively. She was an Intern with Microsoft Research Asia from 2011 to 2012, and from 2013 to 2015. 
She visited the Video Processing Laboratory, University of California at San Diego, from 2016 to 2017. She is currently an Associate Professor with the School of Electrical and Information Engineering, Tianjin University. Her current research interests include image processing and computer vision. She received the Microsoft Research Asia Fellowship Honor in 2013 and was selected into the Elite Scholar Program of Tianjin University in 2017.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Lijun-Zhang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Lijun Zhang<\/strong><\/p>\n<p>Nanjing University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-494\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-494\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-493\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-493\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-494\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Lijun Zhang received the B.S. and Ph.D. degrees in Software Engineering and Computer Science from Zhejiang University, China, in 2007 and 2012, respectively. He is currently an associate professor in the Department of Computer Science and Technology, Nanjing University, China. Prior to joining Nanjing University, he was a postdoctoral researcher at the Department of Computer Science and Engineering, Michigan State University, USA. His research interests include machine learning and optimization. He has published 80 academic papers, most of them in prestigious conferences and journals such as ICML, NeurIPS, COLT, and JMLR. 
He was named an Alibaba DAMO Academy Young Fellow and received the AAAI-12 Outstanding Paper Award.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Min-Zhang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Min Zhang<\/strong><\/p>\n<p>Tsinghua University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul 
class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-496\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-496\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-495\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-495\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-496\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr. Min Zhang is a tenured associate professor in the Dept. of Computer Science & Technology, Tsinghua University, specializing in Web search, recommendation, and user modeling. She is the vice director of the State Key Lab. of Intelligent Technology & Systems and the executive director of the Tsinghua-MSRA Lab on Media and Search. She also serves as an ACM SIGIR Executive Committee member, associate editor for the ACM Transactions on Information Systems (TOIS), Short Paper co-Chair of SIGIR 2018, Program co-Chair of WSDM 2017, etc. She has published more than 100 papers at top-tier conferences, with 4,100+ citations. Her honors include the Beijing Science and Technology Award (First Prize), and she holds 12 patents. 
She has also collaborated with many international and domestic enterprises, such as Microsoft, Toshiba, Samsung, Sogou, WeChat, Zhihu, and JD.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Tianzhu-Zhang.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Tianzhu Zhang<\/strong><\/p>\n<p>University of Science and Technology of China<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-498\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-498\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-497\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-497\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-498\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Tianzhu Zhang is currently a Professor at the Department of Automation, School of Information Science and Technology, University of Science and Technology of China. His current research interests include pattern recognition, computer vision, multimedia computing, and machine learning. He has authored or co-authored over 80 journal and conference papers in these areas, including over 60 IEEE\/ACM Transactions papers (TPAMI\/IJCV\/TIP) and top-tier conference papers (ICCV\/CVPR\/ACM MM). According to Google Scholar, his papers have been cited more than 4,900 times. His work has been recognized with the 2017 China Multimedia Conference Best Paper Award and the 2016 ACM Multimedia Conference Best Paper Award (CCF-A). 
He received the Chinese Academy of Sciences President Award of Excellence in 2011 and the Excellent Doctoral Dissertation award of the Chinese Academy of Sciences in 2012, was selected into the Youth Innovation Promotion Association of CAS in 2018, and won the Natural Science Award (First Prize) of the Chinese Institute of Electronics in 2018. He has served as Area Chair for CVPR 2020, ICCV 2019, ACM MM 2019, WACV 2018, ICPR 2018, and MVA 2017, and as Associate Editor for IEEE T-CSVT and Neurocomputing. He received outstanding reviewer awards from MMSJ, ECCV 2016, and CVPR 2018.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Yu-Zhang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Yu Zhang<\/strong><\/p>\n<p>University of Science & Technology of China<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link 
m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-500\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-500\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-499\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-499\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-500\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Yu Zhang is an associate professor in School of Computer Science & Technology, University of Science and Technology of China (USTC). She got her Ph.D. at USTC in Jan. 2005. 
Her current research interests include programming languages and systems for emerging AI applications, as well as quantum software.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Zhou-Zhao.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Zhou Zhao<\/strong><\/p>\n<p>Zhejiang University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-502\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-502\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-501\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-501\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-502\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Zhou Zhao received his Ph.D. from the Hong Kong University of Science and Technology in 2015. He subsequently worked at Zhejiang University as an associate professor and doctoral supervisor. Zhao\u2019s main research interests are in natural language processing and multimedia key technology research and development. Zhao is a fellow of the Association for Computing Machinery(ACM),a fellow of the Institute of Electrical and Electronics Engineers(IEEE),and a fellow of the China Computer Federation(CCF).In addition, he release more than sixty papers on the top international conference, such as NIPS, CLR, ICML. 
Zhao was awarded the Innovation Award of the Information Department of Zhejiang University and the title of Outstanding Youth in Zhejiang.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Wei-Shi-Zheng.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Wei-Shi Zheng<\/strong><\/p>\n<p>Sun Yat-sen University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-504\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-504\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-503\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-503\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-504\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr. Wei-Shi Zheng is a full Professor at Sun Yat-sen University, where he received his PhD degree in Applied Mathematics in 2008. He has published more than 100 papers, including more than 80 publications in major journals (TPAMI, TNN\/TNNLS, TIP, TSMC-B, PR) and top conferences (ICCV, CVPR, IJCAI, AAAI). He has helped organise four tutorials at ACCV 2012, ICPR 2012, ICCV 2013, and CVPR 2015. His research interests include person\/object association and activity understanding in visual surveillance, and related large-scale machine learning algorithms. In particular, Dr. Zheng has been actively researching person re-identification over the last five years. He serves extensively as a reviewer for many journals and conferences, and was recognized as an outstanding reviewer at recent top conferences (ECCV 2016 & CVPR 2017). He has participated in the Microsoft Research Asia Young Faculty Visiting Programme, and has served as a senior PC member\/area chair\/associate editor for AVSS 2012, ICPR 2018, IJCAI 2019\/2020, AAAI 2020, and BMVC 2018\/2019. 
He is an IEEE MSA TC member. He is an associate editor of Pattern Recognition. He is a recipient of Excellent Young Scientists Fund of the National Natural Science Foundation of China, and a recipient of Royal Society-Newton Advanced Fellowship of United Kingdom.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<h2>Technology Showcase by Microsoft Research Asia<\/h2>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul 
class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-506\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-506\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-505\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tAutoSys: Learning based approach for system optimization\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-505\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-506\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Mao Yang, Microsoft Research<\/p>\n<p>As computer systems and networking get increasingly complicated, optimizing them manually with explicit rules and heuristics becomes harder than ever before, sometimes impossible. At Microsoft Research Asia, our AutoSys project applies learning to large-scale system performance tuning. Our AutoSys framework (1) defines interfaces to expose system features for learning, (2) introduces monitors to detect learning-induced failures, and (3) runs resource management to support heterogeneous requirements of learning-related tasks. Based on AutoSys, we have built a tool that supports many crucial system scenarios within Microsoft. 
These scenarios include multimedia search for Bing (e.g., tail latency reduced by up to ~40%, and capacity increased by up to ~30%), job scheduling for Bing Ads (e.g., tail latency reduced by up to ~13%), and so on.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-508\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-508\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-507\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDual Learning and Its Applications to Machine Translation and Speech Synthesis\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-507\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-508\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Yingce Xia and Xu Tan, Microsoft Research<\/p>\n<p>Many AI tasks emerge in dual forms, e.g., English-to-French translation vs. French-to-English translation, speech recognition vs. speech synthesis, question answering vs. question generation, and image classification vs. image generation. Dual learning is a new learning framework that leverages the primal-dual structure of AI tasks to obtain effective feedback or regularization signals to enhance the learning\/inference process. 
In this demo, we will show two applications of dual learning: machine translation and speech synthesis.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-510\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-510\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-509\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFluency Boost Learning and Inference for Neural Grammar Checker\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-509\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-510\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Tao Ge, Microsoft Research<\/p>\n<p>Neural sequence-to-sequence (seq2seq) approaches have proven to be successful in grammatical error correction (GEC). Based on the seq2seq framework, we propose a novel fluency boost learning and inference mechanism. Fluency boosting learning generates diverse error-corrected sentence pairs during training, enabling the error correction model to learn how to improve a sentence&#8217;s fluency from more instances, while fluency boosting inference allows the model to correct a sentence incrementally with multiple inference steps. 
Combining fluency boost learning and inference with conventional seq2seq models, our approach achieves state-of-the-art performance on GEC benchmarks.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-512\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-512\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-511\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tOneOCR For Digital Transformation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-511\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-512\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Qiang Huo, Microsoft Research<\/p>\n<p>At Microsoft, we have been developing a new generation OCR engine (aka OneOCR), which can detect both printed and handwritten text in an image captured by a camera or mobile phone, and recognize the detected text for follow-up actions. Our unified OneOCR engine can recognize mixed printed and handwritten English text lines with arbitrary orientations (even flipped), significantly outperforming other leading industrial OCR engines on a wide range of application scenarios. 
Empowered by OneOCR engine, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/docs.microsoft.com\/en-us\/azure\/cognitive-services\/computer-vision\/concept-recognizing-text#read-api\">Computer Vision Read<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> capability and <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/azure.microsoft.com\/en-us\/services\/search\/\">Cognitive Search capability of Azure Search<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> are generally available, and a <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/azure.microsoft.com\/en-us\/services\/cognitive-services\/form-recognizer\/\">Form Recognizer<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> with <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/docs.microsoft.com\/en-us\/azure\/cognitive-services\/form-recognizer\/quickstarts\/python-receipts\">Receipt Understanding<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> capability is available for preview, all in Azure Cognitive Services, which can power enterprise workflows and Robotic Process Automation (RPA) to spur digital transformation. 
In this presentation, I will demonstrate the capabilities of Microsoft\u2019s latest OneOCR engine, highlight its core component technologies, and explain the roadmap ahead.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-514\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-514\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-513\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tSpreadsheet Intelligence for Ideas in Excel\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-513\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-514\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter:<\/strong> Shi Han, Microsoft Research<\/p>\n<p>Ideas in Excel aims at one-click intelligence: when a user clicks the Ideas button on the Home tab of Excel, the intelligent service empowers the user to understand his or her data via automatic recommendations of visual summaries and interesting patterns. The user can then insert the recommendations into the spreadsheet to support further analysis, or use them directly as analysis results. Enabling such one-click intelligence requires solving several underlying technical challenges. At the Data, Knowledge and Intelligence group of Microsoft Research Asia, we have conducted long-term research on spreadsheet intelligence and automated insights, and through close collaboration with Excel product teams, we transferred a suite of technologies and shipped Ideas in Excel together. 
In this demo presentation, we will show this intelligent feature and introduce corresponding technologies.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<h2>Technology Showcase by Academic Collaborators<\/h2>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-516\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-516\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-515\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\t3D Caricature Generation from Real Face Images\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-515\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-516\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Yucheol Jung, Wonjong Jang, and Seungyong Lee, POSTECH<\/p>\n<p>A 3D caricature can be defined as a 3D mesh with cartoon-style shape exaggeration of a face. We present a novel deep learning based framework that generates a 3D caricature for a given real face image. Our approach exploits 3D geometry information in the caricature generation process and produces more convincing 3D shape exaggerations than 2D caricature-based approaches.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-518\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-518\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-517\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tA Co-Training Method towards Machine Reading Comprehension\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-517\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-518\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: 
<\/strong> Minlie Huang, Tsinghua University<\/p>\n<p>A Co-Training Method towards Machine Reading Comprehension<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-520\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-520\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-519\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tA Method for Controlling Human Hearing by Editing the Frequency of the Sound in Real Time\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-519\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-520\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Hiroki Watanabe, Hokkaido University<\/p>\n<p>A Method for Controlling Human Hearing by Editing the Frequency of the Sound in Real Time<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-522\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-522\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-521\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tAbstractive Summarization of Reddit Posts with Multi-level Memory 
Networks\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-521\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-522\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Gunhee Kim, Seoul National University<\/p>\n<p>We address the problem of abstractive summarization in two directions: proposing a novel dataset and a new model. First, we collect the Reddit TIFU dataset, consisting of 120K posts from the online discussion forum Reddit. We use such informal crowd-generated posts as the text source, in contrast with existing datasets that mostly use formal documents, such as news articles, as the source. Thus, our dataset suffers less from biases in which key sentences are usually located at the beginning of the text and favorable summary candidates already appear in the text in similar forms. Second, we propose a novel abstractive summarization model named multi-level memory networks (MMN), equipped with multi-level memory to store the information of the text at different levels of abstraction. 
With quantitative evaluation and user studies via Amazon Mechanical Turk, we show that the Reddit TIFU dataset is highly abstractive and that the MMN outperforms state-of-the-art summarization models.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-524\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-524\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-523\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tAdaptive Graph Structure Learning for Image Sentence Matching\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-523\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-524\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> TianZhu Zhang, University of Science and Technology of China<\/p>\n<p>We adapt the attention mechanism for visual and semantic element representation.<\/p>\n<p>We adaptively construct graphs and update the features for objects and words, making good use of both intra-modality and inter-modality relationships.<\/p>\n<p>We consider the structure information across different graphs by proposing a constraint on the semantic elements, forcing each semantic element to align with its corresponding visual element.<\/p>\n<p>The proposed model obtains promising results on the Flickr30K and MS-COCO datasets.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" 
data-wp-context='{\"id\":\"accordion-content-526\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-526\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-525\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tAdversarial Attacks and Defenses in Deep Learning\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-525\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-526\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Yinpeng Dong, Tsinghua University<\/p>\n<p>Adversarial Attacks and Defenses in Deep Learning<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-528\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-528\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-527\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tAI+VIS: Automated Visualization Production\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-527\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-528\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Huamin Qu, The Hong Kong University of Science and Technology<\/p>\n<p>Existing visualization designs are often based on manual design and 
require substantial human effort. How can we apply deep learning techniques to automatically generate visualization products? We report two recent advances in this direction:<\/p>\n<p>Automated Graph Drawing: We propose a graph-LSTM-based model to directly generate graph drawings with desirable visual properties similar to the training drawings, without requiring users to tune algorithm-specific parameters.<\/p>\n<p>Automated Design of Timeline Infographics: We contribute an end-to-end approach to automatically extract an extensible template from a bitmap timeline image. The output can be used to generate new timelines with updated data.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-530\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-530\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-529\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBlockchain-Enabled Incentive and Trading Mechanism Design for AIoT Service Platform\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-529\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-530\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Ai-Chun Pang, National Taiwan University<\/p>\n<p>Ensure data effectiveness with blockchain technology so as to preserve data properties such as immutability and credibility throughout the transaction process.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new 
tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-532\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-532\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-531\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBypassing Defense Methods for Neural Network Backdoor\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-531\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-532\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Sangwoo Ji and Jong Kim, POSTECH<\/p>\n<p>Bin Zhu, Microsoft Research<\/p>\n<p>We bypass two backdoor detection methods: suspicious data instance detection and backdoor trigger detection.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-534\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-534\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-533\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCan Kernel Networking Become Fast Enough?\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-533\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-534\"\n\t\t>\n\t\t\t<div 
class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Chuck Yoo, Korea University<\/p>\n<ul>\n<li>Existing network optimizations suffer from poor stability, low resource efficiency, and a need for API changes<\/li>\n<li>Solution: Kernel-based optimization for high-performance networking<\/li>\n<li>L3 forwarding achieves performance similar to DPDK<\/li>\n<li>A virtual switch achieves 67.5% performance of DPDK-OVS and three times greater resource efficiency<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-536\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-536\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-535\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCDPN: Coordinates-Based Disentangled Pose Network for Real-Time RGB-Based 6-DoF Object Pose Estimation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-535\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-536\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Xiangyang Ji, Tsinghua University<\/p>\n<p>CDPN: Coordinates-Based Disentangled Pose Network for Real-Time RGB-Based 6-DoF Object Pose Estimation<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-538\"}' data-wp-init=\"callbacks.init\">\n\t\t<div 
class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-538\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-537\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCommonsense Reasoning with Structured Knowledge\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-537\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-538\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Hongming Zhang, The Hong Kong University of Science and Technology<\/p>\n<p>Understanding human\u2018s language requires complex commonsense knowledge. However, existing large-scale knowledge graphs mainly focus on knowledge about entities while ignoring commonsense knowledge about activities, states, or events, which are used to describe how entities or things act in the real world. To fill this gap, we develop ASER (activities, states, events, and their relations), a large-scale eventuality knowledge graph extracted from more than 11-billion-token unstructured textual data. ASER contains 15 relation types belonging to five categories, 194-million unique eventualities, and 64-million unique edges among them. 
Both human and extrinsic evaluations demonstrate the quality and effectiveness of ASER.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-540\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-540\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-539\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tComplex Correlation Modeling and Analysis Framework for Incomplete, Multimodal and Dynamic Data\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-539\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-540\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Zizhao Zhang, Tsinghua University<\/p>\n<p>A well-constructed hypergraph structure can represent data correlations accurately, leading to better performance. How can we construct a good hypergraph to fit complex data?<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-542\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-542\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-541\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tConcordia: Distributed Shared Memory with In-Network Cache 
Coherence\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-541\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-542\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Youyou Lu, Tsinghua University<\/p>\n<p>Concordia divides coherence responsibility between the switch and servers. The switch serializes conflicting requests and forwards them to the correct destinations via a lock-check-forward pipeline. Servers execute requester-driven coherence control to reach coherence and transition states.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-544\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-544\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-543\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tContinual Learning with Dynamic Network Expansion\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-543\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-544\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Sung Ju Hwang, KAIST<\/p>\n<ul>\n<li>Perform effective knowledge transfer from earlier tasks to later tasks.<\/li>\n<li>Prevent catastrophic forgetting, where the earlier task performance gets negatively affected by semantic drift of the representations as the model adapts to later tasks.<\/li>\n<li>Obtain maximal performance with minimal increase in the 
network capacity.<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-546\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-546\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-545\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCounting Hypergraph Colorings in the Local Lemma Regime\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-545\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-546\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Chao Liao, Shanghai Jiao Tong University<\/p>\n<p>Counting Hypergraph Colorings in the Local Lemma Regime<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-548\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-548\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-547\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCross-Lingual Visual Grounding and Multimodal Machine 
Translation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-547\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-548\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Chenhui Chu, Osaka University<\/p>\n<p>Cross-Lingual Visual Grounding and Multimodal Machine Translation<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-550\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-550\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-549\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCuriosity-Bottleneck: Exploration by Distilling Task-Specific Novelty\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-549\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-550\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Gunhee Kim, Seoul National University<\/p>\n<p>Exploration based on state novelty has brought great success in challenging reinforcement learning problems with sparse rewards. However, existing novelty-based strategies become inefficient in real-world problems where observation contains not only the task-dependent state novelty of interest but also task-irrelevant information that should be ignored. We introduce an information-theoretic exploration strategy named Curiosity-Bottleneck that distills task-relevant information from observation. 
Based on the information bottleneck principle, our exploration bonus is quantified as the compressiveness of observation with respect to the learned representation of a compressive value network. With extensive experiments on static image classification, grid-world and three hard-exploration Atari games, we show that Curiosity-Bottleneck learns an effective exploration strategy by robustly measuring the state novelty in distractive environments where state-of-the-art exploration methods often degenerate.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-552\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-552\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-551\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDeep Reinforcement Learning for the Transfer from Simulation to the Real World with Uncertainties for AI Curling Robot System\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-551\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-552\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Dong-Ok Won and Seong-Whan Lee, Korea University<\/p>\n<p>Recently, deep reinforcement learning (DRL) has even enabled real-world applications such as robotics. Here we teach a robot to succeed in curling (an Olympic discipline), a highly complex real-world application where a robot needs to carefully learn to play the game on the slippery ice sheet in order to compete well against human opponents. 
This scenario encompasses fundamental challenges: uncertainty, non-stationarity, infinite state spaces, and, most importantly, scarce data. One fundamental objective of this study is thus to better understand and model the transfer from simulation to real-world scenarios with uncertainty. We demonstrate our proposed framework and show videos, experiments and statistics about Curly, our AI curling robot, being tested on a real curling ice sheet. Curly performed well both in classical game situations and when interacting with human opponents.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-554\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-554\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-553\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDeep Text Generation: Conversation and Application\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-553\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-554\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Rui Yan, Peking University<\/p>\n<p>Deep Text Generation: Conversation and Application<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-556\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-556\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-555\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDevelopment of 3D capsule endoscopic system\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-555\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-556\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Ryo Furukawa, Hiroshima City University<\/p>\n<p>Development of 3D capsule endoscopic system<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-558\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-558\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-557\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDevelopment of automatic Labanotation estimation system from video using Deep Learning\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-557\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-558\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Hiroshi Kawasaki, Kyushu University<\/p>\n<p>Our project aims to research on human representation and understanding human motion based on vision-based approach and develop new applications.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new 
tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-560\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-560\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-559\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDissecting and Accelerating Neural Network via Graph Instrumentation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-559\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-560\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Jingwen Leng, Shanghai Jiao Tong University<\/p>\n<p>The proposed graph instrumentation framework can observe and modify neural networks using user-defined analysis code without changes in source code.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-562\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-562\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-561\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDistant Supervised Domain-Specific Knowledge Base Construction and 
Population\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-561\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-562\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Lei Chen, The Hong Kong University of Science and Technology<\/p>\n<p>Our Goal in Domain-Specific KB Construction<\/p>\n<ul>\n<li>Entity Extraction, Entity Typing and Relation Extraction related to the target domain.<\/li>\n<li>Training data generation based on distant-supervision without human annotation.<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-564\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-564\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-563\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tEfficient and Effective Sparse DNNs with Bank-Balanced Sparsity\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-563\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-564\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Shijie Cao, Harbin Institute of Technology<\/p>\n<p>Efficient and Effective Sparse DNNs with Bank-Balanced Sparsity<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-566\"}' 
data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-566\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-565\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tEfficient Deep Neural Networks for Realistic Noise Removal\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-565\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-566\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Huanjing Yue, Tianjin University<\/p>\n<p>We propose an end-to-end noise estimation and removal network, where the estimated noise map is weighted and concatenated with the noisy input to improve the denoising performance.<\/p>\n<p>The proposed noise estimation network takes advantage of the Bayer pattern prior of the noise maps, which not only improves the estimation accuracy but also reduces the memory cost.<\/p>\n<p>We propose an RSD block to fully take advantage of the spatial and channel correlations of realistic noise. 
The ablation study demonstrates the effectiveness of the proposed module.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-568\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-568\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-567\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tEmoji-Powered Representation Learning for Cross-Lingual Sentiment Analysis\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-567\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-568\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Zhenpeng Chen, Peking University<\/p>\n<p>Emoji-Powered Representation Learning for Cross-Lingual Sentiment Analysis<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-570\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-570\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-569\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tErebus: A Stealthier Partitioning Attack against Bitcoin Peer-to-Peer 
Network\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-569\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-570\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Muoi Tran, National University of Singapore<\/p>\n<p>We present the\u00a0Erebus\u00a0attack, which allows large malicious Internet Service Providers (ISPs) to isolate any targeted public Bitcoin nodes from the Bitcoin peer-to-peer network. The Erebus attack does\u00a0not\u00a0require routing manipulation (e.g., BGP hijacks) and hence it is\u00a0virtually undetectable\u00a0to any control-plane and even typical data-plane detectors.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-572\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-572\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-571\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tExplaining Word Embeddings via Disentangled Representations\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-571\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-572\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Shou-de Lin, National Taiwan University<\/p>\n<p>We propose transforming word embeddings into interpretable representations that disentangle explainable factors.<\/p>\n<p>Examples of factors: a) Topical factors: food, location, animal, etc. 
b) Part-of-Speech factors: noun, adj, verb, etc.<\/p>\n<p>We define and propose 4 desirable properties of our disentangled word vectors: a) Modularity, b) Compactness, c) Explicitness, d) Feature preservation<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-574\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-574\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-573\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFree-form Video Inpainting with 3D Gated Conv, TPD, and LGTSM\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-573\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-574\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Winston Hsu, National Taiwan University.<\/p>\n<p>Free-form Video Inpainting with 3D Gated Conv, TPD, and LGTSM<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-576\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-576\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-575\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFluid: A Blockchain based Framework for 
Crowdsourcing\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-575\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-576\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Lei Chen, The Hong Kong University of Science and Technology<\/p>\n<p>Fluid: A Blockchain based Framework for Crowdsourcing<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-578\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-578\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-577\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFLUID: Flexible User Interface Distribution for Ubiquitous Multi-device Interaction\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-577\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-578\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Insik Shin, KAIST<\/p>\n<p>Key idea: separation between app logic & UI parts. 1) Distributing target UI objects to remote devices and rendering them. 2) Giving the illusion that app logic and UI objects are in the same process.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-580\"}' data-wp-init=\"callbacks.init\">\n\t\t<div 
class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-580\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-579\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFuzzing with Interleaving Coverage for Multi-threading Program\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-579\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-580\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Youngjoo Ko and Jong Kim, POSTECH<\/p>\n<p>Bin Zhu, Microsoft Research<\/p>\n<p>Increase the performance of fuzzing to discover more bugs in multi-threading programs using interleaving coverage.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-582\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-582\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-581\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tGenerative Model-based Speech Enhancement for Speech Recognition\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-581\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-582\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Jinyoung Lee and Hong-Goo Kang, Yonsei University<\/p>\n<ul>\n<li>Remove ambient noise to improve automatic 
speech recognition performance<\/li>\n<li>Overcome the problems of conventional masking-based speech enhancement algorithms, e.g. speech signal distortion<\/li>\n<li>Propose a generative and adversarial model-based approach that effectively utilizes spectro-temporal characteristics of speech and noise components<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-584\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-584\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-583\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tGlobal-Local Temporal Representations For Video Person Re-Identification\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-583\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-584\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Shiliang Zhang, Peking University<\/p>\n<ul>\n<li>Propose Dilated Temporal Convolution (DTC) to learn short-term temporal cues<\/li>\n<li>Propose Temporal Self Attention (TSA) to learn the long-term temporal cues<\/li>\n<li>DTC and TSA learn complementary temporal feature<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-586\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-586\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-585\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tGradient Descent Finds Global Minima of DNNs\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-585\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-586\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Liwei Wang, Peking University<\/p>\n<p>Gradient Descent Finds Global Minima of DNNs<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-588\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-588\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-587\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tGraph Neural Networks for 3D Face Anti-spoofing\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-587\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-588\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Wei HU and Gusi Te, Peking University<\/p>\n<p>This project aims to explore emerging graph neural networks (GNNs) based on texture plus depth features to address the problem of 3D face anti-spoofing. Various spoofing attacks are growing, presenting fake or copied facial evidence to obtain valid authentication. 
While anti-spoofing techniques using 2D facial data have matured, 3D face anti-spoofing has not been studied much, leaving advanced spoofing techniques such as 3D masking largely unaddressed. Hence, we propose to address this problem based on texture plus depth cues acquired from RGBD cameras, within the framework of GNNs.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-590\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-590\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-589\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tGraph-structured Knowledge Base Management and Applications\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-589\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-590\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Hongzhi Wang, Harbin Institute of Technology<\/p>\n<p>Graph-structured Knowledge Base Management and Applications<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-592\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-592\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-591\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tHome Location Selection with Reachability\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-591\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-592\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Yingcai Wu, Zhejiang University<\/p>\n<p>This study characterizes the problem of reachability-centric multi-criteria decision-making for choosing ideal homes. The system can also be adopted in other location selection scenarios in which the reachability of locations is considered (e.g., selecting a location for a convenience store).<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-594\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-594\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-593\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tIdentifying Structures in Spreadsheets\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-593\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-594\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Wensheng Dou, Chinese Academy of Sciences<\/p>\n<p>Identifying Structures in Spreadsheets<\/p>\n<p><span 
id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-596\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-596\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-595\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tImage-to-Image Translation via Group-wise Deep Whitening-and-Coloring Transformation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-595\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-596\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Jaegul Choo, Korea University<\/p>\n<p>Recently, unsupervised exemplar-based image-to-image translation has made substantial advances. In order to transfer the information from an exemplar to an input image, existing methods often use a normalization technique, e.g., adaptive instance normalization, that controls the channel-wise statistics of an input activation map at a particular layer, such as the mean and the variance. Meanwhile, style transfer, a task similar in nature to image translation, has demonstrated superior performance by using higher-order statistics, such as the covariance among channels, to represent a style. However, applying this approach to image translation is computationally intensive and error-prone due to its expensive time complexity and non-trivial backpropagation. In response, this paper proposes an end-to-end approach tailored for image translation that efficiently approximates this transformation with our novel regularization methods. 
We further extend our approach to a group-wise form for memory and time efficiency as well as image quality. Extensive qualitative and quantitative experiments demonstrate that our proposed method is fast, both in training and inference, and highly effective in reflecting the style of an exemplar.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-598\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-598\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-597\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tImmersive Biology &#8211; An Interactive Microscope for Informal Biology Education\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-597\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-598\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Jaewoo Jung, Kyungwon Lee and Seung Ah Lee, Yonsei University<\/p>\n<p>We developed a new hybrid digital-biological system that provides interactive and immersive experiences between humans and biological objects for applications in life science education and research. 
The scope of this work includes:<\/p>\n<ul>\n<li>Construction of an automated optical stimulation microscope, which uses light to both image and interface with light-sensitive cells.<\/li>\n<li>Use of human interaction modalities to convert humans\u2019 natural input into stimuli for the microscopic biological objects.<\/li>\n<li>A comparative user study, run as a public installation, that evaluated user behaviors, engagement, and learning outcomes.<\/li>\n<\/ul>\n<p>We expect that this platform will transform microscopes from a passive observation tool to an active interaction medium, assisting scientific research, life science education and clinical interventions.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-600\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-600\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-599\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tImproving Join Reorderability with Compensation Operators\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-599\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-600\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> TaiNing Wang and Chee-Yong Chan, National University of Singapore<\/p>\n<p>Improving Join Reorderability with Compensation Operators<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" 
data-wp-context='{\"id\":\"accordion-content-602\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-602\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-601\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tImproving the Performance of Video Analytics Using WIFI Signal\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-601\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-602\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Hai Truong, Rajesh Krishna Balan, Singapore Management University<\/p>\n<p>Automatic analysis of the behaviour of large groups of people is a key requirement for a large class of important applications such as crowd management, traffic control, and surveillance. For example, attributes such as the number of people, how they are distributed, which groups they belong to, and what trajectories they are taking can be used to optimize the layout of a mall to increase overall revenue. A common way to obtain these attributes is to use video camera feeds coupled with advanced video analytics solutions. However, solely utilizing video feeds is challenging in high people-density areas, such as a normal mall in Asia, as the high people density significantly reduces the effectiveness of video analytics due to factors such as occlusion. In this work, we propose to combine video feeds with WiFi data to achieve better classification results of the number of people in the area and the trajectories of those people. In particular, we believe that our approach will combine the strengths of the two different sensors, WiFi and video, while reducing the weaknesses of each sensor. 
This work started fairly recently, and we will present our thoughts and results to date.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-604\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-604\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-603\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tIntelligent Action Analytics\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-603\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-604\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Jiaying Liu, Peking University<\/p>\n<p>Intelligent Action Analytics<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-606\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-606\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-605\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tInteractive Methods to Improve Data 
Quality\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-605\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-606\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Changjian Chen, Tsinghua University<\/p>\n<p>Interactive Methods to Improve Data Quality<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-608\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-608\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-607\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tInter-learner shadowing framework for comprehensibility-based assessment of learners' speech\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-607\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-608\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Nobuaki MINEMATSU, University of Tokyo<\/p>\n<p>Inter-learner shadowing framework for comprehensibility-based assessment of learners&#8217; speech<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-610\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-610\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-609\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tIoTcube: An Open Platform for Feedback based Protocol Fuzzing\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-609\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-610\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Heejo Lee, Korea University<\/p>\n<p>An open platform for feedback based fuzzing improves its testing performance using two factors: binary feedback and user feedback.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-612\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-612\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-611\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tLearning Multi-label Feature for Fine-Grained Food Recognition\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-611\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-612\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Xueming Qian, Xi&#8217;an Jiaotong University<\/p>\n<p>1.We proposed Attention Fusion Network (AFN). 
It attends to discriminative food regions in unstructured images, and generates feature embeddings that are jointly aware of ingredients and food.<\/p>\n<p>2. We proposed the balance focal loss (BFL) to enhance the joint learning of ingredients and food and to optimize the feature expression ability for multi-label ingredients.<\/p>\n<p>3. The effectiveness is demonstrated through comparative experiments. In particular, the balance focal loss improves the Micro-F1, Macro-F1 and Accuracy of ingredients by 5.76%, 12.62% and 5.78%, respectively.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-614\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-614\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-613\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMAP Inference for Customized Determinantal Point Processes via Maximum Inner Product Search\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-613\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-614\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Insu Han, KAIST<\/p>\n<p>MAP Inference for Customized Determinantal Point Processes via Maximum Inner Product Search<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-616\"}' data-wp-init=\"callbacks.init\">\n\t\t<div 
class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-616\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-615\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMinimizing Network Footprint in Distributed Deep Learning\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-615\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-616\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Hong Xu, City University of Hong Kong<\/p>\n<p>Minimizing Network Footprint in Distributed Deep Learning<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-618\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-618\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-617\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMultilingual End-to-End Speech Translation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-617\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-618\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Hirofumi Inaguma, Kyoto University<\/p>\n<p>Directly translate source speech to target languages with a single sequence-to-sequence (S2S) model<\/p>\n<ul>\n<li>Many-to-many (M2M)<\/li>\n<li>One-to-many 
(O2M)<\/li>\n<\/ul>\n<p>Outperformed the bilingual end-to-end speech translation (E2E-ST) models<\/p>\n<p>Shared representations obtained from multilingual E2E-ST were more effective than those from the bilingual one for transfer learning to a very low-resource ST task: Mboshi->French (4.4h)<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-620\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-620\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-619\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMulti-marginal Wasserstein GAN\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-619\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-620\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Mingkui Tan, South China University of Technology<\/p>\n<ul>\n<li>We propose a novel MWGAN to optimize the multi-marginal distance among different domains.<\/li>\n<li>We define and analyze the generalization performance of MWGAN for the multiple domain translation task.<\/li>\n<li>Extensive experiments demonstrate the effectiveness of MWGAN on balanced and imbalanced translation tasks.<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-622\"}' data-wp-init=\"callbacks.init\">\n\t\t<div 
class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-622\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-621\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tNAT: Neural Architecture Transformer for Accurate and Compact Architectures\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-621\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-622\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Mingkui Tan, South China University of Technology<\/p>\n<ul>\n<li>Propose a novel Neural Architecture Transformer (NAT) to optimize any arbitrary architecture.<\/li>\n<li>Cast the problem into a Markov Decision Process.<\/li>\n<li>Employ Graph Convolution Network to learn the policy.<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-624\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-624\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-623\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tNFD: Using Behavior Models to Develop Cross-Platform NFs\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-623\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-624\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: 
<\/strong> Wenfei Wu, Tsinghua University<\/p>\n<p>We propose a new NF development framework named NFD which consists of an NF abstraction layer to develop NF behavior models and a compiler to adapt NF models to specific runtime environments.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-626\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-626\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-625\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tNon-factoid Question Answering for Text and Video\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-625\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-626\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Seung-won Hwang, Yonsei University<\/p>\n<p>Question Answering (QA) has been mostly studied in the context of factoid, providing concise facts. In contrast, we study Non-factoid QA, extending to cover more realistic questions such as how- or why-questions with long answers, from long texts or videos. This demo and poster address the following questions:<\/p>\n<ul>\n<li>Non-factoid QA for text, combining the complementary strength of representation- and interaction-focused approaches [EMNLP19]. Extending this task for video has the opportunity and challenge, coming from multimodality and having no pre-divided answer candidates (e.g. 
paragraph), which is our ongoing MSRA collaboration.<\/li>\n<li>Human-in-the-loop debugging for QA Demo [SIGIR19]<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-628\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-628\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-627\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tNPA: Neural News Recommendation with Personalized Attention\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-627\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-628\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Chuhan Wu, Tsinghua University<\/p>\n<ul>\n<li>Different users usually have different interests in news.<\/li>\n<li>Different users may click the same news article due to different interests.<\/li>\n<li>We need personalized news and user representation!<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-630\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-630\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-629\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tNumerical\/quantitative system for common 
sense natural language processing\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-629\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-630\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Hiroaki Yamane, The University of Tokyo<\/p>\n<p>We construct methods for converting contextual language to numerical variables for quantitative\/numerical common sense in natural language processing.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-632\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-632\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-631\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tOnline Convex Optimization in Non-stationary Environments\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-631\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-632\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Shiyin Lu, Nanjing University<\/p>\n<p>Online Convex Optimization in Non-stationary Environments<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-634\"}' data-wp-init=\"callbacks.init\">\n\t\t<div 
class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-634\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-633\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tOptimizing Quality of Experience (QoE) for Adaptive Bitrate Streaming via Deep Video Analytics\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-633\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-634\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Yonggang Wen, Nanyang Technological University<\/p>\n<p>QoE depends on multiple families of Influential Factors (IFs), which must be optimized jointly for the best user experience.<\/p>\n<p>How can we develop a unified and scalable framework to optimize QoE for multimedia communications in the presence of system dynamics?<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-636\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-636\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-635\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tParaphrasing and Simplification with Lean Vocabulary\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-635\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-636\"\n\t\t>\n\t\t\t<div 
class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Tadashi Nomoto, National Institute of Japanese Literature<\/p>\n<p>This work explores the impact of the subword representation on paraphrasing and text simplification. Experiments found that when combined with REINFORCE, the subword scheme boosted performance beyond the current state of the art both in paraphrasing and text simplification.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-638\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-638\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-637\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tPick-Carry-Place Household Tasks Using Labanotation for Learning-from-Observation Robots\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-637\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-638\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Jun Takamatsu, Nara Institute of Science and Technology<\/p>\n<p>Pick-Carry-Place Household Tasks Using Labanotation for Learning-from-Observation Robots<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-640\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-640\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-639\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tPredicting Future Instance Segmentation with Contextual Pyramid ConvLSTMs\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-639\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-640\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Wei-Shi Zheng, Sun Yat-sen University<\/p>\n<p>Predicting Future Instance Segmentation<\/p>\n<ul>\n<li>Given several frames of a video, the task is to predict instance segmentation for future frames before they are observed.<\/li>\n<li>It is challenging due to the uncertainty in appearance variation caused by object motion, occlusion between objects, and viewpoint changes in videos.<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-642\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-642\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-641\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tProject Title: Secure and compact elliptic curve cryptosystems\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-641\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-642\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: 
<\/strong>Yaoan Jin and Atsuko Miyaji, Graduate School of Engineering, Osaka University<\/p>\n<p>A side-channel attack is any attack based on information, such as timing information or power consumption, gained from the implementation of a cryptosystem:<\/p>\n<ul>\n<li>Simple Power Analysis (SPA)<\/li>\n<li>Safe Error Attack<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-644\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-644\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-643\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tPruning from Scratch\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-643\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-644\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Hang Su, Tsinghua University<\/p>\n<p>In this work, we find that pre-training an over-parameterized model is not necessary for obtaining an efficient pruned structure. 
We propose a novel network pruning pipeline which allows pruning from scratch.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-646\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-646\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-645\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tRecent Progress of Handwritten Mathematical Expression Recognition\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-645\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-646\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Jun Du, University of Science and Technology of China<\/p>\n<p>Recent Progress of Handwritten Mathematical Expression Recognition<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-648\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-648\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-647\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tRecurrent Temporal Aggregation Framework for Deep Video 
Inpainting\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-647\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-648\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Dahun Kim, KAIST<\/p>\n<ul>\n<li>To remove unwanted objects from a video<\/li>\n<li>Frame-by-frame image inpainting<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-650\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-650\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-649\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tRelational Knowledge Distillation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-649\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-650\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Wonpyo Park, Dongju Kim, and Minsu Cho, POSTECH<\/p>\n<p>Yan Lu, Microsoft Research<\/p>\n<p>Knowledge distillation aims at transferring knowledge acquired in one model (a teacher) to another model (a student) that is typically smaller. Previous approaches can be expressed as a form of training the student to mimic output activations of individual data examples represented by the teacher. We introduce a novel approach, dubbed relational knowledge distillation (RKD), that transfers mutual relations of data examples instead. 
For concrete realizations of RKD, we propose distance-wise and angle-wise distillation losses that penalize structural differences in relations. Experiments conducted on different tasks show that the proposed method improves trained student models by a significant margin. For metric learning in particular, it allows students to outperform their teachers, achieving state-of-the-art results on standard benchmark datasets.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-652\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-652\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-651\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tResearch on Deep Learning Framework for Julia\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-651\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-652\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Yu Zhang, Yuxiang Zhang, Yitong Huang, Xing Guo, University of Science and Technology of China<\/p>\n<p>Research on Deep Learning Framework for Julia<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-654\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-654\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-653\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tSARA: Self-Replay Augmented Record and Replay for Android in Industrial Cases\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-653\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-654\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Ting Liu, Xi&#8217;an Jiaotong University<\/p>\n<p>SARA: Self-Replay Augmented Record and Replay for Android in Industrial Cases<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-656\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-656\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-655\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tsecGAN: A Cycle-Consistent GAN for Securely-Recoverable Video Transformation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-655\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-656\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Fengyuan Xu, Nanjing University<\/p>\n<p>Video transformation needs to meet new requirements in actual use, such as privacy protection under surveillance scenarios:<\/p>\n<ul>\n<li>The transformed video can be restored to the original 
ones.<\/li>\n<li>The transformed video can only be restored by the authorized party.<\/li>\n<\/ul>\n<p>We need a unified translation style and a unique steganography scheme.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-658\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-658\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-657\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tStyleMe: An AI Fashion Consultant for Personal Shopping and Style Advice\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-657\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-658\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Shintami Chusnul Hidayati, Institut Teknologi Sepuluh Nopember; Wen-Huang Cheng, National Chiao Tung University; Jianlong Fu, Microsoft Research<\/p>\n<p>StyleMe: An AI Fashion Consultant for Personal Shopping and Style Advice<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-660\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-660\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-659\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tSystem support for 
designing efficient gradient compression algorithms for distributed DNN training\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-659\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-660\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Cheng Li, University of Science and Technology of China<\/p>\n<p>System support for designing efficient gradient compression algorithms for distributed DNN training<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-662\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-662\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-661\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tTemporal Cause and Effect Localization on Car Crash Videos Via Multi-Task Neural Architecture Search\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-661\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-662\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Tackgeun You, POSTECH and Bohyung Han, Seoul National University<\/p>\n<ul>\n<li>Introduce a benchmark for temporal cause and effect localization on car crash videos.<\/li>\n<li>Propose a multi-task baseline for simultaneously conducting temporal cause and effect localization.<\/li>\n<li>Propose a multi-task neural architecture search that decides to share or separate building 
blocks.<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-664\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-664\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-663\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tTowards a Deep and Unified Understanding of Deep Neural Models in NLP\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-663\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-664\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Chaoyu Guan, Shanghai Jiao Tong University<\/p>\n<p>A unified information-based measure: it quantifies the information of each input word that is encoded in an intermediate layer of a deep NLP model.<\/p>\n<p>The information-based measure serves as a tool for:<\/p>\n<ul>\n<li>Evaluating different explanation methods.<\/li>\n<li>Explaining different deep NLP models.<\/li>\n<\/ul>\n<p>This measure enriches the capability of explaining DNNs.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-666\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-666\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-665\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tTowards Complex Text-to-SQL in Cross-Domain Database with Intermediate Representation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-665\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-666\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Ting Liu, Xi&#8217;an Jiaotong University<\/p>\n<p>Towards Complex Text-to-SQL in Cross-Domain Database with Intermediate Representation<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-668\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-668\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-667\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tVibration-Mediated Sensing Techniques for Tangible Interaction\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-667\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-668\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter:<\/strong> Seungmoon Choi and Seungjae Oh, POSTECH<\/p>\n<ul>\n<li>Recognize contact finger(s) on any rigid surfaces by decoding transmitted frequencies<\/li>\n<li>Identify a grasped object by visualizing the propagation dynamics of 
vibration<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-670\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-670\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-669\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tVideo Generation from Natural Language by Decomposing the Components of Video: Background, Object, and Action\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-669\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-670\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Kibeom Hong and Hyeran Byun, Yonsei University<\/p>\n<ul>\n<li style=\"list-style-type: none\">\n<ul>\n<li>Video can be created by separating Background and Foreground, and Foreground can be divided into Object and Action.<\/li>\n<li>We can get background and foreground information for video generation from text.<\/li>\n<li>In the image domain, previous works [1,2,3] have studied image generation from text extensively, and [4,5,6] expanded this idea to the video domain.<\/li>\n<li>In this work, we create a video from these three components in order to control its parts in a more realistic and fine-grained way.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-672\"}' data-wp-init=\"callbacks.init\">\n\t\t<div 
class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-672\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-671\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tVideo Dialog via Progressive Inference and Cross-Transformer\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-671\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-672\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Zhou Zhao, Zhejiang University<\/p>\n<p>Video dialog is a new and challenging task that requires an agent to answer questions by combining video information with dialog history. Unlike single-turn video question answering, video dialog also depends on the dialog history, which often provides contextual information for the question. Existing visual dialog methods mainly use RNNs to encode the dialog history as a single vector representation, which can be too coarse. More advanced methods utilize hierarchical structures, attention, and memory mechanisms, but they still lack an explicit reasoning process. In this paper, we introduce a novel progressive inference mechanism for video dialog, which progressively updates query information based on dialog history and video content until the agent considers the information sufficient and unambiguous. To tackle the multimodal fusion problem, we propose a cross-transformer module, which learns more fine-grained and comprehensive interactions both inside and between the modalities. Besides answer generation, we also consider question generation, which is more challenging but significant for a complete video dialog system. 
We evaluate our method on two large-scale datasets, and extensive experiments show the effectiveness of our method.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-674\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-674\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-673\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tWidar 3.0: Zero-Effort Cross-Domain Gesture Recognition with Wi-Fi\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-673\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-674\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Zheng Yang, Tsinghua University<\/p>\n<p>Widar 3.0: Zero-Effort Cross-Domain Gesture Recognition with Wi-Fi<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-676\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-676\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-675\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tYour Tweets Reveal What You Like: Introducing Cross-media Content Information into Multi-domain 
Recommendation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-675\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-676\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Min Zhang, Tsinghua University<\/p>\n<p>The key to solving this problem is to conduct better user profiling.<\/p>\n<p>How about off-topic features from other platforms, such as tweets?<\/p>\n<ul>\n<li style=\"list-style-type: none\">\n<ul>\n<li style=\"list-style-type: none\">\n<ul>\n<li>On-topic features are helpful in understanding users\u2019 interests and preferences.<\/li>\n<li>Off-topic features are able to describe users too.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>We will try to introduce these off-topic features (tweets) into different rating prediction algorithms.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<h3><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-293876\" style=\"vertical-align: top\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/icon-address.png\" alt=\"21ccc-icon-5\" width=\"30\" height=\"30\" \/>\u00a0<strong>Microsoft Address<\/strong><\/h3>\n<p>Venue: Tower 1-1F, No. 
5 Danling Street, Haidian District, Beijing, China<\/p>\n<p>\u5730\u5740\uff1a\u4e2d\u56fd\u5317\u4eac\u6d77\u6dc0\u533a\u4e39\u68f1\u88575\u53f7\u5fae\u8f6f\u5927\u53a61\u53f7\u697c<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p><ul id='gallery-1' class='gallery galleryid-0 gallery-columns-2 gallery-size-medium stripped ms-row fixed-small'><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5484_095210-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5484_095210-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5484_095210-300x169.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4796_141029-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4796_141029-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4796_141029-300x201.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4892_143429-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4892_143429-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" 
src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4892_143429-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4873_142816-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4873_142816-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4873_142816-300x201.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4853_142431-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4853_142431-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4853_142431-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4828_141729-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4828_141729-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4828_141729-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4812_141434-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4812_141434-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4812_141434-300x204.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5107_155210-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5107_155210-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5107_155210-300x200.jpg\" alt=\"a man holding a microphone\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5216_164751-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5216_164751-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5216_164751-300x204.jpg\" alt=\"a man holding a guitar\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5135_161527-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5135_161527-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5135_161527-300x194.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5256_165720-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5256_165720-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5256_165720-300x198.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4908_143958-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4908_143958-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4908_143958-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4935_145935-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4935_145935-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" 
src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4935_145935-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4951_150340-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4951_150340-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4951_150340-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4964_150600-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4964_150600-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4964_150600-300x203.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4985_151118-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4985_151118-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4985_151118-300x201.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4991_151456-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4991_151456-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4991_151456-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5010_151930-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5010_151930-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5010_151930-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5026_152922-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5026_152922-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5026_152922-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5045_153748-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5045_153748-scaled.jpg\" data-caption=\"\" 
class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5045_153748-300x202.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5058_153915-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5058_153915-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5058_153915-300x202.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5063_153948-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5063_153948-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5063_153948-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5066_154304-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5066_154304-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5066_154304-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 
xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5069_154334-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5069_154334-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5069_154334-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5094_154830-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5094_154830-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5094_154830-300x202.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5150_161747-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5150_161747-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5150_161747-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5172_162144-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5172_162144-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5172_162144-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5204_164458-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5204_164458-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5204_164458-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5209_164611-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5209_164611-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5209_164611-300x204.jpg\" alt=\"a man wearing a blue shirt\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5220_164922-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5220_164922-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" 
src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5220_164922-300x196.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5235_165314-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5235_165314-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5235_165314-300x204.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5239_165329-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5239_165329-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5239_165329-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5242_165417-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5242_165417-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5242_165417-300x193.jpg\" alt=\"a group of people posing for the camera\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 
xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5246_165556-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5246_165556-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5246_165556-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5251_165637-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5251_165637-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5251_165637-300x199.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5261_165801-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5261_165801-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5261_165801-300x205.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5265_165830-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5265_165830-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5265_165830-300x204.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5267_165941-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5267_165941-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5267_165941-300x195.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5278_170321-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5278_170321-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5278_170321-300x202.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5280_170349-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5280_170349-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" 
src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5280_170349-300x200.jpg\" alt=\"a group of people posing for the camera\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5281_170401-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5281_170401-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5281_170401-300x187.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5287_170858-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5287_170858-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5287_170858-300x203.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5292_171251-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5292_171251-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5292_171251-300x201.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 
xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5297_171556-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5297_171556-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5297_171556-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5327_090112-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5327_090112-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5327_090112-300x200.jpg\" alt=\"Hsiao-Wuen Hon wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5335_090407-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5335_090407-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5335_090407-300x200.jpg\" alt=\"Hsiao-Wuen Hon wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5358_091013-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5358_091013-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5358_091013-300x200.jpg\" alt=\"Hsiao-Wuen Hon wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5398_091646-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5398_091646-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5398_091646-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5421_094247-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5421_094247-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5421_094247-300x169.jpg\" alt=\"Hsiao-Wuen Hon wearing a suit and tie posing for a photo\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5423_094257-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5423_094257-scaled.jpg\" data-caption=\"\" 
class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5423_094257-300x169.jpg\" alt=\"Hsiao-Wuen Hon standing posing for the camera\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5426_094309-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5426_094309-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5426_094309-300x169.jpg\" alt=\"Hsiao-Wuen Hon wearing a suit and tie posing for a photo\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5430_094323-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5430_094323-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5430_094323-300x169.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5433_094340-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5433_094340-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" 
src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5433_094340-300x169.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5436_094354-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5436_094354-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5436_094354-300x169.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5438_094405-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5438_094405-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5438_094405-300x169.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5441_094417-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5441_094417-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5441_094417-300x169.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5444_094431-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5444_094431-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5444_094431-300x169.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5448_094442-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5448_094442-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5448_094442-300x169.jpg\" alt=\"Hsiao-Wuen Hon wearing a suit and tie posing for a photo\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5452_094457-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5452_094457-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5452_094457-300x169.jpg\" alt=\"Hsiao-Wuen Hon wearing a suit and tie posing for a photo\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5455_094507-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5455_094507-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5455_094507-300x169.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5460_094559-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5460_094559-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5460_094559-300x169.jpg\" alt=\"Monika Yulianti, Tian Pengfei, Hsiao-Wuen Hon posing for a photo\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5467_094653-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5467_094653-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5467_094653-300x169.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5538_100935-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5538_100935-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img 
decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5538_100935-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5553_101525-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5553_101525-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5553_101525-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5578_102003-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5578_102003-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5578_102003-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5588_102119-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5588_102119-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5588_102119-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 
xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5602_102314-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5602_102314-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5602_102314-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5621_103734-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5621_103734-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5621_103734-300x169.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5655_110157-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5655_110157-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5655_110157-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5574_101831-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5574_101831-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5574_101831-200x300.jpg\" alt=\"a man wearing a suit and tie holding a cell phone\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5662_110345-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5662_110345-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5662_110345-300x200.jpg\" alt=\"a man wearing a blue shirt\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5701_112037-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5701_112037-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5701_112037-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5719_113101-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5719_113101-scaled.jpg\" data-caption=\"\" 
class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5719_113101-300x200.jpg\" alt=\"a person posing for the camera\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5743_113915-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5743_113915-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5743_113915-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5804_121551-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5804_121551-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5804_121551-300x200.jpg\" alt=\"Rong Xu wearing a suit and tie standing in a room\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5805_121614-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5805_121614-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5805_121614-300x200.jpg\" 
alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5806_121636-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5806_121636-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5806_121636-300x200.jpg\" alt=\"a group of people standing in front of a computer\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5807_121643-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5807_121643-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5807_121643-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5808_121720-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5808_121720-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5808_121720-300x200.jpg\" alt=\"a man standing in front of a computer\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5810_121807-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5810_121807-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5810_121807-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5812_121854-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5812_121854-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5812_121854-300x200.jpg\" alt=\"Lip-Bu Tan et al. standing in a room\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5815_121901-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5815_121901-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5815_121901-300x200.jpg\" alt=\"Lip-Bu Tan et al. 
standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5816_121913-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5816_121913-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5816_121913-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5819_124414-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5819_124414-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5819_124414-300x169.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5821_124508-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5821_124508-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5821_124508-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5825_124546-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5825_124546-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5825_124546-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5827_124627-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5827_124627-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5827_124627-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5830_132319-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5830_132319-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5830_132319-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5832_132336-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5832_132336-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5832_132336-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5833_132344-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5833_132344-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5833_132344-300x200.jpg\" alt=\"Monika Yulianti et al. standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5834_132401-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5834_132401-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5834_132401-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5837_132423-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5837_132423-scaled.jpg\" data-caption=\"\" 
class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5837_132423-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5839_132459-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5839_132459-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5839_132459-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5840_132519-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5840_132519-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5840_132519-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5841_132525-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5841_132525-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5841_132525-300x200.jpg\" alt=\"a group of people standing next to a 
man in a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5842_132540-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5842_132540-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5842_132540-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5843_132559-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5843_132559-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5843_132559-300x200.jpg\" alt=\"a woman standing next to a man in a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5847_132657-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5847_132657-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5847_132657-300x200.jpg\" alt=\"a group of people standing in front of a building\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 
s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5848_132719-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5848_132719-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5848_132719-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5963_143056-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5963_143056-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5963_143056-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5970_143228-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5970_143228-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5970_143228-300x200.jpg\" alt=\"a group of people sitting at a table using a laptop computer\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5974_143337-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5974_143337-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5974_143337-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5981_143450-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5981_143450-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5981_143450-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5984_143503-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5984_143503-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5984_143503-300x200.jpg\" alt=\"a group of people sitting at a table using a laptop computer\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5985_143512-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5985_143512-scaled.jpg\" data-caption=\"\" 
class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5985_143512-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5988_143540-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5988_143540-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5988_143540-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5994_143650-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5994_143650-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5994_143650-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5997_143710-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5997_143710-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5997_143710-300x200.jpg\" alt=\"Xu 
Shousheng et al. sitting at a table\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6009_144012-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6009_144012-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6009_144012-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6016_144106-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6016_144106-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6016_144106-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6018_144302-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6018_144302-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6018_144302-300x200.jpg\" alt=\"a group of people posing for the camera\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 
s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6022_144359-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6022_144359-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6022_144359-300x200.jpg\" alt=\"a man looking at the camera\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6027_144509-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6027_144509-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6027_144509-300x200.jpg\" alt=\"a man sitting at a table using a laptop computer\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6029_144609-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6029_144609-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6029_144609-300x200.jpg\" alt=\"a group of people sitting at a table using a laptop\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6031_144720-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6031_144720-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6031_144720-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6047_145101-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6047_145101-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6047_145101-300x200.jpg\" alt=\"a group of people looking at a laptop\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6050_145115-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6050_145115-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6050_145115-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6051_145125-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6051_145125-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6051_145125-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6058_145418-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6058_145418-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6058_145418-300x200.jpg\" alt=\"a man looking at the camera\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6060_145530-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6060_145530-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6060_145530-300x200.jpg\" alt=\"a group of people sitting at a desk\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6076_150914-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6076_150914-scaled.jpg\" data-caption=\"\" 
class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6076_150914-300x200.jpg\" alt=\"a person standing posing for the camera\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6080_151113-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6080_151113-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6080_151113-300x200.jpg\" alt=\"a group of people sitting at a desk\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6086_151332-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6086_151332-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6086_151332-300x200.jpg\" alt=\"a group of people in a room\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6093_152405-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6093_152405-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6093_152405-300x200.jpg\" alt=\"\" 
class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6099_152610-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6099_152610-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6099_152610-300x200.jpg\" alt=\"a man standing in front of a screen\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6106_152801-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6106_152801-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6106_152801-300x200.jpg\" alt=\"a group of people sitting at a desk\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6116_153342-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6116_153342-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6116_153342-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6119_153502.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6119_153502.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6119_153502-200x300.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6121_154027-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6121_154027-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6121_154027-300x200.jpg\" alt=\"a person posing for the camera\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6142_154819-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6142_154819-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6142_154819-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6150_155029-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6150_155029-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6150_155029-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6158_155959-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6158_155959-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6158_155959-300x200.jpg\" alt=\"a group of people looking at a laptop\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6159_160013-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6159_160013-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6159_160013-300x200.jpg\" alt=\"a group of people looking at a laptop\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6164_160459-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6164_160459-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" 
src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6164_160459-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6169_160642-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6169_160642-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6169_160642-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6181_161214-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6181_161214-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6181_161214-300x200.jpg\" alt=\"a person wearing glasses\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6194_161535-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6194_161535-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6194_161535-300x200.jpg\" alt=\"a group of people sitting at a desk\" class=\"db full-width\" \/><\/a><\/li><li 
class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6198_163048-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6198_163048-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6198_163048-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6217_164042-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6217_164042-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6217_164042-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6241_170824-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6241_170824-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6241_170824-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6250_171041-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6250_171041-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6250_171041-300x200.jpg\" alt=\"a man wearing glasses and a blue shirt\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4770_140241-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4770_140241-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4770_140241-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4789_140724-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4789_140724-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4789_140724-300x203.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5850_132738.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5850_132738.jpg\" 
data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5850_132738-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5851_132747.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5851_132747.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5851_132747-300x200.jpg\" alt=\"a man standing in front of a screen\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5853_132817.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5853_132817.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5853_132817-300x200.jpg\" alt=\"a person wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5854_132829.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5854_132829.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5854_132829-300x200.jpg\" alt=\"a 
group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5856_132914.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5856_132914.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5856_132914-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5858_132944.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5858_132944.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5858_132944-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5860_133003.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5860_133003.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5860_133003-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5861_133011.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5861_133011.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5861_133011-300x200.jpg\" alt=\"a person standing posing for the camera\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5862_133033.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5862_133033.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5862_133033-300x200.jpg\" alt=\"a group of people standing next to a person\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5863_133045.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5863_133045.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5863_133045-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5867_133123.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5867_133123.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5867_133123-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5868_133150.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5868_133150.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5868_133150-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5871_133227.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5871_133227.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5871_133227-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5874_133242.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5874_133242.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" 
src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5874_133242-300x200.jpg\" alt=\"a woman standing in a room\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5875_133251.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5875_133251.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5875_133251-300x200.jpg\" alt=\"a group of people looking at a laptop\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5876_133302.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5876_133302.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5876_133302-300x200.jpg\" alt=\"a person wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5877_133316.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5877_133316.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5877_133316-300x200.jpg\" alt=\"a group of people sitting at a desk in front of a computer\" class=\"db full-width\" 
\/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5883_140732.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5883_140732.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5883_140732-300x200.jpg\" alt=\"a group of people sitting at a desk\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5891_140854.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5891_140854.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5891_140854-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5902_141317.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5902_141317.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5902_141317-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5928_142118.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5928_142118.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5928_142118-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5935_142520.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5935_142520.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5935_142520-300x200.jpg\" alt=\"a man sitting at a table using a laptop computer\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5948_142750.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5948_142750.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5948_142750-300x200.jpg\" alt=\"a man looking at the camera\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6188_161355.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6188_161355.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6188_161355-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6210_163540.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6210_163540.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6210_163540-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6213_163553.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6213_163553.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6213_163553-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6220_170022.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6220_170022.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6220_170022-300x169.jpg\" alt=\"a 
group of people in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6221_170214.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6221_170214.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6221_170214-300x200.jpg\" alt=\"Prof Lawrence Jun Zhang wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6183_161222.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6183_161222.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6183_161222-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/1.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/1.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/1-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6266_182202.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6266_182202.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6266_182202-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6270_182317.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6270_182317.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6270_182317-300x200.jpg\" alt=\"a screen shot of a man\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6273_182403.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6273_182403.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6273_182403-300x200.jpg\" alt=\"a screenshot of a man\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6276_182447.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6276_182447.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" 
src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6276_182447-300x200.jpg\" alt=\"a screenshot of a man\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6277_182523.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6277_182523.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6277_182523-300x169.jpg\" alt=\"a screen shot of a person\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6314_183444.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6314_183444.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6314_183444-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6280_182549.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6280_182549.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6280_182549-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6282_182619.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6282_182619.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6282_182619-300x200.jpg\" alt=\"a group of people posing for the camera\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6288_182711.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6288_182711.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6288_182711-300x200.jpg\" alt=\"a screenshot of a man\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6293_182903.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6293_182903.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6293_182903-300x200.jpg\" alt=\"a screen shot of a person\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6295_182942.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6295_182942.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6295_182942-300x200.jpg\" alt=\"a screen shot of a man\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/2.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/2.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/2-300x200.jpg\" alt=\"a screen shot of a person\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6300_183040-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6300_183040-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6300_183040-300x200.jpg\" alt=\"a screenshot of a man\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6304_183146-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6304_183146-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" 
src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6304_183146-300x200.jpg\" alt=\"a screenshot of a man\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6308_183249-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6308_183249-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6308_183249-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6290_182806-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6290_182806-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6290_182806-300x200.jpg\" alt=\"a screenshot of a computer screen\" class=\"db full-width\" \/><\/a><\/li>\n\t\t\t<br style='clear: both' \/>\n\t\t<\/ul>\n<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Venue: Microsoft Research Asia, Beijing QR Code: Opens in a new tab The Academic Day 2019 event brings together the intellectual power of researchers from across Microsoft Research Asia and the academic community to attain a shared understanding of the contemporary ideas and issues facing the field of tech. 
Together, we will advance the frontier [&hellip;]<\/p>\n","protected":false},"featured_media":615285,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_startdate":"2019-11-07","msr_enddate":"2019-11-08","msr_location":"Beijing, China","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"","msr_event_link_redirect":false,"msr_event_time":"","msr_hide_region":false,"msr_private_event":false,"msr_hide_image_in_river":0,"footnotes":""},"research-area":[],"msr-region":[197903],"msr-event-type":[197944],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-613563","msr-event","type-msr-event","status-publish","has-post-thumbnail","hentry","msr-region-asia-pacific","msr-event-type-hosted-by-microsoft","msr-locale-en_us"],"msr_about":"<!-- wp:msr\/event-details {\"title\":\"MSRA Academic Day 2019\",\"backgroundColor\":\"grey\",\"image\":{\"id\":615285,\"url\":\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/msra-academic-day-2019-banner-4.jpg\",\"alt\":\"\"}} \/-->\n\n<!-- wp:msr\/content-tabs --><!-- wp:msr\/content-tab {\"title\":\"About\"} --><!-- wp:freeform --><p><strong>Venue:<\/strong> Microsoft Research Asia, Beijing<\/p>\n<p><strong>QR Code<\/strong>:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-615825\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/msra-academic-day-2019-qr-code.png\" alt=\"\" width=\"130\" height=\"130\" srcset=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/msra-academic-day-2019-qr-code.png 256w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/msra-academic-day-2019-qr-code-150x150.png 150w, 
https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/msra-academic-day-2019-qr-code-180x180.png 180w\" sizes=\"auto, (max-width: 130px) 100vw, 130px\" \/><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p>The Academic Day 2019 event brings together the intellectual power of researchers from across Microsoft Research Asia and the academic community to attain a shared understanding of the contemporary ideas and issues facing the field of tech. Together, we will advance the frontier of technology towards an ideal world of computing.<\/p>\n<p>Through our Microsoft Research Outreach Programs, Microsoft Research Asia has been actively collaborating with academic institutions to promote and progress further development in computer science and other technology domains. We have an ever-expanding partnership with leading universities across the Asia Pacific region to advance state-of-the-art research through various programs and initiatives.<\/p>\n<p>We are excited for \u201cMicrosoft Research Asia Academic Day 2019\u201d to facilitate comprehensive and insightful exchanges between Microsoft Research Asia and the academic community.<\/p>\n<h2>Program Chairs<\/h2>\n<ul class=\"msr-people-list stripped ms-row no-margin-bottom\">\n<li class=\"xs-col-12-24 s-col-8-24 m-col-6-24 l-col-8-24 margin-bottom-sp3\"><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/miran_lee.png\" alt=\"\" width=\"300\" height=\"300\" \/>\n<p class=\"body-alt no-margin-bottom\">Miran Lee<\/p>\n<p class=\"body-alt no-margin-bottom\">Outreach Director<\/p>\n<\/li>\n<li class=\"xs-col-12-24 s-col-8-24 m-col-6-24 l-col-8-24 margin-bottom-sp3\"><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image\" 
src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/yongqiang_xiong.jpg\" alt=\"Portrait of Yongqiang Xiong\" width=\"150\" height=\"150\" \/>\n<p class=\"body-alt no-margin-bottom\">Yongqiang Xiong<\/p>\n<p class=\"body-alt no-margin-bottom\">Principal Research Manager<\/p>\n<\/li>\n<li class=\"xs-col-12-24 s-col-8-24 m-col-6-24 l-col-8-24 margin-bottom-sp3\"><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/07\/lyx-2019.png\" alt=\"\" width=\"150\" height=\"150\" \/>\n<p class=\"body-alt no-margin-bottom\">Yunxin Liu<\/p>\n<p class=\"body-alt no-margin-bottom\">Principal Research Manager<\/p>\n<\/li>\n<li class=\"xs-col-12-24 s-col-8-24 m-col-6-24 l-col-8-24 margin-bottom-sp3\"><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2016\/08\/avatar_user__1470987161-180x180.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\n<p class=\"body-alt no-margin-bottom\">Tao Qin<\/p>\n<p class=\"body-alt no-margin-bottom\">Senior Principal Research Manager<\/p>\n<\/li>\n<li class=\"xs-col-12-24 s-col-8-24 m-col-6-24 l-col-8-24 margin-bottom-sp3\"><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2016\/07\/avatar_user__1468038567-180x180.png\" alt=\"\" width=\"150\" height=\"150\" \/>\n<p class=\"body-alt no-margin-bottom\">Wenjun Zeng<\/p>\n<p class=\"body-alt no-margin-bottom\">Senior Principal Research Manager<\/p>\n<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Agenda\"} --><!-- wp:freeform --><h2>November 7<\/h2>\n<p>\t<div 
data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-208\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-208\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-207\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tWorkshop on System and Networking for 
AI\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-207\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-208\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Abstract<\/strong>: We live in a world of connected entities including various systems (ranging from big cloud and edge systems to individual memory and disk systems) networked together. Innovations in systems and networking are key driving forces in the era of big data and artificial intelligence, to empower advanced intelligent algorithms with reliable, secure, scalable and efficient computing capacity to process huge volumes of data. We have witnessed the significant progress in cloud systems, and recently, edge computing, in particular AI on Edge, has attracted increasing attention from both academia and industry. This workshop aims to report and discuss the most recent progress and trends on general system and networking area, especially on various infrastructure support for machine learning systems.<\/p>\n<p><strong>Event owners<\/strong>: <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/yunliu\/\">Yunxin Liu<\/a>, <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/yqx\/\">Yongqiang Xiong<\/a><\/p>\n<table class=\"msr-table-schedule\" style=\"border-spacing: inherit;border-collapse: collapse\" width=\"100%\">\n<thead class=\"thead\">\n<tr class=\"tr\">\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 20%\" width=\"20%\">Time (CST)<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 30%\" width=\"30%\">Workshops<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 35%\" width=\"35%\">Speaker<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 15%\" 
width=\"15%\">Location<\/th>\n<\/tr>\n<\/thead>\n<tbody class=\"tbody\">\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">2:00 PM\u20132:10 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Welcome and introductions<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Yunxin Liu &amp; Yongqiang Xiong, Microsoft Research<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Dong Zhi Men, Microsoft Tower 1-1F<\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">2:10 PM\u20133:25 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Research at Microsoft (25 mins per talk x3)<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\n<ul>\n<li>Peng Cheng, Microsoft Research<\/li>\n<li>Ting Cao, Microsoft Research<\/li>\n<li>Quanlu Zhang, Microsoft Research<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">3:25 PM\u20134:40 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Research talks (25 mins per talk x3)<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\n<ul>\n<li>Chuan Wu, University of Hong Kong<\/li>\n<li>Xuanzhe Liu, Peking University<\/li>\n<li>Rajesh Krishna Balan, Singapore Management University<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">4:40 PM\u20135:20 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><p>Panel 
discussion<\/p>\n<p>Title: \u201cWhat\u2019s missing in system &amp; networking for AI?\u201d<\/p>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\n<ul>\n<li>Yunxin Liu, Microsoft Research (Moderator)<\/li>\n<li>Yongqiang Xiong, Microsoft Research (Moderator)<\/li>\n<li>Chuan Wu, University of Hong Kong<\/li>\n<li>Xuanzhe Liu, Peking University<\/li>\n<li>Rajesh Krishna Balan, Singapore Management University<\/li>\n<li>Peng Cheng, Microsoft Research<\/li>\n<li>Ting Cao, Microsoft Research<\/li>\n<li>Quanlu Zhang, Microsoft Research<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">5:20 PM\u20135:30 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Wrap-up and closing<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-210\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-210\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-209\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tWorkshop on Low-Resource Machine 
Learning\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-209\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-210\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Abstract<\/strong>: Deep learning has greatly driven this wave of AI. While deep learning has made many breakthroughs in recent years, its success heavily relies on big labeled data, big model, and big computing. As edge computing becomes the trend and more and more IoT devices become available, deep learning faces the low-resource challenge: how to learn from limited labeled data, with limited model size, and limited computation resources. The theme of this workshop is low-resource machine learning: learning from low-resource data, learning compact models, and learning with limited computational resources. This workshop aims to report latest progress and discuss the trends and frontiers of research on low-resource machine learning.<\/p>\n<p><strong>Event owner<\/strong>: <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/taoqin\/\">Tao Qin<\/a><\/p>\n<table class=\"msr-table-schedule\" style=\"border-spacing: inherit;border-collapse: collapse\" width=\"100%\">\n<thead class=\"thead\">\n<tr class=\"tr\">\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 20%\" width=\"20%\">Time (CST)<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 30%\" width=\"30%\">Workshops<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 35%\" width=\"35%\">Speaker<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 15%\" width=\"15%\">Location<\/th>\n<\/tr>\n<\/thead>\n<tbody class=\"tbody\">\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">2:00 PM\u20132:10 
PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Welcome and introductions<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Tao Qin, Microsoft Research<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Xi Zhi Men, Microsoft Tower 1-1F<\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">2:10 PM\u20133:25 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Research at Microsoft (25 mins per talk x3)<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\n<ul>\n<li>Yingce Xia, Microsoft Research<\/li>\n<li>Xu Tan, Microsoft Research<\/li>\n<li>Guolin Ke, Microsoft Research<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">3:25 PM\u20134:40 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Research talks (25 mins per talk x3)<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\n<ul>\n<li>Jaegul Choo, Korea University<\/li>\n<li>Sinno Jialin Pan, Nanyang Technological University<\/li>\n<li>Sung Ju Hwang, KAIST<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">4:40 PM\u20135:20 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><p>Panel discussion<\/p>\n<p>Title: \u201cChallenges and Future of Low-Resource Machine Learning\u201d<\/p>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\n<ul>\n<li>Tao Qin, Microsoft 
Research (Moderator)<\/li>\n<li>Jaegul Choo, Korea University<\/li>\n<li>Sung Ju Hwang, KAIST<\/li>\n<li>Shujie Liu, Microsoft Research<\/li>\n<li>Dongdong Zhang, Microsoft Research<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">5:20 PM\u20135:30 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Wrap-up and closing<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-212\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-212\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-211\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tWorkshop on Multimodal Representation Learning and Applications\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-211\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-212\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Abstract<\/strong>: We live in a world of multimedia (text, image, video, audio, sensor data, 3D, etc.). These modalities are integral components of real-world events and applications. 
A full understanding of multimedia relies heavily on feature learning, entity recognition, knowledge, reasoning, language representation, etc. Cross-modal learning, which requires joint feature learning and cross-modal relationship modeling, has attracted increasing attention from both academia and industry. This workshop aims to report and discuss the most recent progress and trends on multimodal representation learning for multimedia applications.<\/p>\n<p><strong>Event owners<\/strong>: <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<\/a>, <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/nanduan\/\">Nan Duan<\/a><\/p>\n<table class=\"msr-table-schedule\" style=\"border-spacing: inherit;border-collapse: collapse\" width=\"100%\">\n<thead class=\"thead\">\n<tr class=\"tr\">\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 20%\" width=\"20%\">Time (CST)<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 30%\" width=\"30%\">Workshops<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 35%\" width=\"35%\">Speaker<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 15%\" width=\"15%\">Location<\/th>\n<\/tr>\n<\/thead>\n<tbody class=\"tbody\">\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">2:00 PM\u20132:10 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Welcome and introductions<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Wenjun Zeng, Microsoft Research<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Tian An Men, Microsoft Tower 1-1F<\/td>\n<td><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">2:10 PM\u20133:25 PM<\/td>\n<td style=\"padding: 
8px;vertical-align: middle;border-bottom: 1px solid #000000\">Research at Microsoft (25 mins per talk x3)<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\n<ul>\n<li>Nan Duan, Microsoft Research<\/li>\n<li>Yue Cao, Microsoft Research<\/li>\n<li>Chong Luo, Microsoft Research<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">3:25 PM\u20134:40 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Research talks (25 mins per talk x3)<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\n<ul>\n<li>Gunhee Kim, Seoul National University<\/li>\n<li>Winston Hsu, National Taiwan University<\/li>\n<li>Jiwen Lu, Tsinghua University<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">4:40 PM\u20135:20 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Panel discussion<\/p>\n<p>Title: Opportunities and Challenges for Cross-Modal Learning<\/p>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\n<ul>\n<li>Wenjun Zeng, Microsoft Research (Moderator)<\/li>\n<li>Xilin Chen, Chinese Academy of Sciences<\/li>\n<li>Winston Hsu, National Taiwan University<\/li>\n<li>Gunhee Kim, Seoul National University<\/li>\n<li>Nan Duan, Microsoft Research<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">5:20 PM\u20135:30 PM<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid
#000000\">Wrap-up and closing<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<h2>November 8<\/h2>\n<table class=\"msr-table-schedule\" style=\"border-spacing: inherit;border-collapse: collapse\" width=\"100%\">\n<thead class=\"thead\">\n<tr class=\"tr\">\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 20%\" width=\"20%\">Time (CST)<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 30%\" width=\"30%\">Workshops<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 35%\" width=\"35%\">Speaker<\/th>\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 15%\" width=\"15%\">Location<\/th>\n<\/tr>\n<\/thead>\n<tbody class=\"tbody\">\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">09:00 \u2013 09:30<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Welcome &amp; MSRA Overview<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Hsiao-Wuen Hon<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Gu Gong, Microsoft Tower 1-1F<\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">09:30 \u2013 09:40<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Fellowship Award Ceremony<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Presenter: Hsiao-Wuen Hon<\/td>\n<td style=\"padding: 8px;vertical-align: 
middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">09:40 \u2013 10:00<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Photo session &amp; Break<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">10:00 \u2013 10:40<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Panel Discussion<\/p>\n<p>Title: \u201cHow to foster a computer scientist\u201d<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Moderator: Tim Pan, Microsoft Research<\/p>\n<p>Panelists:<\/p>\n<ul>\n<li>Bohyung Han, Seoul National University<\/li>\n<li>Junichi Rekimoto, The University of Tokyo<\/li>\n<li>Winston Hsu, National Taiwan University<\/li>\n<li>Xin Tong, Microsoft Research<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">10:40 \u2013 11:55<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Technology Showcase by Microsoft Research Asia (5)<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\n<ul>\n<li>\u201cOneOCR For Digital Transformation\u201d by Qiang Huo<\/li>\n<li>\u201cNN grammar check\u201d by Tao Ge<\/li>\n<li>\u201cAutoSys: Learning based approach for system optimization\u201d by Mao Yang<\/li>\n<li>\u201cDual learning and its application in translation and speech from ML\u201d by Tao Qin (Yingce Xia and Xu Tan)<\/li>\n<li>\u201cSpreadsheet Intelligence for Ideas in
Excel\u201d by Shi Han<\/li>\n<\/ul>\n<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">12:00 \u2013 14:00<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Technology Showcase by Academic Collaborators<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Lunch, Microsoft Tower1-1F<\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">14:00 \u2013 17:30<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Breakout Sessions<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Language and Knowledge<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leader: Xing Xie<\/p>\n<p>Speakers: Seung-won Hwang, Min Zhang, Lei Chen, Masatoshi Yoshikawa, Shou-De Lin, Rui Yan, Hiroaki Yamane, Chenhui Chu, Tadashi Nomoto<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Zhong Guan Cun, Microsoft Tower 2-4F<\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">System and Networking<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leaders: Lidong Zhou, Yunxin 
Liu<\/p>\n<p>Speakers: Insik Shin, Wenfei Wu, Rajesh Krishna Balan, Youyou Lu, Chuck Yoo, Yu Zhang, Atsuko Miyaji, Jingwen Leng, Yao Guo, Heejo Lee, Cheng Li<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">San Li Tun, Microsoft Tower 2-4F<\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Computer Vision<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leader: Wenjun Zeng<\/p>\n<p>Speakers: Gunhee Kim, Tianzhu Zhang, Yonggang Wen, Wen-Huang Cheng, Jiaying Liu, Bohyung Han, Wei-Shi Zheng, Jun Takamatsu, Xueming Qian<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Qian Men, Microsoft Tower 2-4F<\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Graphics<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leader: Xin Tong<\/p>\n<p>Speakers: Min H. 
Kim, Seungyong Lee, Sung-eui Yoon<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Di Tan, Microsoft Tower 2-4F<\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Multimedia<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leader: Yan Lu<\/p>\n<p>Speakers: Seung Ah Lee, Huanjing Yue, Hiroki Watanabe, Minsu Cho, Zhou Zhao, Seungmoon Choi<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Gu Lou, Microsoft Tower 2-4F<\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Healthcare<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leader: Eric Chang<\/p>\n<p>Speakers: Ryo Furukawa, Winston Hsu<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Dong Cheng, Microsoft Tower 2-4F<\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Data, Knowledge, and Intelligence<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leaders: Jian-Guang Lou, Qingwei Lin<\/p>\n<p>Speakers: Shixia Liu, Huamin Qu, Jong Kim, Yingcai Wu<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Xi Cheng, Microsoft Tower 2-4F<\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Machine Learning<\/td>\n<td style=\"padding: 
8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leader: Tao Qin<\/p>\n<p>Speakers: Hongzhi Wang, Seong-Whan Lee, Sinno Jialin Pan, Lijun Zhang, Jaegul Choo, Mingkui Tan, Liwei Wang<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Ri Tan, Microsoft Tower 2-4F<\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Speech<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leader: Frank Soong<\/p>\n<p>Speakers: Jun Du, Hong-Goo Kang<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Guo Zi Jian, Microsoft Tower 2-4F<\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">17:30-18:00<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Transition Break<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<\/tr>\n<tr class=\"tr\">\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">18:15 \u2013 20:30<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Banquet<\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Ballroom located @ 3F, Tylfull Hotel<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Abstracts\"} --><!-- wp:freeform --><h2>Workshops<\/h2>\n<p>\t<div 
data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-214\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-214\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-213\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tAI Platform Acceleration with Programmable 
Hardware\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-213\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-214\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Peng Cheng, Microsoft Research<\/p>\n<p>Programmable hardware has been used to build high-throughput, low-latency, real-time core AI engines such as BrainWave. Rather than the AI engine itself, we focus on solving AI-platform bottlenecks, such as storage and networking I\/O, model distribution, synchronization, and data pre-processing in machine learning tasks, with acceleration from programmable hardware. Our proposed system enables direct hardware-assisted device-to-device interconnection with inline processing. We chose FPGA for our first prototype of a general platform for AI acceleration, since FPGAs have been widely deployed in Azure to deliver high performance at much lower economic cost. Our system can accelerate AI in many aspects. It already enables GPUs to fetch training data directly from storage into GPU memory, bypassing costly CPU involvement. As an intelligent hub, it can also perform inline data pre-processing efficiently.
More acceleration scenarios are under development, including in-network inference acceleration and a hardware parameter server for distributed machine learning.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-216\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-216\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-215\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tAudio captioning and knowledge-grounded conversation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-215\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-216\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Gunhee Kim, Seoul National University<\/p>\n<p>In this talk, I will introduce two recent NLP works from the Vision and Learning Lab at Seoul National University. First, we present our work on audio captioning: generating natural language descriptions for any kind of audio in the wild, a problem surprisingly unexplored in previous research. We not only contribute a large-scale dataset of about 46K pairs of audio clips and human-written descriptions collected via crowdsourcing, but also propose two novel components that improve the audio captioning performance of attention-based neural models. Second, I discuss our work on knowledge-grounded dialogue, in which we address the problem of better modeling knowledge selection in multi-turn knowledge-grounded dialogue.
We propose a sequential latent variable model as the first approach to this problem. Our experimental results show that the proposed model improves knowledge selection accuracy and, in turn, the performance of utterance generation. <span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-218\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-218\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-217\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBuilding Large-Scale Decentralized Intelligent Software Systems\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-217\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-218\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<\/p>\n<p><strong>Speaker:<\/strong> Xuanzhe Liu, Peking University<\/p>\n<p>We are in a fast-growing flood of &#8220;data&#8221;, and we benefit significantly from the &#8220;intelligence&#8221; derived from it. Such intelligence heavily relies on the centralized paradigm, i.e., cloud-based systems and services. However, we are also at the dawn of an emerging &#8220;decentralized&#8221; fashion that makes intelligence more pervasive and even &#8220;handy&#8221; on smartphones, wearables, and IoT devices, along with collaborations among them and the cloud.
This talk discusses technical challenges and opportunities in building decentralized intelligence, mostly from a software system perspective, covering programming abstraction, performance, privacy, energy, and interoperability. We also share our recent efforts in building such software systems, along with our industrial experiences. <span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-220\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-220\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-219\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tColoring with Limited Data: Few-Shot Colorization via Memory-Augmented Networks\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-219\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-220\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<\/p>\n<p><strong>Speaker:<\/strong> Jaegul Choo, Korea University<\/p>\n<p>Despite recent advancements, deep learning-based automatic colorization models are still limited when it comes to few-shot learning, since existing models require a significant amount of training data. To tackle this issue, we present a novel memory-augmented colorization model that can produce high-quality colorization with limited data. In particular, our model can capture rare instances and successfully colorize them. We also propose a novel threshold triplet loss that enables unsupervised training of memory networks without the need for class labels.
Experiments show that our model has superior quality in both few-shot and one-shot colorization tasks.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-222\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-222\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-221\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFastSpeech: Fast, Robust and Controllable Text to Speech\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-221\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-222\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker:<\/strong> Xu Tan, Microsoft Research<\/p>\n<p>Neural network based end-to-end text to speech (TTS) has significantly improved the quality of synthesized speech. However, these end-to-end models suffer from slow inference speed, and the synthesized speech is usually not robust (i.e., some words are skipped or repeated) and lacks controllability (of voice speed or prosody). In this work, we propose a novel feed-forward network based on Transformer to generate mel-spectrograms in parallel for TTS. Experiments show that our parallel model matches autoregressive models in terms of speech quality, nearly eliminates the problem of word skipping and repeating in particularly hard cases, and can adjust voice speed smoothly.
Most importantly, compared with autoregressive Transformer TTS, our model speeds up mel-spectrogram generation by 270x and end-to-end speech synthesis by 38x.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-224\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-224\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-223\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tImproving the Performance of Video Analytics Using WiFi Signals\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-223\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-224\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker:<\/strong> Rajesh Krishna Balan, Singapore Management University<\/p>\n<p>Automatic analysis of the behaviour of large groups of people is a key requirement for a large class of important applications such as crowd management, traffic control, and surveillance. For example, attributes such as the number of people, how they are distributed, which groups they belong to, and what trajectories they are taking can be used to optimize the layout of a mall to increase overall revenue. A common way to obtain these attributes is to use video camera feeds coupled with advanced video analytics solutions. However, relying solely on video feeds is challenging in high people-density areas, such as a typical mall in Asia, as the high people density significantly reduces the effectiveness of video analytics due to factors such as occlusion.
In this work, we propose to combine video feeds with WiFi data to achieve better classification of the number of people in an area and their trajectories. In particular, we believe that our approach combines the strengths of the two different sensors, WiFi and video, while reducing the weaknesses of each. This work started fairly recently, and we will present our thoughts and results to date.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-226\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-226\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-225\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tLearning Beyond 2D Images\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-225\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-226\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker:<\/strong> Winston Hsu, National Taiwan University<\/p>\n<p>We have observed super-human capabilities from current (2D) convolutional networks on images, for both discriminative and generative models. In this talk, we will show our recent attempts at visual cognitive computing beyond 2D images. We will first demonstrate the huge opportunities of augmenting the learning with temporal cues, 3D (point cloud) data, raw data, audio, etc., in emerging domains such as entertainment, security, healthcare, and manufacturing.
In an explainable manner, we will justify how to design neural networks that leverage these novel (and diverse) modalities, and demystify the pros and cons of these signals. We will showcase a few tangible applications, including video QA, robotic object referring, situation understanding, and autonomous driving. We will also review the lessons learned in designing advanced neural networks that accommodate multimodal signals in an end-to-end manner. <span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-228\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-228\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-227\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tLightGBM: A highly efficient gradient boosting machine\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-227\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-228\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<\/p>\n<p><strong>Speaker<\/strong>: Guolin Ke, Microsoft Research<\/p>\n<p>Gradient Boosting Decision Tree (GBDT) is a popular machine learning algorithm that is widely used in real-world applications. We open-sourced LightGBM, which contains many critical optimizations for efficient GBDT training and has become one of the most popular GBDT tools.
During this talk, I will introduce the key technologies behind LightGBM.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-230\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-230\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-229\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMobiDL: Unleash the Mobile CPU Computing Power for Deep Learning Inference\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-229\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-230\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker:<\/strong> Ting Cao, Microsoft Research<\/p>\n<p>Deep learning (DL) models are increasingly deployed into real-world applications on mobile devices. However, current mobile DL frameworks neglect the CPU asymmetry, and the CPUs are seriously underutilized. We propose MobiDL for mobile DL inference, targeting improved CPU utilization and energy efficiency through novel designs for hardware asymmetry and appropriate frequency setting. It integrates four main techniques: 1) cost-model directed matrix block partition; 2) prearranged memory layout for model parameters; 3) asymmetry-aware task scheduling; and 4) data-reuse based CPU frequency setting. During the one-time initialization, the proper block partition, parameter layout, and efficient frequency for DL models can be configured by MobiDL. During inference, MobiDL scheduling balances tasks to fully utilize all the CPU cores. 
Evaluation shows that for CNN models, MobiDL achieves 85% performance and 72% energy-efficiency improvements on average compared to default TensorFlow. For RNN models, it achieves up to 17.51x performance and 8.26x energy-efficiency improvements. <span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-232\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-232\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-231\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMulti-agent dual learning\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-231\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-232\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<\/p>\n<p><strong>Speaker<\/strong>: Yingce Xia, Microsoft Research<\/p>\n<p>Dual learning is our recently proposed framework in which a primal task (e.g., Chinese-to-English translation) and a dual task (e.g., English-to-Chinese translation) are jointly optimized through a feedback signal. We extend standard dual learning to multi-agent dual learning, where multiple models for the primal task and multiple models for the dual task are evolved. In this case, the feedback signal is enhanced and we obtain better performance. Experimental results in low-resource settings show that our method works well.
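The feedback signal above can be made concrete with a deliberately tiny toy: a hypothetical primal task (a Caesar-shift "encoder") paired with several dual agents that try to reconstruct the input, where the averaged reconstruction accuracy plays the role of the multi-agent feedback. This is a conceptual sketch only; the paper's agents are neural translation models, not string shifts.

```python
# Toy multi-agent dual-learning feedback (hypothetical primal/dual models).
def primal(text, shift=3):
    """Primal task: shift each lowercase letter forward by `shift`."""
    return "".join(chr((ord(c) - 97 + shift) % 26 + 97) for c in text)

def make_dual(shift):
    """Dual task: shift letters back; a wrong shift models an imperfect agent."""
    return lambda text: "".join(chr((ord(c) - 97 - shift) % 26 + 97) for c in text)

# Multiple dual agents; the third is slightly wrong (shift 4 instead of 3).
dual_agents = [make_dual(3), make_dual(3), make_dual(4)]

def reconstruction_reward(x):
    """Average per-agent reconstruction accuracy: the dual-learning feedback."""
    y = primal(x)
    scores = []
    for g in dual_agents:
        x_hat = g(y)
        scores.append(sum(a == b for a, b in zip(x, x_hat)) / len(x))
    return sum(scores) / len(scores)

print(reconstruction_reward("duallearning"))  # averaged feedback in [0, 1]
```

Averaging over several dual agents smooths out individual agents' errors, which is the intuition behind the enhanced feedback signal in the multi-agent setting.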
In the WMT&#8217;19 machine translation competition, we won top places on four tracks using multi-agent dual learning.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-234\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-234\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-233\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMulti-view Deep Learning for Visual Content Understanding\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-233\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-234\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker:<\/strong> Jiwen Lu, Tsinghua University<\/p>\n<p>In this talk, I will give an overview of the trend of multi-view deep learning techniques and discuss how they are used to improve the performance of various visual content understanding tasks. Specifically, I will present three multi-view deep learning approaches: multi-view deep metric learning, multi-modal deep representation learning, and multi-agent deep reinforcement learning, and show how these methods are used for visual content understanding tasks. Lastly, I will discuss some open problems in multi-view deep learning and how more advanced multi-view deep learning methods for computer vision can be developed in the future.
<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-236\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-236\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-235\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tNNI: An open source toolkit for neural architecture search and hyper-parameter tuning\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-235\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-236\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<\/p>\n<p><strong>Speaker<\/strong>: Quanlu Zhang, Microsoft Research<\/p>\n<p>Recent years have witnessed the great success of deep learning in a broad range of applications. Model tuning has become a key step in finding good models. To be effective in practice, a system is needed to facilitate this tuning procedure in terms of both programming effort and search efficiency. We therefore open-sourced NNI (Neural Network Intelligence), a toolkit for neural architecture search and hyper-parameter tuning, which provides an easy-to-use interface and rich built-in AutoML algorithms. Moreover, it is highly extensible to support new tuning algorithms and requirements.
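NNI's workflow pairs a declarative search space with a lightly instrumented training script. As a hedged sketch (the hyperparameter names below are hypothetical, not taken from the talk), a search-space file in NNI's JSON format might look like:

```json
{
  "learning_rate": { "_type": "loguniform", "_value": [0.00001, 0.1] },
  "num_layers":    { "_type": "choice",     "_value": [2, 4, 8] },
  "dropout":       { "_type": "uniform",    "_value": [0.1, 0.5] }
}
```

A trial script then calls `nni.get_next_parameter()` to receive one sampled configuration and `nni.report_final_result()` with its metric, while the NNI manager runs the chosen tuner and dispatches many such trials in parallel.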
With high scalability, many trials can run in parallel on various training platforms.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-238\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-238\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-237\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tPre-training for Video-Language Cross-Modal Tasks\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-237\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-238\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker:<\/strong> Chong Luo, Microsoft Research<\/p>\n<p>Video-language cross-modal tasks have received increasing interest in recent years, from video retrieval and video captioning to spatio-temporal localization in video by language query. In this talk, we will present the research and application of some of these tasks. We will show how pre-trained single-modality models have made these tasks tractable and discuss the paradigm shift in deep neural network design brought about by pre-trained models. In addition, we propose a universal cross-modality pre-training framework which may benefit a wide range of video-language tasks. We hope that our work will provide inspiration to other researchers in solving these interesting but challenging cross-modal tasks.
<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-240\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-240\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-239\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tResource Scheduling for Distributed Deep Training\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-239\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-240\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<\/p>\n<p><strong>Speaker:<\/strong> Chuan Wu, University of Hong Kong<\/p>\n<p>More and more companies\/institutions are running AI clouds\/machine learning clusters with various ML model training workloads to support AI-driven services. Efficient resource scheduling is the key to maximizing the performance of ML workloads, as well as the hardware efficiency of these very expensive clusters. There is large room for improving today\u2019s ML cluster schedulers, e.g., by including interference awareness in task placement and by scheduling not only computation but also communication. In this talk, I will share our recent work on designing deep learning job schedulers for ML clusters, aiming at expediting training and minimizing training completion time. Our schedulers decide communication scheduling, the number of workers\/PSs, and the placement of workers\/PSs for jobs in the cluster, through both heuristics with theoretical support and reinforcement learning approaches.
<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-242\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-242\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-241\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tTransferable Recursive Neural Networks for Fine-grained Sentiment Analysis\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-241\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-242\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<\/p>\n<p><strong>Speaker:<\/strong> Sinno Jialin Pan, Nanyang Technological University<\/p>\n<p>In fine-grained sentiment analysis, extracting aspect terms and opinion terms from user-generated texts is the most fundamental task for generating structured opinion summarization. Existing studies have shown that the syntactic relations between aspect and opinion words play an important role in aspect and opinion term extraction. However, most existing works either relied on pre-defined rules or separated relation mining from feature learning. Moreover, these works focused only on single-domain extraction, which fails to adapt well to other domains of interest where only unlabeled data is available. In real-world scenarios, annotated resources are extremely scarce for many domains or languages. In this talk, I am going to introduce our recent series of works on transfer learning for cross-domain and cross-language fine-grained sentiment analysis based on recursive neural networks.
<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-244\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-244\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-243\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tVL-BERT: Pre-training of Generic Visual-Linguistic Representations\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-243\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-244\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<\/p>\n<p><strong>Speaker:<\/strong> Yue Cao, Microsoft Research<\/p>\n<p>We introduce a new pre-trainable generic representation for visual-linguistic tasks, called Visual-Linguistic BERT (VL-BERT for short). VL-BERT adopts the simple yet powerful Transformer model as the backbone and extends it to take both visual and linguistic embedded features as input. Each element of the input is either a word from the input sentence or a region-of-interest (RoI) from the input image. VL-BERT is designed to fit most visual-linguistic downstream tasks. To better exploit the generic representation, we pre-train VL-BERT on the massive-scale Conceptual Captions dataset, together with a text-only corpus. Extensive empirical analysis demonstrates that the pre-training procedure can better align visual-linguistic clues and benefit downstream tasks such as visual commonsense reasoning, visual question answering, and referring expression comprehension.
<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-246\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-246\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-245\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tWhen Language Meets Vision: Multi-modal NLP with Visual Contents\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-245\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-246\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<\/p>\n<p><strong>Speaker:<\/strong> Nan Duan, Microsoft Research<\/p>\n<p>In this talk, I will introduce our latest work on multi-modal NLP, including (i) multi-modal pre-training, which aims to learn the joint representations between language and visual contents; (ii) multi-modal reasoning, which aims to handle complex queries by manipulating knowledge extracted from language and visual contents; (iii) video-based QA\/summarization, which aims to make video contents readable and searchable. 
<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<h2>Breakout Sessions<\/h2>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-248\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-248\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-247\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tAdaptive Regret for Online Learning\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-247\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-248\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Lijun Zhang, Nanjing University<\/p>\n<p>To deal with changing environments, a new performance measure, adaptive regret, defined as the maximum static regret over any interval, has been proposed in online learning. Under the setting of online convex optimization, several algorithms have been developed to minimize the adaptive regret. However, existing algorithms are problem-independent and lack universality. In this talk, I will briefly introduce our two contributions in this direction. The first is to establish problem-dependent bounds on adaptive regret by exploiting the smoothness condition.
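For reference, the adaptive regret mentioned above is commonly formalized (notation ours, not taken from the talk) as the static regret maximized over all contiguous intervals of the horizon:

```latex
\mathrm{AdaptiveRegret}(T) \;=\; \max_{1 \le r \le s \le T}
  \left[ \sum_{t=r}^{s} f_t(\mathbf{x}_t)
       \;-\; \min_{\mathbf{x} \in \mathcal{X}} \sum_{t=r}^{s} f_t(\mathbf{x}) \right]
```

where \(f_t\) is the convex loss revealed at round \(t\), \(\mathbf{x}_t\) is the learner's decision, and \(\mathcal{X}\) is the feasible set. A small value means the learner competes with the best fixed decision on every interval, not just over the whole horizon, which is what makes the measure suitable for changing environments.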
The second is to design a universal algorithm that can handle multiple types of functions simultaneously.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-250\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-250\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-249\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tAdvances and Challenges on Human-Computer Conversational Systems\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-249\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-250\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Rui Yan, Peking University<\/p>\n<p>Nowadays, automatic human-computer conversational systems have attracted great attention from both industry and academia. Intelligent products such as XiaoIce (by Microsoft) have been released, and numerous Artificial Intelligence companies have been established. The technology behind conversational systems is accumulating and is gradually being opened to the public. Thanks to the efforts of researchers, conversational systems are no longer science fiction: they have become real. It is interesting to review the recent advances in human-computer conversational systems, especially the significant changes brought by deep learning techniques.
It would also be exciting to anticipate future developments and challenges.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-252\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-252\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-251\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tAI and Data: a closed Loop\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-251\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-252\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Hongzhi Wang, Harbin Institute of Technology<\/p>\n<p>Data is the foundation of modern Artificial Intelligence (AI). Efficient and effective AI requires the support of data acquisition, governance, management, analytics, and mining, which brings new challenges. From another perspective, advances in AI provide new opportunities to automate data processing. Thus, AI and data form a closed loop and promote each other.
In this talk, the speaker will demonstrate the mutual promotion of AI and data with some examples and discuss further opportunities to advance both areas.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-254\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-254\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-253\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tArtificial Intelligence for Fashion\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-253\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-254\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Wen-Huang Cheng, National Chiao Tung University<\/p>\n<p>The fashion industry is one of the biggest in the world, representing over 2 percent of global GDP (2018). Artificial intelligence (AI) has been a predominant theme in the fashion industry and is impacting its every part, at scales from personal to industrial and beyond. In recent years, my research group and I have devoted ourselves to advanced AI research that helps revolutionize the fashion industry, enabling innovative applications and services with improved user experience.
In this talk, I would like to give an overview of the major outcomes of our research and discuss what research subjects we can further work on together with Microsoft researchers to make a new impact on the fashion domain.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-256\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-256\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-255\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBERT is not all you need\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-255\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-256\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Seung-won Hwang, Yonsei University<\/p>\n<p>This talk is inspired by a question about my talk at the MSRA faculty summit last year, where I presented NLP models in which injecting (diverse forms of) knowledge contributes to meaningfully enhancing accuracy and robustness. Chin-yew then asked: \u201cDo you think BERT implicitly contains all these information already?\u201d This talk is an extended investigation to support the short answer I gave at the talk.
The title is a spoiler.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-258\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-258\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-257\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBig Data, AI and HI, What is Next?\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-257\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-258\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Lei Chen, Hong Kong University of Science and Technology<\/p>\n<p>Recently, AI has become quite popular and attractive, not only to academia but also to industry. The success stories of AI in various applications have raised significant public interest in AI. Meanwhile, human intelligence is turning out to be more sophisticated, and Big Data technology is everywhere, improving our quality of life. The question we all want to ask is \u201cwhat is next?&#8221; In this talk, I will discuss DHA, a new computing paradigm which combines big Data, Human intelligence, and AI (DHA). Specifically, I will first briefly explain the motivation for DHA.
Then I will present the challenges, and after that I will highlight some possible solutions for building such a new paradigm.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-260\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-260\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-259\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCombinatorial Inference against Label Noise\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-259\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-260\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Bohyung Han, Seoul National University<\/p>\n<p>Label noise is one of the critical factors that significantly degrade the generalization performance of deep neural networks. To handle the label noise issue in a principled way, we propose a unique classification framework that constructs multiple models in heterogeneous coarse-grained meta-class spaces and makes joint inference with the trained models for the final predictions in the original (base) class space. Our approach reduces the noise level by simply constructing meta-classes and improves accuracy via combinatorial inference over multiple constituent classifiers. Since the proposed framework has distinct and complementary properties for the given problem, we can even incorporate additional off-the-shelf learning algorithms to improve accuracy further.
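The joint-inference step can be illustrated with a deliberately small toy: two hypothetical partitions of four base classes into two meta-classes each, and each model's meta-class posterior for one input. Combining the models amounts to summing, for each base class, the log-probability of the meta-class it belongs to under every model. The numbers below are made up for illustration; the actual framework uses deep classifiers, not lookup tables.

```python
# Conceptual sketch of combinatorial inference over meta-class classifiers.
import numpy as np

partitions = [
    {0: 0, 1: 0, 2: 1, 3: 1},  # model A: meta-classes {0,1} and {2,3}
    {0: 0, 1: 1, 2: 0, 3: 1},  # model B: meta-classes {0,2} and {1,3}
]
# Each model's predicted meta-class probabilities for one input.
meta_probs = [
    np.array([0.9, 0.1]),  # model A favors meta-class 0 -> base class in {0,1}
    np.array([0.2, 0.8]),  # model B favors meta-class 1 -> base class in {1,3}
]

def joint_inference(partitions, meta_probs, n_base=4):
    """Sum log-probs of each base class's meta-class across all models,
    then predict the base class with the highest combined score."""
    scores = np.zeros(n_base)
    for mapping, probs in zip(partitions, meta_probs):
        for base in range(n_base):
            scores[base] += np.log(probs[mapping[base]])
    return int(np.argmax(scores))

print(joint_inference(partitions, meta_probs))
```

Here the two coarse predictions intersect on a single base class, showing how heterogeneous meta-class partitions jointly recover fine-grained decisions.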
We also introduce techniques to organize multiple heterogeneous meta-class sets using k-means clustering and identify a desirable subset that leads to compact models. Our extensive experiments demonstrate outstanding performance in terms of accuracy and efficiency compared to state-of-the-art methods, under various synthetic noise configurations and on a real-world noisy dataset.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-262\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-262\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-261\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCommunication-Efficient Geo-Distributed Multi-Task Learning\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-261\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-262\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Sinno Jialin Pan, Nanyang Technological University<\/p>\n<p>Multi-task learning aims to learn multiple tasks jointly by exploiting their relatedness to improve the generalization performance of each task. Traditionally, to perform multi-task learning, one needs to centralize data from all the tasks on a single machine. However, in many real-world applications, the data of different tasks is owned by different organizations and geo-distributed over different local machines.
Due to the heavy communication caused by transmitting the data, and due to data privacy and security concerns, it is impossible to send the data of different tasks to a master machine to perform multi-task learning. In this talk, I will present our recent work on distributed multi-task learning, which jointly learns multiple tasks in the parameter-server paradigm without sharing any training data, and which has a theoretical guarantee of convergence to the solution obtained by the corresponding centralized multi-task learning algorithm.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-264\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-264\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-263\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCompact Snapshot Hyperspectral Imaging with Diffracted Rotation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-263\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-264\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Min H. Kim, KAIST<\/p>\n<p>Traditional snapshot hyperspectral imaging systems include various optical elements: a dispersive optical element (prism), a coded aperture, several relay lenses, and an imaging lens, resulting in an impractically large form factor. We seek an alternative, minimal form factor for snapshot spectral imaging based on recent advances in diffractive optical technology.
We thereupon present a compact, diffraction-based snapshot hyperspectral imaging method, using only a novel diffractive optical element (DOE) in front of a conventional, bare image sensor. Our diffractive imaging method replaces the common optical elements in hyperspectral imaging with a single optical element. To this end, we tackle two main challenges: First, traditional diffractive lenses are not suitable for color imaging under incoherent illumination due to severe chromatic aberration, because the size of the point spread function (PSF) changes depending on the wavelength. Leveraging this wavelength-dependent property instead for hyperspectral imaging, we introduce a novel DOE design that generates an anisotropic shape of the spectrally-varying PSF. The PSF size remains virtually unchanged, but the PSF shape rotates as the wavelength of light changes. Second, since there is no dispersive element and no coded aperture mask, the ill-posedness of spectral reconstruction increases significantly. Thus, we propose an end-to-end network solution based on the unrolled architecture of an optimization procedure with a spatial-spectral prior, specifically designed for deconvolution-based spectral reconstruction. Finally, we demonstrate hyperspectral imaging with a fabricated DOE attached to a conventional DSLR sensor.
Results show that our method compares well with other state-of-the-art hyperspectral imaging methods in terms of spectral accuracy and spatial resolution, while our compact, diffraction-based spectral imaging method uses only a single optical element on a bare image sensor.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-266\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-266\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-265\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tContextDM: Context-aware Permanent Data Management Framework for Android\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-265\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-266\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Jong Kim, Pohang University of Science and Technology (POSTECH)<\/p>\n<p>The data management practices of third-party apps have failed in terms of manageability and security because modern systems cannot provide fine-grained data management and security, due to a lack of understanding of stored data. As a result, users suffer from storage shortages, data stealing, and data tampering.<\/p>\n<p>To tackle the problem, we propose a novel and general data management framework, ContextDM, that sheds light on storage to help system services and storage aid-apps gain a better understanding of permanent data.
Specifically, the framework augments permanent data with metadata that includes contextual semantic information about the importance and sensitivity of the data. Further, we show the effectiveness of our framework by demonstrating ContextDM-based aid-tools that automatically identify important and useless data, as well as sensitive data that is being disclosed.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-268\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-268\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-267\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tControlling Deep Natural Language Generation Models\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-267\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-268\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Shou-De Lin, National Taiwan University<\/p>\n<p>Deep Neural Network-based solutions have recently shown promising results in natural language generation. From Autoencoder to the Seq2Seq models to the GAN-based solutions, deep learning models can already generate text that passes the Turing Test, making the outputs indistinguishable from human-generated ones. However, researchers have pointed out that the content generated from deep neural networks can be fairly unpredictable, meaning that it is non-trivial for humans to control the generated outputs.
This talk will discuss how to control the outputs of an NLG model and demonstrate some of our recent works along this line.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-270\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-270\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-269\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCross-lingual Visual Grounding and Multimodal Machine Translation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-269\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-270\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Chenhui Chu, Osaka University<\/p>\n<p>In this talk, we will introduce two of our recent works on multilingual and multimodal processing: cross-lingual visual grounding and multimodal machine translation. Visual grounding is a vision and language understanding task that aims to locate a region in an image according to a specific query phrase. We will present our work on cross-lingual visual grounding to expand the task to different languages.
In addition, we will introduce our work on multimodal machine translation that incorporates semantic image regions with both visual and textual attention.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-272\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-272\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-271\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCryptography-based security solutions for internet of things\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-271\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-272\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Atsuko Miyaji, Osaka University<\/p>\n<p>The consequences of security failures in the era of the internet of things (IoT) can be catastrophic, as has been demonstrated by a rapidly growing list of IoT security incidents. As a result, people have begun to recognize the importance and value of bringing the highest level of security to IoT. Conventional wisdom has it that, though technologically superior, public-key cryptography (PKC) is too expensive to deploy in IoT devices and networks.
In this talk, we present our cost-effective improvement of elliptic curve cryptography (ECC) in terms of memory and computational resources.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-274\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-274\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-273\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDeep Efficient Image (Video) Restoration\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-273\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-274\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Huanjing Yue, Tianjin University<\/p>\n<p>In this talk, I will introduce our team\u2019s work on image (video) denoising and demoir\u00e9ing.<\/p>\n<p>Realistic noise, which is introduced when capturing images under high ISO modes or low light conditions, is more complex than Gaussian noise, and is therefore difficult to remove. By exploring the spatial, channel, and temporal correlations via deep CNNs, we can efficiently remove noise for images and videos. We construct two datasets to facilitate research on realistic noise removal for images and videos.<\/p>\n<p>Moir\u00e9 patterns, caused by aliasing between the grid of the display device and the camera sensor array, greatly degrade the visual quality of recaptured screen images.
Considering that the recaptured screen image and the original screen content usually have a large difference in brightness, we construct a moir\u00e9 removal and brightness improvement (MRBI) database with moir\u00e9-free and moir\u00e9 image pairs to facilitate supervised learning and quantitative evaluation. Correspondingly, we propose a CNN-based moir\u00e9 removal and brightness improvement method. Our work provides a benchmark dataset and a good baseline method for the demoir\u00e9ing task.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-276\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-276\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-275\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDeep Reinforcement Learning for the Transfer from Simulation to the Real World with Uncertainties for AI Curling Robot System\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-275\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-276\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Seong-Whan Lee, Korea University<\/p>\n<p>Recently, deep reinforcement learning (DRL) has even enabled real-world applications such as robotics. Here we teach a robot to succeed in curling (an Olympic discipline), which is a highly complex real-world application where a robot needs to carefully learn to play the game on the slippery ice sheet in order to compete well against human opponents.
This scenario encompasses fundamental challenges: uncertainty, non-stationarity, infinite state spaces, and, most importantly, scarce data. One fundamental objective of this study is thus to better understand and model the transfer from simulation to real-world scenarios with uncertainty. We demonstrate our proposed framework and show videos, experiments, and statistics of Curly, our AI curling robot, being tested on a real curling ice sheet. Curly performed well both in classical game situations and when interacting with human opponents.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-278\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-278\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-277\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDevelopment of a 3D endoscopic system with abilities of multi-frame, wide-area scanning \t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-277\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-278\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Ryo Furukawa, Hiroshima City University<\/p>\n<p>For effective in situ endoscopic diagnosis and treatment, or robotic surgery, 3D endoscopic systems have been attracting many researchers. We have been developing a 3D endoscopic system based on an active stereo technique, which projects a special pattern wherein each feature is coded. We believe it is a promising approach because of its simplicity and high precision.
However, previous works on this approach have problems. First, the quality of 3D reconstruction depended on the stability of feature extraction from the images captured by the endoscope camera. Second, due to the limited pattern projection area, the reconstructed region was relatively small. In this talk, we describe our work on a learning-based technique using CNNs to solve the first problem, and an extended bundle adjustment technique, which integrates multiple shapes into a single consistent shape, to address the second. The effectiveness of the proposed techniques compared to previous techniques was evaluated experimentally.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-280\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-280\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-279\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDifferential Privacy for Spatial and Temporal Data\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-279\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-280\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Masatoshi Yoshikawa, Kyoto University<\/p>\n<p>Differential Privacy (DP) has received increased attention as a rigorous privacy framework. In this talk, we introduce our recent studies on extending DP to spatial and temporal data.
The topics include i) a DP mechanism under temporal correlations in the context of continuous data release; and ii) location privacy for location-based services over road networks.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-282\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-282\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-281\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDissecting and Accelerating Neural Network via Graph Instrumentation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-281\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-282\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Jingwen Leng, Shanghai Jiao Tong University<\/p>\n<p>Despite the enormous success of deep neural networks, there is still no solid understanding of their working mechanism. As such, one fundamental question arises &#8211; how should architects and system developers perform optimizations centered on DNNs? Treating them as a black box leads to efficiency and security issues: 1) DNN models require a fixed computation budget regardless of input; 2) a human-imperceptible perturbation to the input can cause a DNN misclassification. This talk will present our efforts toward addressing those challenges. We recognize an increasing need for monitoring and modifying a DNN\u2019s runtime behavior, as evidenced by our recent work on effective path, and by other researchers\u2019 work on network pruning and quantization.
As such, we present our ongoing effort to build a graph instrumentation framework that provides programmers with a convenient way to achieve those abilities.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-284\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-284\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-283\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDynamic GPU Memory Management for DNNs\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-283\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-284\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Yu Zhang, University of Science &amp; Technology of China<\/p>\n<p>While deep learning researchers are seeking deeper and wider nonlinear networks, there is an increasing challenge for deploying deep neural network applications on low-end GPU devices for mobile and edge computing due to the limited size of GPU DRAM. Existing deep learning frameworks lack effective GPU memory management for different reasons: frameworks with dynamic computation graphs cannot obtain the global computation graph (e.g., PyTorch), while frameworks with static computation graphs can only impose limited dynamic GPU memory management strategies (e.g., TensorFlow).
In this talk, I will analyze state-of-the-art GPU memory management in existing DL frameworks, present the GPU memory management challenges of running deep neural networks on low-end, resource-constrained devices, and finally share our thoughts.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-286\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-286\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-285\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tEmotional Speech Synthesis with Granularized Control\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-285\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-286\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Hong-Goo Kang, Yonsei University<\/p>\n<p>In end-to-end deep learning-based emotional text-to-speech (TTS) systems, such as those using Tacotron networks, it is very important to provide additional embedding vectors to flexibly control the distinct characteristics of the target emotion.<\/p>\n<p>This talk introduces a couple of methods to effectively estimate representative embedding vectors. Using the mean of the embedding vectors is a simple approach, but the expressiveness of the synthesized speech is not satisfactory. To enhance the expressiveness, we need to consider the distribution of the emotion embedding vectors. An inter-to-intra (I2I) distance ratio-based algorithm recently proposed by our research team shows much higher performance than the conventional mean-based one. The I2I algorithm is also useful for gradually changing the intensity of expressiveness. Listening test results verify that the emotional expressiveness and controllability of the I2I algorithm are superior to those of the mean-based one. 
<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-288\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-288\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-287\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFairness in Recommender Systems\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-287\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-288\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<\/p>\n<p><strong>Speaker<\/strong>: Min Zhang, Tsinghua University<\/p>\n<p>Recommender systems have played significant roles in our daily life, and are expected to be available to any user, regardless of their gender, age or other demographic factors. Recently, there has been a growing concern about the bias that can creep into personalization algorithms and produce unfairness issues. In this talk, I will introduce the trending topics and our recent research progresses at THUIR (Tsinghua University Information Retrieval) group on fairness issue in recommender systems, including the causes of unfairness and the approaches to handle it. 
This series of work provides new ideas for building fairness-aware recommender systems, and has been published at top-tier international conferences such as SIGIR 2018, WWW 2019, and SIGIR 2019.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-290\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-290\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-289\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFLUID: Flexible User Interface Distribution for Ubiquitous Multi-device Interaction\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-289\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-290\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Insik Shin, KAIST<\/p>\n<p>The growing trend of multi-device ownership creates a need and an opportunity to use applications across multiple devices. However, in general, the current app development and usage still remain within the single-device paradigm, falling far short of user expectations. For example, it is currently not possible for a user to dynamically partition an existing live streaming app with chatting capabilities across different devices, such that she watches her favorite broadcast on her smart TV while chatting in real time on her smartphone. In this paper, we present FLUID, a new Android-based multi-device platform that enables innovative ways of using multiple devices.
FLUID aims to i) allow users to migrate or replicate individual user interfaces (UIs) of a single app on multiple devices (high flexibility), ii) require no additional development effort to support unmodified, legacy applications (ease of development), and iii) support a wide range of apps that follow the trend of using custom-made UIs (wide applicability). FLUID meets these goals by carefully analyzing which UI states are necessary to correctly render UI objects, deploying only those states on different devices, supporting cross-device function calls transparently, and synchronizing the UI states of replicated UI objects across multiple devices. Our evaluation with 20 unmodified, real-world Android apps shows that FLUID can transparently support a wide range of apps and is fast enough for interactive use.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-292\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-292\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-291\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tGlobal Texture Mapping for Dynamic Objects\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-291\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-292\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Seungyong Lee, Pohang University of Science and Technology (POSTECH)<\/p>\n<p>In this talk, I will introduce a novel framework to generate a global texture atlas for a deforming geometry.
Our approach is distinguished from prior art in two aspects. First, instead of generating a texture map for each timestamp to color a dynamic scene, our framework reconstructs a global texture atlas that can be consistently mapped to a deforming object. Second, our approach is based on a single RGB-D camera, without the need for a multi-camera setup surrounding the scene. In our framework, the input is a 3D template model with an RGB-D image sequence, and geometric warping fields are found using a state-of-the-art non-rigid registration method to align the template mesh to noisy and incomplete input depth images. With these warping fields, our multi-scale approach for texture coordinate optimization generates a sharp and clear texture atlas that is consistent with multiple color observations over time. Our approach provides a handy configuration to capture a dynamic geometry along with a clean texture atlas, and we demonstrate it with practical scenarios, particularly human performance capture.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-294\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-294\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-293\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tGradient Descent Finds Global Minima of Deep Neural Networks\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-293\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-294\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Liwei
Wang, Peking University<\/p>\n<p>Gradient descent finds a global minimum in training deep neural networks despite the objective function being non-convex. This work proves that gradient descent achieves zero training loss in polynomial time for a deep over-parameterized neural network with residual connections (ResNet). Our analysis relies on the particular structure of the Gram matrix induced by the neural network architecture. This structure allows us to show that the Gram matrix is stable throughout the training process, and this stability implies the global optimality of the gradient descent algorithm. We further extend our analysis to deep residual convolutional neural networks and obtain a similar convergence result.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-296\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-296\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-295\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tGraph-based Action Assessment\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-295\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-296\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Wei-Shi Zheng, Sun Yat-sen University<\/p>\n<p>We present a new model to assess the performance of actions visually from videos by graph-based joint relation modelling.
Previous works mainly focused on the whole scene including the performer&#8217;s body and background, yet they ignored the detailed joint interactions. This is insufficient for fine-grained and accurate action assessment, because the action quality of each joint is dependent on its neighboring joints. Therefore, we propose to learn the detailed joint motion based on the joint relations. We build trainable Joint Relation Graphs, and analyze joint motion on them. We propose two novel modules, namely the Joint Commonality Module and the Joint Difference Module, for joint motion learning. The Joint Commonality Module models the general motion for certain body parts, and the Joint Difference Module models the motion differences within body parts. We evaluate our method on six public Olympic actions for performance assessment. Our method outperforms previous approaches (+0.0912) and the whole-scene model (+0.0623) in terms of Spearman&#8217;s rank correlation. We also demonstrate our model&#8217;s ability to interpret the action assessment process.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-298\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-298\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-297\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tIntelligent Action Analytics with Multi-Modal Reasoning\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-297\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-298\"\n\t\t>\n\t\t\t<div 
class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Jiaying Liu, Peking University<\/p>\n<p>In this talk, we focus on intelligent action analytics in videos with multi-modal reasoning, which is important but remains underexplored. We first present the challenges in this problem, illustrated with our self-collected PKU-MMD dataset: multi-modal complementary feature learning, noise-robust feature learning, and coping with tedious label annotation. To tackle these issues, we propose initial solutions with multi-modal reasoning. A modality compensation network is proposed to explicitly explore the relationships among different modalities and further boost multi-modal feature learning. A noise-invariant network is developed to recognize human actions from noisy skeletons by referring to denoised skeletons. To inspire the community, we conclude with possible future work, such as self-supervised learning and language-guided reasoning.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-300\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-300\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-299\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tKafe: can OS kernel handle packets fast enough\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-299\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-300\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Chuck Yoo, Korea University<\/p>\n<p>It is widely believed that 
commodity operating systems cannot deliver high-speed packet processing, and a number of alternative approaches (including user-space network stacks) have been proposed. This talk revisits the inefficiency of packet processing inside the kernel and explores whether a redesign of kernel network stacks can overcome it. We present a case through a redesign: Kafe \u2013 a kernel-based advanced forwarding engine. Contrary to this belief, Kafe can process packets as fast as user-space network stacks. Kafe neither adds any new API nor depends on proprietary hardware features.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-302\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-302\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-301\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tLearning Multi-label Feature for Fine-Grained Food Recognition\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-301\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-302\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Xueming Qian, Xi&#8217;an Jiaotong University<\/p>\n<p>Fine-grained food recognition is a detailed classification task that provides more specialized and professional attribute information about food. It is foundational work for healthy diet recommendation, cooking instructions, nutrition intake management, and cafeteria self-checkout systems. 
Chinese food often lacks structured appearance information, so ingredient composition is an important consideration. We propose a new method for fine-grained food and ingredient recognition, comprising an Attention Fusion Network (AFN) and Food-Ingredient Joint Learning. The AFN focuses on important regional features via attention and generates the feature descriptor. In Food-Ingredient Joint Learning, we propose a balanced focal loss to address the imbalance in multi-label ingredient distributions. Finally, a series of experiments demonstrates significant improvements over existing methods.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-304\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-304\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-303\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tLearning to Appreciate: Transforming Multimedia Communications via Deep Video Analytics\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-303\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-304\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Yonggang Wen, Nanyang Technological University<\/p>\n<p>Media-rich applications will continue to dominate mobile data traffic with exponential growth, as predicted by the Cisco Video Index. Improved quality of experience (QoE) for video consumers plays an important role in shaping this growth. 
However, most existing approaches to improving video QoE are system-centric and model-based, in that they tend to derive insights from system parameters (e.g., bandwidth, buffer time, etc.) and propose various mathematical models to predict QoE scores (e.g., mean opinion score). In this talk, we will share our latest work in developing a unified and scalable framework to transform multimedia communications via deep video analytics. Specifically, our framework consists of two main components. One is a deep-learning based QoE prediction algorithm, which combines multi-modal data inputs to provide a more accurate assessment of QoE in real time. The other is a model-free QoE optimization paradigm built upon a deep reinforcement learning algorithm. Our preliminary results verify the effectiveness of our proposed framework. We believe that this hybrid approach of multimedia communications and computing will fundamentally transform how we optimize multimedia communication system design and operations.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-306\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-306\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-305\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tLensless Imaging for Biomedical Applications\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-305\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-306\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Seung 
Ah Lee, Yonsei University<\/p>\n<p>Miniaturization of microscopes can be a crucial stepping stone towards realizing compact, cost-effective, and portable platforms for biomedical research and healthcare. This talk reports on implementations of lensless microscopes and lensless cameras for a variety of biological imaging applications, in the form of mass-producible semiconductor devices that transform the fundamental design of optical imaging systems.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-308\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-308\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-307\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tLeveraging Generative Adversarial Networks for Data Augmentation by Disentangling Class-Independent Features\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-307\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-308\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Jaegul Choo, Korea University<\/p>\n<p>Considering their success in generating high-quality, realistic data, generative adversarial networks (GANs) have the potential to be used for data augmentation to improve prediction accuracy in diverse problems where only a limited amount of training data is available. However, GANs themselves require a nontrivial amount of data for their training, so data augmentation via GANs often does not improve accuracy in practice. 
This talk will briefly review existing literature and our ongoing approach based on feature disentanglement. I will conclude the talk with further research issues that I would like to address in the future.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-310\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-310\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-309\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tManipulatable Auditory Perception in Wearable Computing\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-309\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-310\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Hiroki Watanabe, Hokkaido University<\/p>\n<p>Since auditory perception is a passive sense, we often fail to notice important information and acquire unimportant information. We focused on an earphone-type wearable computer (hearable device) that has not only speakers but also microphones. In a hearable computing environment, we always attach microphones and speakers to the ears. Therefore, we can manipulate our auditory perception using a hearable device. We manipulated the frequency of the input sound from the microphones and transmitted the converted sound from the speakers. 
Thus, we could acquire sound that is not audible to our normal auditory perception and eliminate unwanted sound according to the user\u2019s requirements.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-312\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-312\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-311\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tModel Centric DevOps for Network Functions\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-311\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-312\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Wenfei Wu, Tsinghua University<\/p>\n<p>Network Functions (NFs) play important roles in improving performance and enhancing security in modern computer networks. More and more NFs are being developed, integrated, and managed in production networks. However, the connection between development and operation for network functions has not yet drawn attention, which slows down the development and delivery of NFs and complicates NF network management.<\/p>\n<p>We propose that building a common abstraction layer for network functions would benefit both development and operation. 
For NF development, a uniform abstraction layer for describing NF behaviors would make cross-platform development rapid and agile, which accelerates NF delivery for NF vendors; we will introduce our recent NF development framework based on language and compiler technologies. For NF operation, a behavior model would ease network reasoning, which can avoid runtime bugs; more crucially, the behavior model is guaranteed to reflect the actual implementation. We will introduce our NF verification work based on the NF modeling language. Around our model-centric NF development and operation, we also present other NF modeling works, which lay the foundation of the NF modeling language and fill in the semantic gap between legacy NFs and NF models.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-314\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-314\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-313\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tNAT: Neural Architecture Transformer for Accurate and Compact Architectures\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-313\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-314\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Mingkui Tan, South China University of Technology<\/p>\n<p>Architecture design is one of the key factors behind the success of deep neural networks. 
Existing deep architectures are either manually designed or automatically searched by some Neural Architecture Search (NAS) methods. However, even a well-searched architecture may still contain many non-significant or redundant modules or operations (e.g., convolution or pooling), which not only incur substantial memory consumption and computational cost but may also deteriorate the performance. Thus, it is necessary to optimize the operations inside the architecture to improve the performance without introducing extra computational cost. However, such a constrained optimization problem is NP-hard. To address this problem, we cast the optimization problem into a Markov decision process (MDP) and learn a Neural Architecture Transformer (NAT) to replace the redundant operations with more computationally efficient ones (e.g., skip connections or directly removing the connection). Within this MDP, we train NAT with reinforcement learning to obtain the architecture optimization policies w.r.t. different architectures. To verify the effectiveness of the proposed method, we apply NAT to both hand-crafted architectures and NAS-based architectures. 
Extensive experiments on two benchmark datasets, i.e., CIFAR-10 and ImageNet, show that the transformed architecture significantly outperforms both the original architecture and the architectures optimized by the existing methods.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-316\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-316\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-315\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tNovelty-aware exploration in RL and Conditional GANs for diversity\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-315\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-316\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Gunhee Kim, Seoul National University<\/p>\n<p>In this talk, I will introduce two recent works on machine learning from Vision and Learning Lab of Seoul National University. First, we present our work in reinforcement learning. We introduce an information-theoretic exploration strategy named Curiosity-Bottleneck (CB) that distills task-relevant information from observation. In our experiments, we observe that the CB algorithm robustly measures the state novelty in distractive environments where state-of-the-art exploration methods often degenerate. Second, we propose novel training schemes with a new set of losses that can prevent conditional GANs from losing the diversity in their outputs. 
We perform thorough experiments on image-to-image translation, super-resolution, and image inpainting, and show that our methods achieve great diversity in outputs while retaining or even improving the visual fidelity of generated samples.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-318\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-318\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-317\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tNumerical\/quantitative system for common sense natural language processing\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-317\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-318\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Hiroaki Yamane, RIKEN AIP &amp; The University of Tokyo<\/p>\n<p>Numerical common sense (e.g., \u201ca person with a height of 2m is very tall\u201d) is essential when deploying artificial intelligence (AI) systems in society. We construct methods for converting contextual language to numerical variables for quantitative\/numerical common sense in natural language processing (NLP).<\/p>\n<p>We live in a world where we need common sense. We use some common sense when observing objects: A 165 cm human cannot be bigger than a 1 km bridge. The weight of the aforementioned human ranges from 40 kg to 90 kg. If one\u2019s weight is less than 50 kg, they are more likely to be very thin. This can also be applied to money. 
If the latest Surface Pro is $500, it is quite cheap. There is a need to account for common sense in future AI systems.<\/p>\n<p>To address this problem, we first use a crowdsourcing service to obtain sufficient data for subjective agreement on numerical common sense. Second, to examine whether common sense is captured by current word embeddings, we examined the performance of a regressor trained on the obtained data.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-320\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-320\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-319\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tParaphrasing and Simplification with Lean Vocabulary\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-319\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-320\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Tadashi Nomoto, The SOKENDAI Graduate School of Advanced Studies<\/p>\n<p>In this work, we examine whether it is possible to achieve state-of-the-art performance in paraphrase generation with a reduced vocabulary. Our approach consists of building a convolution-to-sequence model (Conv2Seq) partially guided by reinforcement learning, and training it on the sub-word representation of the input. 
The experiment on the Quora dataset, which contains over 140,000 pairs of sentences and corresponding paraphrases, found that with fewer than 1,000 token types, we were able to achieve performance exceeding that of the current state of the art. We also report that the same architecture works equally well for text simplification, with little change.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-322\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-322\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-321\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tRay-SSL: Ray Tracing based Sound Source Localization considering Reflection and Diffraction\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-321\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-322\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Sung-eui Yoon, KAIST<\/p>\n<p>In this talk, we discuss a novel, ray tracing based technique for 3D sound source localization for indoor and outdoor environments. Unlike prior approaches, which are mainly based on continuous sound signals from a stationary source, our formulation is designed to localize the position instantaneously from signals within a single frame. We consider direct sound and indirect sound signals that reach the microphones after reflecting off surfaces such as ceilings or walls. 
We then generate and trace direct and reflected acoustic paths using backward acoustic ray tracing and utilize these paths with Monte Carlo localization to estimate a 3D sound source position. For complex cases with many objects, we also found that diffraction effects caused by the wave characteristics of sound become dominant. We propose to handle such non-trivial problems even with ray tracing, since directly applying wave simulation is prohibitively expensive.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-324\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-324\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-323\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tRecent Advances and Trends in Visual Tracking\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-323\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-324\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Tianzhu Zhang, University of Science and Technology of China<\/p>\n<p>Visual tracking is one of the most fundamental topics in computer vision with various applications in video surveillance, human computer interaction and vehicle navigation. Although great progress has been made in recent years, it remains a challenging problem due to factors such as illumination changes, geometric deformations, partial occlusions, fast motions and background clutters. 
In this talk, I will first review several recent models of visual tracking including particle filtering, classifier learning for tracking, sparse tracking, deep learning tracking, and correlation filter based tracking. Then, I will review several recent works of our group including correlation particle filter tracking, and graph convolutional tracking.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-326\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-326\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-325\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tRelational Knowledge Distillation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-325\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-326\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Minsu Cho, Pohang University of Science and Technology (POSTECH)<\/p>\n<p>Knowledge distillation aims at transferring knowledge acquired in one model (a teacher) to another model (a student) that is typically smaller. Previous approaches can be expressed as a form of training the student to mimic output activations of individual data examples represented by the teacher. We introduce a novel approach, dubbed relational knowledge distillation (RKD), that transfers mutual relations of data examples instead. For concrete realizations of RKD, we propose distance-wise and angle-wise distillation losses that penalize structural differences in relations. 
Experiments conducted on different tasks show that the proposed method improves trained student models by a significant margin. In particular, for metric learning, it allows students to outperform their teachers&#8217; performance, achieving state-of-the-art results on standard benchmark datasets.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-328\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-328\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-327\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tRequirements of Computer Vision for Household Robots\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-327\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-328\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Jun Takamatsu, Nara Institute of Science and Technology<\/p>\n<p>For household robots that work in dynamic everyday environments, computer vision (CV) to recognize those environments is essential. Unfortunately, the CV issues in household robots sometimes cannot be solved by the methods usually proposed in the CV field. In this talk, I present two examples and invite discussion of their solutions. The first example is CV in learning-from-observation, where it is not enough to recognize the names of actions, such as walk and jump. The second example is the analysis of time usage. 
This requires recognizing activities at the level of watching TV or pursuing one\u2019s hobby.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-330\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-330\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-329\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tSoftware and Hardware Co-design for Networked Memory\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-329\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-330\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Youyou Lu, Tsinghua University<\/p>\n<p>Non-volatile memory (NVM) and remote direct memory access (RDMA) provide extremely high performance in storage and network hardware. Comparatively, the software overhead in file systems becomes a non-negligible part of persistent memory storage systems. To achieve an efficient networked memory design, I will present the design choices in Octopus. Octopus is a distributed file system that redesigns file system internal mechanisms by closely coupling NVM and RDMA features. 
I will further discuss possible hardware enhancements for networked memory for research in my group.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-332\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-332\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-331\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tSystem support for designing efficient gradient compression algorithms for distributed DNN training\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-331\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-332\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Cheng Li, University of Science and Technology of China<\/p>\n<p>Training DNN models across a large number of connected devices or machines has become the norm. Studies suggest that the major bottleneck in scaling out training jobs is exchanging the huge volume of gradients per mini-batch. Thus, a few compression algorithms, such as Deep Gradient Compression and TernGrad, have been proposed and evaluated to demonstrate their benefits in reducing transmission cost. However, when re-implementing these algorithms and integrating them into mainstream frameworks such as MXNet, we identified that they performed less efficiently than claimed in their original papers. The major gap is that the developers of those algorithms did not necessarily understand the internals of the deep learning frameworks. 
As a consequence, we believe there is a lack of system support for enabling algorithm developers to focus primarily on the innovations of their compression algorithms, rather than on efficient implementations that must take various levels of parallelism into account. To this end, we propose a domain-specific language that allows algorithm developers to sketch their compression algorithms, a translator that converts the high-level descriptions into highly optimized low-level GPU code, and a compiler that generates new computation DAGs that fuse the compression algorithms with the operators that produce gradients.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-334\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-334\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-333\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tTowards solving the cocktail party problem: from speech separation to speech recognition\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-333\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-334\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Jun Du, University of Science and Technology of China<\/p>\n<p>Solving the cocktail party problem is an ultimate goal for machines to achieve human-level auditory perception. Speech separation and recognition are two related key techniques. 
With the emergence of deep learning, new milestones have been achieved in both speech separation and recognition. In this talk, I will introduce our recent progress and future trends in these areas, along with the development of the DIHARD and CHiME Challenges.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-336\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-336\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-335\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tToward Ubiquitous Operating Systems: Challenges and Research Directions\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-335\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-336\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Yao Guo, Peking University<\/p>\n<p>In recent years, operating systems have expanded beyond traditional computing systems into the cloud, IoT devices, and other emerging technologies, and will soon become ubiquitous. We call this new generation of OSs ubiquitous operating systems (UOSs). Despite the apparent differences among existing OSs, they all have in common so-called \u201csoftware-defined\u201d capabilities\u2014namely, resource virtualization and function programmability. 
In this talk, I will present our vision and some recent work toward the development of UOSs.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-338\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-338\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-337\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tVibration-Mediated Sensing Techniques for Tangible Interaction\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-337\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-338\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker: <\/strong>Seungmoon Choi, Pohang University of Science and Technology (POSTECH)<\/p>\n<p>Tangible interaction allows a user to interact with a computer using ordinary physical objects. It substantially expands the interaction space owing to the natural affordances and metaphors provided by real objects. However, tangible interaction requires identifying the object held by the user or how the user is touching it. In this talk, I will introduce two sensing techniques for tangible interaction, which exploit active sensing using mechanical vibration. A vibration is transmitted from an exciter worn on the user\u2019s hand or fingers, and the transmitted vibration is measured using a sensor. By comparing the input-output pair, we can recognize the object held between two fingers or the fingers touching the object. 
The mechanical vibrations also provide pleasant confirmation feedback to the user. Details will be shared in the talk.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-340\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-340\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-339\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tVideo Analytics in Crowded Spaces\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-339\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-340\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Rajesh Krishna Balan, Singapore Management University<\/p>\n<p>I will describe the line of work I am starting on video analytics in crowded spaces. This includes malls, conference centres, and university campuses in Asia. 
The goal of this work is to use video analytics, combined with other sensors, to accurately count the number of people in these environments, track their movement trajectories, and discover their demographics and personas.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-342\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-342\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-341\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tVideo Dialog via Progressive Inference and Cross-Transformer\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-341\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-342\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Zhou Zhao, Zhejiang University<\/p>\n<p>Video dialog is a new and challenging task, which requires the agent to answer questions by combining video information with dialog history. Different from single-turn video question answering, the additional dialog history is important for video dialog, as it often includes contextual information for the question. Existing visual dialog methods mainly use RNNs to encode the dialog history as a single vector representation, which can be too coarse. Some more advanced methods utilize hierarchical structures, attention, and memory mechanisms, but they still lack an explicit reasoning process. 
In this talk, we introduce a novel progressive inference mechanism for video dialog, which progressively updates query information based on dialog history and video content until the agent thinks the information is sufficient and unambiguous. To tackle the multimodal fusion problem, we propose a cross-transformer module, which can learn more fine-grained and comprehensive interactions both within and between modalities. Besides answer generation, we also consider question generation, which is more challenging but significant for a complete video dialog system. We evaluate our method on two large-scale datasets, and extensive experiments show the effectiveness of our method.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-344\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-344\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-343\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tVisual Analytics of Sports Data\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-343\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-344\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Yingcai Wu, Zhejiang University<\/p>\n<p>With the rapid development of sensing technologies and wearable devices, large volumes of sports data are acquired daily. These data carry a wide spectrum of information and rich knowledge about sports. 
Visual analytics, which facilitates analytical reasoning through interactive visual interfaces, has proven its value in solving various problems. In this talk, I will discuss our research experiences in visual analytics of sports data and introduce several recent studies from our group on making sense of sports data through interactive visualization.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-346\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-346\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-345\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tVisual Analytics for Data Quality Improvement\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-345\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-346\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Shixia Liu, Tsinghua University<\/p>\n<p>The quality of training data is crucial to the success of supervised and semi-supervised learning. Errors in data have long been known to limit the performance of machine learning models. This talk presents the motivation for, and major challenges of, interactive data quality analysis and improvement. 
With that perspective, I will then discuss some of my recent efforts on 1) analyzing and correcting poor label quality, and 2) resolving the poor coverage of the training data caused by dataset bias.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-348\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-348\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-347\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tVIS+AI: Making AI more Explainable and VIS more Intelligent\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-347\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-348\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Huamin Qu, Hong Kong University of Science and Technology<\/p>\n<p>VIS for AI and AI for VIS have become hot research topics recently. On one side, visualization plays an important role in explainable AI. On the other, AI has been transforming the visualization field and automating the whole visualization system development pipeline. 
In this talk, I will introduce the emerging opportunities of combining AI and VIS to leverage both human and artificial intelligence to solve some of the grand challenges facing both fields and society.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-350\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-350\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-349\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tWhat We Learned from Medical Image Learning\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-349\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-350\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Speaker<\/strong>: Winston Hsu, National Taiwan University<\/p>\n<p>We have observed super-human capabilities from convolutional networks for image learning. It is a natural extension to advance these technologies towards healthcare applications such as medical image segmentation (CT, MRI), registration, detection, prediction, etc. In the past few years, working closely with university hospitals, we have found many exciting developments in this area. However, we have also learned a lot working in this cross-disciplinary setup, which requires strong devotion and deep expertise from both the medical and machine learning domains. We\u2019d like to take this opportunity to share where we failed and where we succeeded in our attempts to advance machine learning for medical applications. 
We will identify promising working models (as well as the misunderstandings between these two disciplines) developed with medical experts, and demonstrate the great opportunities to discover new treatment and diagnosis methods across numerous common diseases.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Speakers\"} --><!-- wp:freeform --><h2>Workshops<\/h2>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Rajesh-Krishna-Balan.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Rajesh Krishna Balan<\/strong><\/p>\n<p>Singapore Management University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-352\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-352\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-351\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-351\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-352\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Prof. Balan is an ACM Distinguished Scientist and has worked in the area of mobile systems for over 18 years. He obtained his Ph.D. in Computer Science in 2006 from Carnegie Mellon University under the guidance of Professor Mahadev Satyanarayanan. He has been a general chair for both MobiSys 2016 and UbiComp 2018 and has served as a program chair for HotMobile 2012 and MobiSys 2019. In addition, he also organised a student workshop, called ASSET, that ran at MobiCom 2019, COMSNETS 2018, and MobiSys 2016. Prof. 
Balan has a strong interest in applied research and was a director for LiveLabs (http:\/\/www.livelabs.smu.edu.sg), a large research \/ startup lab that turned real-world environments (such as a university, a convention centre, and a resort island) into living testbeds for mobile systems experiments. He founded a startup to more effectively provide LiveLabs technologies to interested commercial clients. These experiences have given Prof. Balan great insight into how hard and meaningful it is to translate research into tangible systems that are tested and deployed in the real world.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Ting-Cao.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Ting Cao<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link 
m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-354\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-354\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-353\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-353\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-354\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Ting Cao is a Researcher in the System Research Group of MSRA. Her research interests include HW\/SW co-design, high-level language implementation, software management of heterogeneous hardware, and big data and deep learning frameworks. She has publications in reputable venues such as ISCA, ASPLOS, PLDI, and the Proceedings of the IEEE. She received her PhD from the Australian National University. 
Before joining MSRA, she was a senior software engineer in the Compiler and Computing Language Lab at Huawei Technologies.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Yue-Cao.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Yue Cao<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-356\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-356\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-355\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-355\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-356\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Yue Cao is now a researcher at Microsoft Research Asia. He received his B.E. degree in Computer Software in 2014 and his Ph.D. degree in Software Engineering in 2019, both from Tsinghua University, China. He was awarded the Top-grade Scholarship of Tsinghua University in 2018 and the Microsoft Research Asia PhD Fellowship in 2017. His research interests include computer vision and deep learning. 
He has published more than 20 papers in top-tier conferences, with more than 1,700 citations.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Xilin-Chen.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Xilin Chen<\/strong><\/p>\n<p>Chinese Academy of Sciences<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul 
class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-358\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-358\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-357\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-357\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-358\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Xilin Chen is a professor with the Institute of Computing Technology, Chinese Academy of Sciences (CAS). He has authored one book and more than 300 papers in refereed journals and proceedings in the areas of computer vision, pattern recognition, image processing, and multimodal interfaces. He is currently an associate editor of the IEEE Transactions on Multimedia, a Senior Editor of the Journal of Visual Communication and Image Representation, a leading editor of the Journal of Computer Science and Technology, and an associate editor-in-chief of the Chinese Journal of Computers and the Chinese Journal of Pattern Recognition and Artificial Intelligence. He has served as an Organizing Committee member for many conferences, including as general co-chair of FG13 \/ FG18 and program co-chair of ICMI 2010. He is \/ was an area chair of CVPR 2017 \/ 2019 \/ 2020, and ICCV 2019. 
He is a fellow of the IEEE, IAPR, and CCF.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Peng-Cheng.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Peng Cheng<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" 
data-wp-context='{\"id\":\"accordion-content-360\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-360\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-359\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-359\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-360\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Peng Cheng is a researcher in the Networking Research Group at MSRA. His research interests are computer networking and networked systems. His recent work focuses on hardware-based systems in data centers. He has publications in NSDI, CoNEXT, EuroSys, SIGCOMM, etc. He received his Ph.D. 
in Computer Science and Technology from Tsinghua University in 2015.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jaegul-Choo.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Jaegul Choo<\/strong><\/p>\n<p>Korea University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul 
class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-362\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-362\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-361\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-361\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-362\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Jaegul Choo (https:\/\/sites.google.com\/site\/jaegulchoo\/ ) is an associate professor in the Dept. of Computer Science and Engineering at Korea University. He was a research scientist at Georgia Tech from 2011 to 2015, where he also received his M.S. in 2009 and Ph.D. in 2013. His research areas include computer vision, natural language processing, data mining, and visual analytics, and his work has been published in premier venues such as KDD, WWW, WSDM, CVPR, ECCV, EMNLP, AAAI, IJCAI, ICDM, ICWSM, IEEE VIS, EuroVIS, CHI, TVCG, CFG, and CG&amp;A. 
He earned the Best Student Paper Award at ICDM in 2016, the NAVER Young Faculty Award in 2015, the Outstanding Research Scientist Award at Georgia Tech in 2015, and the Best Poster Award at IEEE VAST (as part of IEEE VIS) in 2014.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Nan-Duan.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Nan Duan<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-364\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-364\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-363\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-363\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-364\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr. Nan Duan is a Principal Research Manager at Microsoft Research Asia. 
He is working on fundamental NLP tasks, especially on question answering, natural language understanding, language + vision, pre-training and reasoning.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Winston-HSU.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Winston Hsu<\/strong><\/p>\n<p>National Taiwan University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-366\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-366\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-365\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-365\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-366\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Prof. Winston Hsu is an active researcher dedicated to large-scale image\/video retrieval\/mining, visual recognition, and machine intelligence. He is a Professor in the Department of Computer Science and Information Engineering, National Taiwan University. He and his team have been recognized with technical awards in the multimedia and computer vision research communities, including the IBM Research Pat Goldberg Memorial Best Paper Award (2018), the Best Brave New Idea Paper Award at ACM Multimedia 2017, First Place in the IARPA Disguised Faces in the Wild Competition (CVPR 2018), First Prize in the ACM Multimedia Grand Challenge 2011, and the ACM Multimedia 2013\/2014 Grand Challenge Multimodal Award. Prof. Hsu is keen on translating advanced research into business deliverables through academia-industry collaborations and co-founded startups. He was a Visiting Scientist at Microsoft Research Redmond (2014) and spent a one-year sabbatical (2016-2017) at the IBM TJ Watson Research Center. 
He served as the Associate Editor for IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) and IEEE Transactions on Multimedia, two premier journals, and was on the Editorial Board for IEEE Multimedia Magazine (2010 \u2013 2017).<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Sung-Ju-Hwang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Sung Ju Hwang<\/strong><\/p>\n<p>KAIST<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-368\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-368\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-367\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-367\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-368\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Sung Ju Hwang is an assistant professor in the Graduate School of Artificial Intelligence and School of Computing at KAIST. He received his Ph.D. degree in computer science at the University of Texas at Austin, under the supervision of Professor Kristen Grauman. Sung Ju Hwang&#8217;s research mainly focuses on developing machine learning models for tackling practical challenges in various application domains, including but not limited to visual recognition, natural language understanding, healthcare, and finance. 
He regularly presents papers at various top-tier AI conferences, such as NeurIPS, ICML, ICLR, CVPR, ICCV, AAAI and ACL.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Guolin-Ke.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Guolin Ke<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-370\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-370\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-369\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-369\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-370\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Guolin Ke is currently a Researcher in the Machine Learning Group at Microsoft Research Asia. His research interests mainly lie in machine learning algorithms.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Gunhee-Kim.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Gunhee Kim<\/strong><\/p>\n<p>Seoul National University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-372\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-372\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-371\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-371\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-372\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Gunhee Kim has been an associate professor in the Department of Computer Science and Engineering of Seoul National University since 2015. He was previously a postdoctoral researcher at Disney Research for one and a half years. 
He received his PhD in 2013 under the supervision of Eric P. Xing from the Computer Science Department of Carnegie Mellon University. Prior to starting his PhD study in 2009, he earned a master\u2019s degree under the supervision of Martial Hebert at the Robotics Institute, CMU. His research interests lie in solving computer vision and web mining problems that emerge from big image data shared online, by developing scalable and effective machine learning and optimization techniques. He is a recipient of the 2014 ACM SIGKDD Doctoral Dissertation Award and the 2015 Naver New Faculty Award.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2016\/07\/avatar_user__1469100866-180x180.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Shujie Liu<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link 
m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-374\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-374\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-373\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-373\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-374\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr. Shujie Liu is a Principal Researcher in the Natural Language Computing group at Microsoft Research Asia, Beijing, China. Shujie joined MSRA-NLC in Jul. 2012 after he received his Ph.D. in Jun. 2012 from the Department of Computer Science of Harbin Institute of Technology.<\/p>\n<p>Shujie\u2019s research interests include natural language processing and deep learning. 
He is now working on fundamental NLP problems, models, algorithms and innovations.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Xuanzhe-Liu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Xuanzhe Liu<\/strong><\/p>\n<p>Peking University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul 
class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-376\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-376\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-375\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-375\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-376\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Prof. Xuanzhe Liu has been an associate professor with the Institute of Software, Peking University, since 2011. He leads the DAAS (Data, Analytics, Applications, and Systems) lab at Peking University. Prof. Liu\u2019s recent research interests focus on measuring, engineering, and operating large-scale service-based and intelligent software systems (such as mobility and the Web), mostly from a data-driven perspective. Prof. Liu has published more than 80 papers at premier conferences such as WWW, ICSE, OOPSLA, MobiCom, UbiComp, EuroSys, and IMC, and in impactful journals such as ACM TOIS\/TOIT and IEEE TSE\/TMC\/TSC. He won the Best Paper Award of WWW 2019. He has also been recognized with several academic awards, such as the CCF-IEEE CS Young Scientist Award and the Honorable Young Faculty Award of the Yangtze River Scholar Program. Prof. Liu was a visiting researcher with Microsoft Research (under the &#8220;Star-Track Young Faculty Program&#8221;) from 2013 to 2014, and a winner of the Microsoft Ph.D. 
Fellowship in 2007.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jiwen-Lu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Jiwen Lu<\/strong><\/p>\n<p>Tsinghua University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" 
data-wp-context='{\"id\":\"accordion-content-378\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-378\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-377\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-377\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-378\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Jiwen Lu is currently an Associate Professor with the Department of Automation, Tsinghua University, China. His current research interests include computer vision, machine learning, and intelligent robotics. He has authored\/co-authored over 200 scientific papers in these areas, of which over 70 are IEEE Transactions papers and over 50 are CVPR\/ICCV\/ECCV papers. He was a recipient of the National 1000 Young Talents Program of China in 2015, and the National Science Fund of China Award for Excellent Young Scholars in 2018. He serves as Co-Editor-in-Chief of PR Letters and an Associate Editor of T-IP\/T-CSVT\/T-BIOM\/PR. 
He is the Program Co-Chair of ICME\u20192020, AVSS\u20192020 and DICTA\u20192019, and an Area Chair for CVPR\u20192020, ICME\u20192017-2019, ICIP\u20192017-2019, and ICPR 2018.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Chong-Luo.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Chong Luo<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-380\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-380\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-379\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-379\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-380\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p> Dr. Chong Luo joined Microsoft Research Asia in 2003 and is now a Principal Researcher at the Intelligent Multimedia Group (IMG). She is an adjunct professor and a Ph.D. advisor at the University of Science and Technology of China (USTC), China. Her current research interests include computer vision, cross-modality multimedia analysis and processing, and multimedia communications. In particular, she is interested in visual object tracking, audio-visual and text-visual video analysis, and hybrid digital-analog transmission of wireless video. She is currently a member of the Multimedia Systems and Applications (MSA) Technical Committee (TC) of the IEEE Circuits and Systems (CAS) society. 
She is an IEEE senior member.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Sinno-Jialin-Pan.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Sinno Jialin Pan<\/strong><\/p>\n<p>Nanyang Technological University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" 
data-wp-context='{\"id\":\"accordion-content-382\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-382\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-381\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-381\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-382\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr. Sinno Jialin Pan is a Provost&#8217;s Chair Associate Professor with the School of Computer Science and Engineering, and Deputy Director of the Data Science and AI Research Centre at Nanyang Technological University (NTU), Singapore. He received his Ph.D. degree in computer science from the Hong Kong University of Science and Technology (HKUST) in 2011. Prior to joining NTU, he was a scientist and Lab Head of text analytics with the Data Analytics Department, Institute for Infocomm Research, Singapore from Nov. 2010 to Nov. 2014. He joined NTU as a Nanyang Assistant Professor (a university-named assistant professorship) in Nov. 2014. He was named to &#8220;AI 10 to Watch&#8221; by the IEEE Intelligent Systems magazine in 2018. 
His research interests include transfer learning, and its applications to wireless-sensor-based data mining, text mining, sentiment analysis, and software engineering.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2018\/03\/Xu-Tan-Profile-Photo-360-x-360.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Xu Tan<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-384\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-384\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-383\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-383\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-384\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Xu Tan is currently a Senior Researcher in the Machine Learning Group at Microsoft Research Asia (MSRA). He graduated from Zhejiang University in March 2015. 
His research interests mainly lie in machine learning, deep learning, low-resource learning, and their applications in natural language processing and speech processing, including neural machine translation and text-to-speech.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Chuan-Wu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Chuan Wu<\/strong><\/p>\n<p>University of Hong Kong<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-386\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-386\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-385\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-385\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-386\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Chuan Wu received her B.Engr. and M.Engr. degrees in 2000 and 2002 from the Department of Computer Science and Technology, Tsinghua University, China, and her Ph.D. degree in 2008 from the Department of Electrical and Computer Engineering, University of Toronto, Canada. Between 2002 and 2004, she worked in the Information Technology industry in Singapore. Since September 2008, Chuan Wu has been with the Department of Computer Science at the University of Hong Kong, where she is currently an Associate Professor. Her current research is in the areas of cloud computing, distributed machine learning\/big data analytics systems, and smart elderly care technologies\/systems. 
She is a senior member of IEEE, a member of ACM, and an associate editor of IEEE Transactions on Cloud Computing, IEEE Transactions on Multimedia, IEEE Transactions on Circuits and Systems for Video Technology and ACM Transactions on Modeling and Performance Evaluation of Computing Systems. She was the co-recipient of the best paper awards of HotPOST 2012 and ACM e-Energy 2016.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Yingce-Xia.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Yingce Xia<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-388\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-388\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-387\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-387\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-388\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>I am currently a researcher in the Machine Learning Group at Microsoft Research Asia. I received my Ph.D. degree from the University of Science and Technology of China in 2018, supervised by Dr. Tie-Yan Liu and Prof. Nenghai Yu. 
Prior to that, I obtained my bachelor&#8217;s degree from the University of Science and Technology of China in 2013.<\/p>\n<p>My research revolves around dual learning (a new learning paradigm proposed by our group) and deep learning (with applications to neural machine translation and image processing).<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2016\/09\/avatar_user__1474853894-180x180.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Dongdong Zhang<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-390\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-390\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-389\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-389\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-390\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr. Dongdong Zhang is a researcher in the Natural Language Computing group at Microsoft Research Asia, Beijing, China. He received his Ph.D. in Dec. 2005 from the Department of Computer Science of Harbin Institute of Technology under the supervision of Prof. Jianzhong Li. Before that, he received his B.S. and M.S. degrees from the same department in 1999 and 2001, respectively.<\/p>\n<p>Dongdong\u2019s research interests include natural language processing, machine translation, and machine learning. 
He is now working on the research and development of advanced statistical machine translation (SMT) systems as well as related fundamental NLP problems, models, algorithms, and innovations.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Quanlu-Zhang.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Quanlu Zhang<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-392\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-392\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-391\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-391\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-392\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Quanlu Zhang is a senior researcher at MSRA. He obtained his PhD in computer science from Peking University. His current focus is on AutoML systems, GPU cluster management, resource scheduling, and storage support for DL workloads. 
His work has been published at conferences such as OSDI, SoCC, and FAST.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<h2>Breakout Sessions<\/h2>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Rajesh-Krishna-Balan.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Rajesh Krishna Balan<\/strong><\/p>\n<p>Singapore Management University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-394\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-394\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-393\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-393\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-394\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Prof. Balan is an ACM Distinguished Scientist and has worked in the area of mobile systems for over 18 years. He obtained his Ph.D. in Computer Science in 2006 from Carnegie Mellon University under the guidance of Professor Mahadev Satyanarayanan. He has been a general chair for both MobiSys 2016 and UbiComp 2018 and has served as a program chair for HotMobile 2012 and MobiSys 2019. In addition, he organised a student workshop, called ASSET, that ran at MobiCom 2019, COMSNETS 2018, and MobiSys 2016. Prof. Balan has a strong interest in applied research and was a director for LiveLabs (http:\/\/www.livelabs.smu.edu.sg), a large research \/ startup lab that turned real-world environments (such as a university, a convention centre, and a resort island) into living testbeds for mobile systems experiments. He founded a startup to more effectively provide LiveLabs technologies to interested commercial clients. 
These experiences have given Prof. Balan great insight into how hard, and how meaningful, it is to translate research into tangible systems that are tested and deployed in the real world.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/lei-chen.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Lei Chen<\/strong><\/p>\n<p>Hong Kong University of Science and Technology<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-396\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-396\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-395\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-395\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-396\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Lei Chen received his BS degree in computer science and engineering from Tianjin University, Tianjin, China, his MA degree from the Asian Institute of Technology, Bangkok, Thailand, and his Ph.D. in computer science from the University of Waterloo, Canada. He is a professor in the Department of Computer Science and Engineering, Hong Kong University of Science and Technology (HKUST). Currently, Prof. Chen serves as the director of the Big Data Institute at HKUST, the director of the Master of Science program in Big Data Technology, and the director of the HKUST MOE\/MSRA Information Technology Key Laboratory. Prof. Chen\u2019s research includes human-powered machine learning, crowdsourcing, Blockchain, social media analysis, probabilistic and uncertain databases, and privacy-preserved data publishing. 
Prof. Chen received the SIGMOD Test-of-Time Award in 2015. The system developed by Prof. Chen\u2019s team won the Excellent Demonstration Award at VLDB 2014. Currently, Prof. Chen serves as Editor-in-Chief of the VLDB Journal, Associate Editor-in-Chief of IEEE Transactions on Knowledge and Data Engineering, and Program Committee Co-Chair for VLDB 2019. He is an ACM Distinguished Member and an IEEE Senior Member.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Wen-Huang-Cheng.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Wen-Huang Cheng<\/strong><\/p>\n<p>National Chiao Tung University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-398\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-398\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-397\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-397\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-398\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Wen-Huang Cheng is a Professor with the Institute of Electronics, National Chiao Tung University (NCTU), Hsinchu, Taiwan, where he is the Founding Director of the Artificial Intelligence and Multimedia Laboratory (AIMMLab). Before joining NCTU, he led the Multimedia Computing Research Group at the Research Center for Information Technology Innovation (CITI), Academia Sinica, Taipei, Taiwan, from 2010 to 2018. His current research interests include multimedia, artificial intelligence, computer vision, machine learning, social media, and financial technology. 
He has actively participated in international events and has played leading roles in prestigious journals, conferences, and professional organizations, including serving as Associate Editor for IEEE Multimedia, General Co-chair for ACM ICMR (2021), TPC Co-chair for ICME (2020), Chair-Elect for the IEEE MSA-TC, and governing board member for IAPR. He has received numerous research and service awards, including the 2018 MSRA Collaborative Research Award, the 2017 Ta-Yu Wu Memorial Award from Taiwan\u2019s Ministry of Science and Technology (the highest national research honor for young Taiwanese researchers under age 42), the Top 10% Paper Award from the 2015 IEEE MMSP, the K. T. Li Young Researcher Award from the ACM Taipei\/Taiwan Chapter in 2014, the 2017 Significant Research Achievements of Academia Sinica, the 2016 Y. Z. Hsu Scientific Paper Award, the Outstanding Youth Electrical Engineer Award from the Chinese Institute of Electrical Engineering in 2015, and the Outstanding Reviewer Award of the 2018 IEEE ICME.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Minsu-Cho.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Minsu Cho<\/strong><\/p>\n<p>Pohang University of Science and Technology (POSTECH)<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-400\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-400\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-399\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-399\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-400\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Minsu Cho is an assistant professor at the Department of Computer Science and Engineering at POSTECH, South Korea, leading POSTECH Computer Vision Lab. 
Before joining POSTECH in the fall of 2016, he worked as a postdoctoral and starting researcher at Inria (the French National Institute for Computer Science and Applied Mathematics) and ENS (\u00c9cole Normale Sup\u00e9rieure), Paris, France. He completed his Ph.D. in 2012 at Seoul National University, Korea. His research lies in the areas of computer vision and machine learning, especially in the problems of object discovery, weakly-supervised learning, semantic correspondence, and graph matching. In general, he is interested in the relationship between correspondence and supervision in visual learning. He is an editorial board member of the International Journal of Computer Vision (IJCV) and has served as an area chair at top computer vision conferences, including CVPR 2018, ICCV 2019, and CVPR 2020.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Seungmoon-Choi.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Seungmoon Choi<\/strong><\/p>\n<p>Pohang University of Science and Technology (POSTECH)<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-402\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-402\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-401\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-401\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-402\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Seungmoon Choi, PhD, is a Professor of Computer Science and Engineering at POSTECH in Korea. He received the BS and MS degrees from Seoul National University and the PhD degree from Purdue University. 
His main research area is haptics, the science and technology for the sense of touch, as well as its application to various domains including robotics, virtual reality, human-computer interaction, and consumer electronics. He received a 2011 Early Career Award from the IEEE Technical Committee on Haptics.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jaegul-Choo.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Jaegul Choo<\/strong><\/p>\n<p>Korea University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-404\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-404\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-403\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-403\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-404\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Jaegul Choo (https:\/\/sites.google.com\/site\/jaegulchoo\/) is an associate professor in the Dept. of Computer Science and Engineering at Korea University. He was a research scientist at Georgia Tech from 2011 to 2015, where he also received his M.S. in 2009 and Ph.D. in 2013. His research areas include computer vision, natural language processing, data mining, and visual analytics, and his work has been published in premier venues such as KDD, WWW, WSDM, CVPR, ECCV, EMNLP, AAAI, IJCAI, ICDM, ICWSM, IEEE VIS, EuroVIS, CHI, TVCG, CFG, and CG&amp;A. 
He earned the Best Student Paper Award at ICDM in 2016, the NAVER Young Faculty Award in 2015, the Outstanding Research Scientist Award at Georgia Tech in 2015, and the Best Poster Award at IEEE VAST (as part of IEEE VIS) in 2014.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Chenhui-Chu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Chenhui Chu<\/strong><\/p>\n<p>Osaka University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-406\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-406\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-405\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-405\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-406\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Chenhui Chu received his B.S. in Software Engineering from Chongqing University in 2008, and his M.S. and Ph.D. in Informatics from Kyoto University in 2012 and 2015, respectively. He is currently a research assistant professor at Osaka University. His research has won the 2019 MSRA collaborative research grant award, the 2018 AAMT Nagao Award, and the CICLing 2014 Best Student Paper Award. He is on the editorial boards of the Journal of Natural Language Processing and the Journal of Information Processing, and is a steering committee member of the Young Researcher Association for NLP Studies. 
His research interests center on natural language processing, particularly machine translation and language and vision understanding.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jun-Du.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Jun Du<\/strong><\/p>\n<p>University of Science and Technology of China<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-408\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-408\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-407\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-407\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-408\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Jun Du received the B.Eng. and Ph.D. degrees from the Department of Electronic Engineering and Information Science, University of Science and Technology of China (USTC), in 2004 and 2009, respectively. From July 2009 to June 2010, he was with iFlytek Research, leading a team to develop the ASR prototype system of the mobile app \u201ciFlytek Input\u201d. From July 2010 to January 2013, he was with MSRA as an Associate Researcher, working on handwriting recognition, OCR, and speech recognition. Since February 2013, he has been with the National Engineering Laboratory for Speech and Language Information Processing (NEL-SLIP), USTC. His main research interests include speech signal processing and pattern recognition applications. He has published more than 100 conference and journal papers with more than 2300 citations in Google Scholar. His team is one of the pioneers in the area of deep-learning-based speech enhancement, having published two ESI highly cited papers. 
The IEEE-ACM TASLP paper \u201cA Regression Approach to Speech Enhancement Based on Deep Neural Networks\u201d, of which he is the corresponding author, received the 2018 IEEE Signal Processing Society Best Paper Award. Building on these speech enhancement achievements, he led a joint team of members from USTC and iFlytek Research that won all three tasks in the 2016 CHiME-4 challenge and all four tasks in the 2018 CHiME-5 challenge. He is currently an associate editor of IEEE-ACM TASLP and one of the organizers of the DIHARD Challenge in 2018 and 2019.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Ryo-Furukawa.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Ryo Furukawa<\/strong><\/p>\n<p>Hiroshima City University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link 
m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-410\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-410\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-409\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-409\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-410\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Ryo Furukawa is an associate professor of Faculty of Information Sciences, Hiroshima City University, Hiroshima, Japan. He received his Ph.D. from Nara Institute of Science and Technology, Japan. His research area includes shape-capturing, 3D modeling, image-based rendering, and medical image analysis. 
He has won academic awards including the ACCV Songde Ma Outstanding Paper Award (2007), the PSIVT Best Paper Award (2009), the IEVC 2014 Best Paper Award, the IEEE WACV Best Paper Honorable Mention (2017), and the KUKA Best Paper Award 3rd Place at the MICCAI Workshop CARE (2018).<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Yao-Guo.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Yao Guo<\/strong><\/p>\n<p>Peking University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-412\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-412\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-411\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-411\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-412\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Yao Guo is a professor and vice chair of the Department of Computer Science at Peking University. His recent research interests mainly focus on mobile app analysis, as well as privacy and security of mobile systems. He has received multiple awards for his research work and teaching, including First Prize of National Technology Invention Award, an Honorable Mention Award from UbiComp 2016, as well as a Teaching Excellence Award from Peking University. 
He received his PhD in computer engineering from University of Massachusetts, Amherst in 2007, and BS\/MS degrees in computer science from Peking University.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Bohyung-Han.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Bohyung Han<\/strong><\/p>\n<p>Seoul National University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-414\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-414\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-413\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-413\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-414\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Bohyung Han is an Associate Professor in the Department of Electrical and Computer Engineering at Seoul National University, Korea. Prior to the current position, he was an Associate Professor in the Department of Computer Science and Engineering at POSTECH, Korea and a visiting research scientist in Machine Intelligence Group at Google, Venice, CA, USA. He is currently visiting Snap Research, Venice, CA. He received the B.S. and M.S. degrees from Seoul National University, Korea, in 1997 and 2000, respectively, and the Ph.D. in Computer Science at the University of Maryland, College Park, MD, USA, in 2005. He served or will be serving as an Area Chair or Senior Program Committee member of major conferences in computer vision and machine learning including CVPR, ICCV, NIPS\/NeurIPS, IJCAI and ACCV, a Tutorial Chair in ICCV 2019, a General Chair in ACCV 2022, a Demo Chair in ECCV 2022, a Workshop Chair in ACCV 2020, and a Demo Chair in ACCV 2014. 
His research interest is computer vision and machine learning with emphasis on deep learning.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Winston-HSU.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Winston Hsu<\/strong><\/p>\n<p>National Taiwan University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul 
class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-416\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-416\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-415\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-415\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-416\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Prof. Winston Hsu is an active researcher dedicated to large-scale image\/video retrieval\/mining, visual recognition, and machine intelligence. He is a Professor in the Department of Computer Science and Information Engineering, National Taiwan University. He and his team have been recognized with technical awards in the multimedia and computer vision research communities, including the IBM Research Pat Goldberg Memorial Best Paper Award (2018), the Best Brave New Idea Paper Award at ACM Multimedia 2017, First Place in the IARPA Disguised Faces in the Wild Competition (CVPR 2018), First Prize in the ACM Multimedia Grand Challenge 2011, and the ACM Multimedia 2013\/2014 Grand Challenge Multimodal Award. Prof. Hsu is keen on turning advanced research into business deliverables through academia-industry collaborations and co-founded startups. He was a Visiting Scientist at Microsoft Research Redmond (2014) and spent a one-year sabbatical (2016-2017) at the IBM TJ Watson Research Center. 
He served as the Associate Editor for IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) and IEEE Transactions on Multimedia, two premier journals, and was on the Editorial Board for IEEE Multimedia Magazine (2010 \u2013 2017).<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Seung-won-Hwang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Seung-won Hwang<\/strong><\/p>\n<p>Yonsei University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-418\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-418\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-417\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-417\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-418\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Prof. Seung-won Hwang is a Professor of Computer Science at Yonsei University. Prior to joining Yonsei, she was an Associate Professor at POSTECH for 10 years, after receiving her PhD from UIUC. Her recent research interests have been machine intelligence from data, language, and knowledge, leading to 100+ publications at top-tier AI, DB\/DM, and NLP venues, including ACL, AAAI, EMNLP, IJCAI, KDD, SIGIR, SIGMOD, and VLDB. She has received a best paper runner-up award from WSDM and an outstanding collaboration award from Microsoft Research. 
Details can be found at http:\/\/dilab.yonsei.ac.kr\/~swhwang.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Hong-Gong-Kang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Hong-Goo Kang<\/strong><\/p>\n<p>Yonsei University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul 
class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-420\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-420\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-419\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-419\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-420\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Hong-Goo Kang received the B.S., M.S., and Ph.D. degrees from Yonsei University, Korea in 1989, 1991, and 1995, respectively. From 1996 to 2002, he was a senior technical staff member at AT&amp;T Labs-Research, Florham Park, New Jersey. He was an associate editor of the IEEE Transactions on Audio, Speech, and Language Processing from 2005 to 2008, and served on numerous conference and program committees. In 2008-2009 and 2015-2016, he worked at Broadcom (Irvine, CA) and Google (Mountain View, CA), respectively, as a visiting scholar, where he participated in various projects on speech signal processing. 
His research interests include speech\/audio signal processing, machine learning, and human computer interface.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Gunhee-Kim.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Gunhee Kim<\/strong><\/p>\n<p>Seoul National University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-422\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-422\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-421\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-421\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-422\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Gunhee Kim has been an associate professor in the Department of Computer Science and Engineering of Seoul National University since 2015. He was a postdoctoral researcher at Disney Research for one and a half years. He received his PhD in 2013 under the supervision of Eric P. Xing in the Computer Science Department of Carnegie Mellon University. Prior to starting his PhD study in 2009, he earned a master\u2019s degree under the supervision of Martial Hebert in the Robotics Institute, CMU. His research interests are solving computer vision and web mining problems that emerge from big image data shared online, by developing scalable and effective machine learning and optimization techniques. 
He is a recipient of the 2014 ACM SIGKDD Doctoral Dissertation Award and the 2015 Naver New Faculty Award.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jong-Kim.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Jong Kim<\/strong><\/p>\n<p>Pohang University of Science and Technology (POSTECH)<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-424\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-424\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-423\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-423\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-424\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Jong Kim is a professor in the Department of Computer Science and Engineering at Pohang University of Science and Technology (POSTECH). He received his Ph.D. degree from Penn State University in 1991. From 1991 to 1992, he worked at the University of Michigan as a Research Fellow. His research interests include dependable computing, hardware security, mobile security, and machine learning security. He has published papers at top security and systems conferences including S&amp;P, NDSS, CCS, WWW, Micro, and RTSS.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Min-H.-Kim.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Min H. 
Kim<\/strong><\/p>\n<p>KAIST<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-426\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-426\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-425\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-425\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-426\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Min H. Kim is a KAIST-Endowed Chair Professor of Computer Science at KAIST, Korea, leading the Visual Computing Laboratory (VCLAB). Before coming to KAIST, he had been a postdoctoral researcher at Yale University, working on hyperspectral 3D imaging. He received his Ph.D. in computer science from University College London (UCL) in 2010, with a focus on HDR color reproduction for high-fidelity computer graphics. In addition to serving on international program committees, e.g., ACM SIGGRAPH Asia, Eurographics (EG), Pacific Graphics (PG), CVPR, and ICCV, he has worked as an associate editor of ACM Transactions on Graphics (TOG), ACM Transactions on Applied Perception (TAP), and Elsevier Computers and Graphics (CAG). 
His recent research interests span a wide variety of computational imaging, including computational photography, hyperspectral imaging, BRDF acquisition, and 3D imaging.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Heejo-Lee.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Heejo Lee<\/strong><\/p>\n<p>Korea University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-428\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-428\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-427\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-427\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-428\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Heejo Lee is a Professor in the Department of Computer Science and Engineering, Korea University (KU), Seoul, Korea, and the director of CSSA (Center for Software Security and Assurance). Before joining KU, he was at AhnLab, Inc., the leading security company in Korea, as CTO from 2001 to 2003. He received his BS, MS, and PhD from POSTECH, and worked at Purdue and CMU. 
He is a recipient of the (ISC)\u00b2 ISLA award and received its most prestigious recognition, the Asia-Pacific Community Service Star, in 2016.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Seong-Whan-Lee.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Seong-Whan Lee<\/strong><\/p>\n<p>Korea University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-430\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-430\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-429\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-429\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-430\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Seong-Whan Lee is a full professor at Korea University, where he is the head of the Department of Artificial Intelligence and the Department of Brain and Cognitive Engineering.<\/p>\n<p>A Fellow of the IAPR(1998), IEEE(2009), and Korean Academy of Science and Technology(2009), he has served several professional societies as chairman or governing board member. He was the founding Co-Editor-in-Chief of the International Journal of Document Analysis and Recognition and has been an Associate Editor of several international journals: Pattern Recognition, ACM Trans. on Applied Perception, IEEE Trans. 
on Affective Computing, Image and Vision Computing, International Journal of Pattern Recognition and Artificial Intelligence, and International Journal of Image and Graphics.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Seung-Ah-Lee.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Seung Ah Lee<\/strong><\/p>\n<p>Yonsei University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-432\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-432\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-431\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-431\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-432\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Seung Ah Lee is an assistant professor in the Department of Electrical and Electronic Engineering at Yonsei University. She joined Yonsei University in Fall 2018 and currently leads the Optical Imaging Systems Laboratory. Prior to Yonsei, she was a scientist at Verily Life Sciences, formerly the Google [x] team, from 2015 to 2018. She received her PhD in Electrical Engineering at Caltech (2014) and completed postdoctoral training at Stanford Bioengineering (2014-2015). 
She completed her BS (2007) and MS (2009) degrees in Electrical Engineering at Seoul National University.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Seungyong-Lee.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Seungyong Lee<\/strong><\/p>\n<p>Pohang University of Science and Technology (POSTECH)<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-434\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-434\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-433\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-433\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-434\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Seungyong Lee is a professor of computer science and engineering at Pohang University of Science and Technology (POSTECH), Korea. He received a PhD degree in computer science from Korea Advanced Institute of Science and Technology (KAIST) in 1995. From 1995 to 1996, he worked at City College of New York as a postdoctoral researcher. Since 1996, he has been a faculty member of POSTECH, where he leads the Computer Graphics Group. During his sabbatical years, he worked at MPI Informatik (2003-2004) and the Creative Technologies Lab at Adobe Systems (2010-2011). His technologies for image deblurring and photo upright adjustment have been transferred to Adobe Creative Cloud and Adobe Photoshop Lightroom. 
His current research interests include image and video processing, deep learning based computational photography, and 3D scene reconstruction.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jingwen-Leng.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Jingwen Leng<\/strong><\/p>\n<p>Shanghai Jiao Tong University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-436\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-436\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-435\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-435\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-436\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Jingwen Leng is an Assistant Professor in the John Hopcroft Computer Science Center and Computer Science &amp; Engineering Department at Shanghai Jiao Tong University. His research focuses on building efficient and resilient architectures for deep learning. He received his Ph.D. 
from the University of Texas at Austin, where he worked on improving the efficiency and resiliency of general-purpose GPUs.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Cheng-Li.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Cheng Li<\/strong><\/p>\n<p>University of Science and Technology of China<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-438\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-438\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-437\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-437\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-438\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Cheng Li is a research professor at the School of Computer Science and Technology, University of Science and Technology of China (USTC). His research interests span topics related to improving the performance, consistency, fault tolerance, and availability of distributed systems. Prior to joining USTC, he was an associate researcher at INESC-ID, Portugal, and a senior member of technical staff at Oracle Labs in Switzerland. He received his PhD degree from the Max Planck Institute for Software Systems (MPI-SWS) in 2016, and his bachelor&#8217;s degree from Nankai University in 2009. His work has been published in premier peer-reviewed systems research venues such as OSDI, USENIX ATC, EuroSys, and TPDS. He is a member of the ACM Future of Computing Academy. 
He was a co-chair of the Program Committee for the ACM SOSP 2017 Poster Session and the ACM TURC 2018 SIGOPS\/ChinaSys workshop.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Shou-De-Lin.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Shou-De Lin<\/strong><\/p>\n<p>National Taiwan University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-440\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-440\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-439\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-439\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-440\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Shou-de Lin is currently a full professor in the CSIE department of National Taiwan University. He holds a BS degree in EE from National Taiwan University, an MS-EE degree from the University of Michigan, and an MS degree in Computational Linguistics and a PhD in Computer Science, both from the University of Southern California. He leads the Machine Discovery and Social Network Mining Lab at NTU. Before joining NTU, he was a post-doctoral research fellow at the Los Alamos National Lab. Prof. Lin&#8217;s research spans machine learning and data mining, social network analysis, and natural language processing. His international recognition includes the best paper award at the IEEE Web Intelligence Conference 2003, a Google Research Award in 2007, Microsoft research awards in 2008, 2015, and 2016, merit paper awards at TAAI 2010, 2014, and 2016, the best paper award at ASONAM 2011, and US AFOSR\/AOARD aerospace research awards in five separate years. He is an all-time winner of the ACM KDD Cup, having led or co-led the NTU team to five championships. He also led a team to win the WSDM Cup 2016. 
He has served as a senior PC member for SIGKDD and an area chair for ACL. He is also the co-founder and chief scientist of the start-up The OmniEyes.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jiaying-Liu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Jiaying Liu<\/strong><\/p>\n<p>Peking University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-442\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-442\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-441\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-441\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-442\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Jiaying Liu is currently an Associate Professor with the Institute of Computer Science and Technology, Peking University. She received the Ph.D. degree (Hons.) in computer science from Peking University, Beijing, China, in 2010. She has authored over 100 technical articles in refereed journals and proceedings, and holds 42 granted patents. Her current research interests include multimedia signal processing, compression, and computer vision.<\/p>\n<p>Dr. Liu is a Senior Member of IEEE, CSIG and CCF. She was a Visiting Scholar with the University of Southern California, Los Angeles, from 2007 to 2008. She was a Visiting Researcher with Microsoft Research Asia in 2015, supported by the Star Track Young Faculties Award. She has served as a member of the Multimedia Systems &amp; Applications Technical Committee (MSA TC), the Visual Signal Processing and Communications Technical Committee (VSPC TC), and the Education and Outreach Technical Committee (EO TC) in the IEEE Circuits and Systems Society, and as a member of the Image, Video, and Multimedia (IVM) Technical Committee in APSIPA. 
She has also served as the Technical Program Chair of IEEE VCIP-2019\/ACM ICMR-2021, the Publicity Chair of IEEE ICIP-2019\/VCIP-2018\/MIPR 2020, the Grand Challenge Chair of IEEE ICME-2019, and the Area Chair of ICCV-2019. She was the APSIPA Distinguished Lecturer (2016-2017).<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Shixia-Liu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Shixia Liu<\/strong><\/p>\n<p>Tsinghua University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-444\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-444\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-443\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-443\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-444\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Shixia Liu is a tenured associate professor at Tsinghua University. Her research interests include explainble machine learning, interative data quality improvement, and visual text analytics. Shixia is an associate Editor-in-Chief of IEEE Transactions on Visualization and Computer Graphics, IEEE Transactions on Big Data, and ACM Transactions on Interactive Intelligent Systems . 
She was the Papers Co-Chair of IEEE VAST 2016\/2017 and the Program Co-Chair of PacificVis 2014.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Youyou-Lu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Youyou Lu<\/strong><\/p>\n<p>Tsinghua University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul 
class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-446\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-446\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-445\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-445\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-446\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Youyou Lu is an assistant professor in the Department of Computer Science and Technology at Tsinghua University. He obtained his B.S. degree from Nanjing University in 2009 and his Ph.D degree from Tsinghua University in 2015, both in Computer Science, and was a postdoctoral fellow at Tsinghua from 2015 to 2017. His current research interests include file and storage systems spanning from architectural to system levels. His research works have been published at a number of top-tier conferences including FAST, USENIX ATC, SC, EuroSys etc. His research won the Best Paper Award at NVMSA 2014 and was selected into the Best Papers at MSST 2015. 
He was selected for the Young Elite Scientists Sponsorship Program by CAST (China Association for Science and Technology) in 2015, and received the CCF Outstanding Doctoral Dissertation Award in 2016.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Atsuko-Miyaji.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Atsuko Miyaji<\/strong><\/p>\n<p>Osaka University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-448\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-448\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-447\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-447\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-448\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>She received the Dr. Sci. degrees in mathematics from Osaka University, Osaka, Japan in 1997. She joined Panasonic Co., LTD from 1990 to 1998.She was an associate professor at the Japan Advanced Institute of Science and Technology (JAIST) in 1998. She joined the UC Davis from 2002 to 2003. She has been a professor at JAIST, a professor at Osaka University, and an Auditor of Information-technology Promotion Agency Japan since 2007, 2015 and 2016 respectively. 
She has been an editor of ISO\/IEC since 2000.<\/p>\n<p>She received the Young Paper Award of SCIS&#8217;93 in 1993, the Notable Invention Award of the Science and Technology Agency in 1997, the IPSJ Sakai Special Researcher Award in 2002, the Standardization Contribution Award in 2003, Engineering Sciences Society: Certificate of Appreciation in 2005, the AWARD for the contribution to CULTURE of SECURITY in 2007, the IPSJ\/ITSCJ Project Editor Award in 2007, 2008, 2009, 2010, 2012, and 2016, the Director-General of Industrial Science and Technology Policy and Environment Bureau Award in 2007, the DoCoMo Mobile Science Award in 2008, the ADMA 2010 Best Paper Award, the Prizes for Science and Technology in the Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology, the ATIS 2016 Best Paper Award, the IEEE TrustCom 2017 Best Paper Award, and the IEICE milestone certification in 2017.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Tadashi-Nomoto.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Tadashi Nomoto<\/strong><\/p>\n<p>The SOKENDAI Graduate School of Advanced Studies<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-450\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-450\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-449\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-449\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-450\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Tadashi Nomoto is currently an associate professor at Graduate University for Advanced Studies (SOKENDAI) with a joint appointment to National Institute of Japanese Literature. 
He has been actively engaged in natural language processing and information retrieval for more than a decade, both in academia and in industry. His research interests include computational linguistics, digital libraries, data mining, machine translation, and quantitative media analysis. He has published extensively in major international conferences (such as SIGIR, ACL, ICML, and CIKM). He holds an MA in Linguistics from Sophia University, Japan, and a PhD in Computer Science from the Nara Institute of Science and Technology, also in Japan.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Sinno-Jialin-Pan.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Sinno Jialin Pan<\/strong><\/p>\n<p>Nanyang Technological University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link 
m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-452\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-452\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-451\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-451\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-452\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr Sinno Jialin Pan is a Provost&#8217;s Chair Associate Professor with the School of Computer Science and Engineering, and Deputy Director of the Data Science and AI Research Centre at Nanyang Technological University (NTU), Singapore. He received his Ph.D. degree in computer science from the Hong Kong University of Science and Technology (HKUST) in 2011. Prior to joining NTU, he was a scientist and Lab Head of text analytics with the Data Analytics Department, Institute for Infocomm Research, Singapore from Nov. 2010 to Nov. 2014. He joined NTU as a Nanyang Assistant Professor (university named assistant professor) in Nov. 2014. 
He was named to &#8220;AI 10 to Watch&#8221; by the IEEE Intelligent Systems magazine in 2018. His research interests include transfer learning, and its applications to wireless-sensor-based data mining, text mining, sentiment analysis, and software engineering.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/1998\/02\/asia-slt-tim-pan-1910.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Tim Pan<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-454\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-454\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-453\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-453\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-454\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr. Tim Pan is the senior director of Outreach of Microsoft Research Asia, responsible for the lab\u2019s academic collaboration in the Asia-Pacific region. 
He establishes strategies and directions, identifies business opportunities, and designs various programs and projects that strengthen partnership between Microsoft Research and academia.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Xueming-Qian.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Xueming Qian<\/strong><\/p>\n<p>Xi&#8217;an Jiaotong University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-456\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-456\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-455\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-455\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-456\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Xueming Qian, Ph.D., Professor, received the B.S. and M.S. degrees from Xi&#8217;an University of Technology, Xi&#8217;an, China, in 1999 and 2004, respectively, and the Ph.D. degree from the School of Electronics and Information Engineering, Xi&#8217;an Jiaotong University, Xi&#8217;an, China, in 2008. He was awarded a Microsoft Fellowship in 2006, and the outstanding doctoral dissertation awards of Xi&#8217;an Jiaotong University and Shaanxi Province in 2010 and 2011, respectively. He is the director of SMILES LAB. He was a visiting scholar at Microsoft Research Asia from August 2010 to March 2011. 
His research interests include social and mobile multimedia mining, learning, and search.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Huamin-Qu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Huamin Qu<\/strong><\/p>\n<p>Hong Kong University of Science and Technology<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-458\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-458\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-457\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-457\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-458\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Huamin Qu is a full professor in the Department of Computer Science and Engineering (CSE) at the Hong Kong University of Science and Technology (HKUST). His main research interests are in data visualization and human-computer interaction, with focuses on explainable AI, urban informatics, social media analysis, E-learning, and text visualization. He has served as papers co-chair for IEEE VIS\u201914, VIS\u201915, and VIS\u201918 and as an associate editor of IEEE Transactions on Visualization and Computer Graphics (TVCG). 
He received a BS in Mathematics from Xi\u2019an Jiaotong University and a PhD in Computer Science from Stony Brook University.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Junichi-Rekimoto.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Junichi Rekimoto<\/strong><\/p>\n<p>The University of Tokyo<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-460\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-460\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-459\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-459\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-460\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Jun Rekimoto received his B.A.Sc., M.Sc., and Ph.D. in Information Science from Tokyo Institute of Technology in 1984, 1986, and 1996, respectively. From 1986 to 1994, he worked for the Software Laboratory of NEC. During 1992-1993, he worked in the Computer Graphics Laboratory at the University of Alberta, Canada, as a visiting scientist. Since 1994 he has worked for Sony Computer Science Laboratories (Sony CSL). In 1999 he formed, and has since directed, the Interaction Laboratory within Sony CSL.<\/p>\n<p>Rekimoto&#8217;s research interests include computer augmented environments, mobile\/wearable computing, virtual reality, and information visualization. He has authored dozens of refereed publications in the area of human-computer interaction, including at ACM CHI and UIST. One of his publications was recognized with the 30th commemorative papers award from the Information Processing Society Japan (IPSJ) in 1992. He also received the Multi-Media Grand Prix Technology Award from the Multi-Media Contents Association Japan in 1998, the Yamashita Memorial Research Award from IPSJ in 1999, and the Japan Inter-Design Award in 2003. 
In 2007, he was elected to the ACM SIGCHI Academy.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Insik-Shin.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Insik Shin<\/strong><\/p>\n<p>KAIST<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" 
data-wp-context='{\"id\":\"accordion-content-462\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-462\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-461\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-461\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-462\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Insik Shin is a professor in the School of Computing and Chief Professor of the Graduate School of Information Security at KAIST, Korea. He received a Ph.D. degree from the University of Pennsylvania. His research interests include real-time embedded systems, systems security, mobile computing, and cyber-physical systems. He serves on the program committees of top international conferences, including RTSS, RTAS, and ECRTS. 
He is a recipient of several best (student) paper awards, including at MobiCom \u201919, RTSS \u201912, RTAS \u201912, and RTSS \u201903, as well as the KAIST Excellence Award and the Naver Young Faculty Award.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jun-Takamatsu.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Jun Takamatsu<\/strong><\/p>\n<p>Nara Institute of Science and Technology<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-464\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-464\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-463\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-463\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-464\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Jun Takamatsu received a Ph.D. degree in Computer Science from the University of Tokyo, Japan, in 2004. From 2004 to 2008, he was with the Institute of Industrial Science, the University of Tokyo. In 2007, he was a visiting researcher at Microsoft Research Asia. In 2008, he joined Nara Institute of Science and Technology, Japan, as an associate professor, where he remains. He was also with Carnegie Mellon University as a visitor in 2012 and 2013 and with Microsoft as a visiting scientist in 2018. 
His research interests are in robotics including learning-from-observation, task\/motion planning, and feasible motion analysis, 3D shape modeling and analysis, and physics-based vision.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Mingkui-Tan.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Mingkui Tan<\/strong><\/p>\n<p>South China University of Technology<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-466\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-466\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-465\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-465\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-466\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr. Mingkui Tan is currently a professor with the School of Software Engineering at South China University of Technology, China. He received his bachelor&#8217;s degree in Environmental Science and Engineering in 2006 and his master&#8217;s degree in Control Science and Engineering in 2009, both from Hunan University, Changsha, China. He received the Ph.D. degree in Computer Science from Nanyang Technological University, Singapore, in 2014. From 2014 to 2016, he worked as a Senior Research Associate on machine learning and computer vision in the School of Computer Science, University of Adelaide, Australia. His research interests include machine learning, sparse analysis, deep learning, and large-scale optimization. 
He has published about 70 research papers in top-tier conferences such as NeurIPS, ICML and KDD and international peer-reviewed journals such as TNNLS, JMLR and TIP.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2016\/03\/avatar_user__1459357947-177x180.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Xin Tong<\/strong><\/p>\n<p>Microsoft Research<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-468\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-468\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-467\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-467\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-468\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>I am now a principal researcher in the Internet Graphics Group of Microsoft Research Asia. I obtained my Ph.D. degree in Computer Graphics from Tsinghua University in 1999. My Ph.D. thesis was about hardware-assisted volume rendering. I received my B.S. and M.S. degrees in Computer Science from Zhejiang University in 1993 and 1996, respectively.<\/p>\n<p>My research interests include appearance modeling and rendering, texture synthesis, and image-based modeling and rendering. Specifically, my research concentrates on studying the underlying principles of light-material interaction and light transport, and on developing efficient methods for appearance modeling and rendering. 
I am also interested in performance capturing and facial animation.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Hongzhi-Wang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Hongzhi Wang<\/strong><\/p>\n<p>Harbin Institute of Technology<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul 
class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-470\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-470\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-469\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-469\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-470\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Hongzhi Wang is a professor, Ph.D. supervisor, and Vice Dean of the Honors School of Harbin Institute of Technology. He is the secretary general of ACM SIGMOD China, a CCF outstanding member, and a member of the CCF databases and big data committee. His research fields include big data management and analysis, databases, and data quality. He was a \u201cstarring track\u201d visiting professor at MSRA. He has been PI for more than 10 projects, including an NSFC key project and other NSFC projects. He also serves as a member of the ACM Data Science Task Force. His publications include over 200 papers in venues such as VLDB, SIGMOD, and SIGIR, as well as 4 books. His papers have been cited more than 1000 times. 
His personal website is http:\/\/homepage.hit.edu.cn\/wang.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Liwei-Wang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Liwei Wang<\/strong><\/p>\n<p>Peking University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li 
class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-472\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-472\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-471\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-471\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-472\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Liwei Wang is a professor in the School of Electronics Engineering and Computer Science, Peking University, a researcher in the Beijing Institute of Big Data Research, and an adjunct professor in the Institute for Interdisciplinary Information Science, Tsinghua University. He was recognized by IEEE Intelligent Systems as one of AI\u2019s 10 to Watch in 2010, the first Asian scholar so honored since the establishment of the award. He received the NSFC excellent young researcher grant in 2012. 
He was also supported by the Program for New Century Excellent Talents in University from the Ministry of Education.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Hiroki-Watanabe.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Hiroki Watanabe<\/strong><\/p>\n<p>Hokkaido University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-474\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-474\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-473\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-473\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-474\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Hiroki Watanabe is an assistant professor at Graduate School of Information Science and Technology, Hokkaido University, Japan. He received B. Eng. and M. Eng. and Ph.D. degrees from Kobe University in 2012, 2014, and 2017, respectively. 
He is working on wearable computing and ubiquitous computing.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Yonggang-Wen.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Yonggang Wen<\/strong><\/p>\n<p>Nanyang Technological University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul 
class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-476\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-476\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-475\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-475\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-476\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr. Yonggang Wen is a professor in the School of Computer Science and Engineering (SCSE) at Nanyang Technological University (NTU), Singapore. He also serves as the Associate Dean (Research) at the College of Engineering, and the Director of Nanyang Technopreneurship Centre at NTU. He received his PhD degree in Electrical Engineering and Computer Science (minor in Western Literature) from Massachusetts Institute of Technology (MIT), Cambridge, USA, in 2007.<\/p>\n<p>Dr. Wen has worked extensively in learning-based system prototyping and performance optimization for large-scale networked computer systems. In particular, his work in Multi-Screen Cloud Social TV has been featured by global media (more than 1600 news articles from over 29 countries) and received the 2013 ASEAN ICT Awards (Gold Medal). His work on Cloud3DView, as the only entry from academia, won the 2016 ASEAN ICT Awards (Gold Medal) and the 2015 Datacentre Dynamics Awards \u2013 APAC (the \u2018Oscar\u2019 of the data centre industry). 
He is a co-recipient of the 2015 IEEE Multimedia Best Paper Award, and a co-recipient of Best Paper Awards at 2016 IEEE Globecom, the 2016 IEEE Infocom MuSIC Workshop, 2015 EAI\/ICST Chinacom, 2014 IEEE WCSP, 2013 IEEE Globecom, and 2012 IEEE EUC. He was the sole winner of the 2016 Nanyang Awards in Entrepreneurship and Innovation at NTU, and received the 2016 IEEE ComSoc MMTC Distinguished Leadership Award. He serves on the editorial boards of ACM Transactions on Multimedia Computing, Communications and Applications, IEEE Transactions on Circuits and Systems for Video Technology, IEEE Wireless Communications Magazine, IEEE Communications Surveys &amp; Tutorials, IEEE Transactions on Multimedia, IEEE Transactions on Signal and Information Processing over Networks, IEEE Access Journal, and Elsevier Ad Hoc Networks, and was elected Chair of the IEEE ComSoc Multimedia Communication Technical Committee (2014-2016). His research interests include cloud computing, blockchain, green data centres, distributed machine learning, big data analytics, multimedia networking, and mobile computing.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Wenfei-Wu-New.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Wenfei Wu<\/strong><\/p>\n<p>Tsinghua University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-478\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-478\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-477\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-477\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-478\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Wenfei Wu is an assistant professor in the Institute for Interdisciplinary Information Sciences (IIIS) at Tsinghua University. He obtained his Ph.D. from the CS department at the University of Wisconsin-Madison in 2015. 
Dr. Wu&#8217;s research interests are in networked systems, including architecture design, data plane optimization, and network management optimization. He was awarded the best student paper in SoCC&#8217;13. Currently, Dr. Wu is working on model-centric DevOps for network functions, in-network computation for distributed systems (including distributed neural networks and big data systems), and secure network protocol design.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Yingcai-Wu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Yingcai Wu<\/strong><\/p>\n<p>Zhejiang University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-480\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-480\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-479\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-479\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-480\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Yingcai Wu is a National Youth-1000 scholar and a ZJU100 Young Professor at the State Key Lab of CAD &amp; CG, College of Computer Science and Technology, Zhejiang University. He obtained his Ph.D. degree in Computer Science from the Hong Kong University of Science and Technology (HKUST). Prior to his current position, he was a researcher at Microsoft Research Asia, Beijing, China, from 2012 to 2015, and a postdoctoral researcher at the University of California, Davis, from 2010 to 2012. He was a paper co-chair of IEEE Pacific Visualization 2017 and ChinaVis 2016-2017. His main research interests are in visual analytics and human-computer interaction, focusing on sports analytics, urban computing, and social media analysis. 
He has published more than 50 refereed papers, including 25 IEEE Transactions on Visualization and Computer Graphics (TVCG) papers. His three papers have been awarded Honorable Mention at IEEE VIS (SciVis) 2009, IEEE VIS (VAST) 2014, and IEEE PacificVis 2016. For more information, visit www.ycwu.org<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Hiroaki-Yamane.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Hiroaki Yamane<\/strong><\/p>\n<p>RIKEN AIP &amp; The University of Tokyo<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-482\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-482\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-481\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-481\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-482\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Hiroaki Yamane is a post-doctoral researcher at RIKEN AIP and a visiting researcher at the University of Tokyo. He completed his PhD at Keio University, where he proposed slogan-generating systems. After completing his PhD, he worked on brain decoding, and he is currently building machine intelligence for medical engineering at RIKEN AIP. Reflecting his strong interest in human intelligence, sensitivity, and health, his research interests include word embeddings for commonsense knowledge, sentiment analysis, sentence generation, and domain adaptation. 
He is more broadly interested in multidisciplinary areas including natural language processing, computer vision, cognitive science and neuroscience, and AI applications to medicine.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Rui-Yan.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Rui Yan<\/strong><\/p>\n<p>Peking University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-484\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-484\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-483\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-483\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-484\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr. Rui Yan is an assistant professor at Peking University and an adjunct professor at Central China Normal University and Central University of Finance and Economics; he was formerly a Senior Researcher at Baidu Inc. He has investigated several open-domain conversational systems and dialogue systems in vertical domains. To date, he has published more than 100 papers in highly competitive peer-reviewed venues. 
He serves as a (senior) program committee member of several top-tier venues (such as KDD, SIGIR, ACL, WWW, IJCAI, AAAI, CIKM, and EMNLP, etc.).<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Chuck-Yoo.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Chuck Yoo<\/strong><\/p>\n<p>Korea University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-486\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-486\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-485\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-485\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-486\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Chuck Yoo received his B.S. degree from Seoul National University in 1982, and M.S. and Ph.D. degrees from the University of Michigan, Ann Arbor, Michigan, in 1986 and 1990, respectively. From 1990 to 1995, he was with Sun Microsystems, Mountain View, California, working on Sun\u2019s operating systems. In 1995, he joined the computer science department of Korea University, and he served as dean of the College of Informatics for five years, until January 2018.<\/p>\n<p>He has been working on virtualization, starting with hypervisors for mobile phones, and continuing with virtualized automotive platforms, integrated SLAs (service level agreements) for clouds, and network virtualization, including virtual routers and SDN. He hosted Xen Summit in Seoul in 2011 and has served on the program committees of various conferences. 
In addition to publishing numerous papers, his research has influenced global industry leaders such as Samsung and LG, inspiring and enhancing their products.<\/p>\n<p>Recently, he has been working with the College of Medicine on precision medicine, and with the College of Law on new and revised legislative bills for the fourth industrial revolution.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Sung-eui-Yoon.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Sung-eui Yoon<\/strong><\/p>\n<p>KAIST<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-488\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-488\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-487\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-487\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-488\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Sung-Eui Yoon is a professor at Korea Advanced Institute of Science and Technology (KAIST). He received the B.S. and M.S. degrees in computer science from Seoul National University in 1999 and 2001, respectively. He received his Ph.D. degree in computer science from the University of North Carolina at Chapel Hill in 2005. He was a postdoctoral scholar at Lawrence Livermore National Laboratory, USA. His research interests include graphics, vision, and robotics. He has published about 100 technical papers and has given numerous tutorials on ray tracing, collision detection, and image search at premier conferences such as ACM SIGGRAPH, IEEE Visualization, CVPR, and ICRA. He served as conference co-chair and papers co-chair for ACM I3D 2012 and 2013, respectively. 
In 2008, he published a monograph on real-time massive model rendering with three other co-authors. In 2018, he also published an online book on rendering. Some of his papers have received a test-of-time award, a distinguished paper award, and invitations to IEEE Transactions on Visualization and Computer Graphics. He is currently a senior member of both IEEE and ACM.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Masatoshi-Yoshikawa.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Masatoshi Yoshikawa<\/strong><\/p>\n<p>Kyoto University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-490\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-490\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-489\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-489\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-490\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Masatoshi Yoshikawa received the B.E., M.E., and Ph.D. degrees from the Department of Information Science, Kyoto University, in 1980, 1982, and 1985, respectively. In 1985, he joined The Institute for Computer Sciences, Kyoto Sangyo University, as an Assistant Professor. From April 1989 to March 1990, he was a Visiting Scientist at the Computer Science Department of the University of Southern California (USC). In 1993, he joined Nara Institute of Science and Technology as an Associate Professor in the Graduate School of Information Science. From April 1996 to January 1997, he was a Visiting Associate Professor at the Department of Computer Science, University of Waterloo. From June 2002 to March 2006, he served as a professor at Nagoya University. 
Since April 2006, he has been a professor in the Graduate School of Informatics, Kyoto University.<\/p>\n<p>One of his current research topics is the theory and practice of privacy protection. As basic research, he investigated the potential privacy loss of a traditional Differential Privacy (DP) mechanism under temporal correlations. He is also interested in the personal data market; in particular, he is studying a mechanism for pricing and selling personal data perturbed by DP.<\/p>\n<p>He was a General Co-Chair of the 6th IEEE International Conference on Big Data and Smart Computing (BigComp 2019). He is a Steering Committee member of the International Conference on Big Data and Smart Computing (BigComp), and is serving as a PC member of VLDB 2020 and ICDE 2020. He is a member of the IEEE ICDE Steering Committee, the Science Council of Japan (SCJ), ACM, IPSJ and IEICE.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Huanjing-Yue.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Huanjing Yue<\/strong><\/p>\n<p>Tianjin University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-492\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-492\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-491\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-491\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-492\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Huanjing Yue received the B.S. and Ph.D. degrees from Tianjin University, Tianjin, China, in 2010 and 2015, respectively. She was an Intern with Microsoft Research Asia from 2011 to 2012, and from 2013 to 2015. 
She visited the Video Processing Laboratory, University of California at San Diego, from 2016 to 2017. She is currently an Associate Professor with the School of Electrical and Information Engineering, Tianjin University. Her current research interests include image processing and computer vision. She received the Microsoft Research Asia Fellowship Honor in 2013 and was selected into the Elite Scholar Program of Tianjin University in 2017.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Lijun-Zhang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Lijun Zhang<\/strong><\/p>\n<p>Nanjing University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-494\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-494\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-493\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-493\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-494\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Lijun Zhang received the B.S. and Ph.D. degrees in Software Engineering and Computer Science from Zhejiang University, China, in 2007 and 2012, respectively. He is currently an associate professor in the Department of Computer Science and Technology, Nanjing University, China. Prior to joining Nanjing University, he was a postdoctoral researcher at the Department of Computer Science and Engineering, Michigan State University, USA. His research interests include machine learning and optimization. He has published 80 academic papers, most of which appeared in prestigious conferences and journals such as ICML, NeurIPS, COLT and JMLR. 
He received the DAMO Academy Young Fellow award from Alibaba and the AAAI-12 Outstanding Paper Award.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Min-Zhang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Min Zhang<\/strong><\/p>\n<p>Tsinghua University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul 
class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-496\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-496\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-495\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-495\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-496\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr. Min Zhang is a tenured associate professor in the Dept. of Computer Science &amp; Technology, Tsinghua University, specializing in Web search, recommendation, and user modeling. She is the vice director of the State Key Lab. of Intelligent Technology &amp; Systems and the executive director of the Tsinghua-MSRA Lab on Media and Search. She also serves as an ACM SIGIR Executive Committee member, an associate editor for ACM Transactions on Information Systems (TOIS), Short Paper co-Chair of SIGIR 2018, Program co-Chair of WSDM 2017, etc. She has published more than 100 papers at top-tier conferences, with 4100+ citations. She was awarded the Beijing Science and Technology Award (First Prize), among other honors. She also holds 12 patents. 
She has also collaborated with many international and domestic enterprises, including Microsoft, Toshiba, Samsung, Sogou, WeChat, Zhihu and JD.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Tianzhu-Zhang.png\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Tianzhu Zhang<\/strong><\/p>\n<p>University of Science and Technology of China<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-498\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-498\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-497\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-497\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-498\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Tianzhu Zhang is currently a Professor at the Department of Automation, School of Information Science and Technology, University of Science and Technology of China. His current research interests include pattern recognition, computer vision, multimedia computing, and machine learning. He has authored or co-authored over 80 journal and conference papers in these areas, including over 60 IEEE\/ACM Transactions papers (TPAMI\/IJCV\/TIP) and top-tier conference papers (ICCV\/CVPR\/ACM MM). According to Google Scholar, his papers have been cited more than 4900 times. His work has been recognized by the 2017 China Multimedia Conference Best Paper Award and the 2016 ACM Multimedia Conference Best Paper Award (CCF-A). 
He received the Chinese Academy of Sciences President Award of Excellence in 2011, the Excellent Doctoral Dissertation award of the Chinese Academy of Sciences in 2012, membership in the Youth Innovation Promotion Association of CAS in 2018, and the Natural Science Award (First Prize) of the Chinese Institute of Electronics in 2018. He has served or serves as an Area Chair for CVPR 2020, ICCV 2019, ACM MM 2019, WACV 2018, ICPR 2018, and MVA 2017, and as an Associate Editor for IEEE T-CSVT and Neurocomputing. He received outstanding reviewer awards from MMSJ, ECCV 2016 and CVPR 2018.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Yu-Zhang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Yu Zhang<\/strong><\/p>\n<p>University of Science &amp; Technology of China<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link 
m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-500\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-500\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-499\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-499\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-500\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Yu Zhang is an associate professor in the School of Computer Science &amp; Technology, University of Science and Technology of China (USTC). She received her Ph.D. from USTC in January 2005. 
Her current research interests include programming languages and systems for emerging AI applications, as well as quantum software.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Zhou-Zhao.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Zhou Zhao<\/strong><\/p>\n<p>Zhejiang University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t \t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-502\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-502\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-501\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-501\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-502\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Zhou Zhao received his Ph.D. from the Hong Kong University of Science and Technology in 2015. He subsequently joined Zhejiang University, where he is an associate professor and doctoral supervisor. Zhao\u2019s main research interests are natural language processing and the research and development of key multimedia technologies. Zhao is a member of the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers (IEEE), and the China Computer Federation (CCF). In addition, he has published more than sixty papers at top international conferences such as NIPS, ICLR and ICML. 
Zhao was awarded the Innovation Award of the Information Department of Zhejiang University and the title of Outstanding Youth in Zhejiang.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Wei-Shi-Zheng.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><br \/>\n<strong>Wei-Shi Zheng<\/strong><\/p>\n<p>Sun Yat-sen University<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse 
all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-504\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-504\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-503\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBio\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-503\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-504\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Dr. Wei-Shi Zheng is a full Professor at Sun Yat-sen University. He received the PhD degree in Applied Mathematics from Sun Yat-sen University in 2008. He has published more than 100 papers, including more than 80 publications in leading journals (TPAMI, TNN\/TNNLS, TIP, TSMC-B, PR) and top conferences (ICCV, CVPR, IJCAI, AAAI). He has co-organized four tutorial presentations at ACCV 2012, ICPR 2012, ICCV 2013 and CVPR 2015. His research interests include person\/object association and activity understanding in visual surveillance, and related large-scale machine learning algorithms. In particular, Dr. Zheng has been actively researching person re-identification over the last five years. He reviews extensively for many journals and conferences, and was recognized as an outstanding reviewer at recent top conferences (ECCV 2016 &amp; CVPR 2017). He has participated in the Microsoft Research Asia Young Faculty Visiting Programme. He has served as a senior PC member\/area chair\/associate editor for AVSS 2012, ICPR 2018, IJCAI 2019\/2020, AAAI 2020 and BMVC 2018\/2019. 
He is an IEEE MSA TC member and an associate editor of Pattern Recognition. He is a recipient of the Excellent Young Scientists Fund of the National Natural Science Foundation of China and of the Royal Society-Newton Advanced Fellowship of the United Kingdom.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t \t\t\t\t\t<\/ul>\n\t<\/div>\n\t<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Technology Showcase\"} --><!-- wp:freeform --><h2>Technology Showcase by Microsoft Research Asia<\/h2>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-506\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-506\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-505\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tAutoSys: Learning-based approach for system optimization\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-505\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-506\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Mao Yang, Microsoft Research<\/p>\n<p>As computer systems and networking get increasingly complicated, optimizing them manually with explicit rules and heuristics becomes harder than ever before, sometimes impossible. At Microsoft Research Asia, our AutoSys project applies learning to large-scale system performance tuning. The AutoSys framework (1) defines interfaces to expose system features for learning, (2) introduces monitors to detect learning-induced failures, and (3) runs resource management to support heterogeneous requirements of learning-related tasks. Based on AutoSys, we have built a tool that supports many crucial system scenarios within Microsoft. 
These scenarios include multimedia search for Bing (e.g., tail latency reduced by up to ~40%, and capacity increased by up to ~30%), job scheduling for Bing Ads (e.g., tail latency reduced by up to ~13%), and so on.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-508\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-508\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-507\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDual Learning and Its Applications to Machine Translation and Speech Synthesis\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-507\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-508\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Yingce Xia and Xu Tan, Microsoft Research<\/p>\n<p>Many AI tasks emerge in dual forms, e.g., English-to-French translation vs. French-to-English translation, speech recognition vs. speech synthesis, question answering vs. question generation, and image classification vs. image generation. Dual learning is a new learning framework that leverages the primal-dual structure of AI tasks to obtain effective feedback or regularization signals to enhance the learning\/inference process. 
In this demo, we will show two applications of dual learning: machine translation and speech synthesis.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-510\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-510\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-509\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFluency Boost Learning and Inference for Neural Grammar Checker\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-509\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-510\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Tao Ge, Microsoft Research<\/p>\n<p>Neural sequence-to-sequence (seq2seq) approaches have proven to be successful in grammatical error correction (GEC). Based on the seq2seq framework, we propose a novel fluency boost learning and inference mechanism. Fluency boost learning generates diverse error-corrected sentence pairs during training, enabling the error correction model to learn how to improve a sentence&#8217;s fluency from more instances, while fluency boost inference allows the model to correct a sentence incrementally with multiple inference steps. 
Combining fluency boost learning and inference with conventional seq2seq models, our approach achieves state-of-the-art performance on GEC benchmarks.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-512\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-512\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-511\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tOneOCR For Digital Transformation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-511\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-512\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Qiang Huo, Microsoft Research<\/p>\n<p>At Microsoft, we have been developing a new-generation OCR engine (aka OneOCR), which can detect both printed and handwritten text in an image captured by a camera or mobile phone, and recognize the detected text for follow-up actions. Our unified OneOCR engine can recognize mixed printed and handwritten English text lines with arbitrary orientations (even flipped), significantly outperforming other leading industrial OCR engines across a wide range of application scenarios. 
Empowered by OneOCR engine, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/docs.microsoft.com\/en-us\/azure\/cognitive-services\/computer-vision\/concept-recognizing-text#read-api\">Computer Vision Read<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> capability and <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/azure.microsoft.com\/en-us\/services\/search\/\">Cognitive Search capability of Azure Search<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> are generally available, and a <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/azure.microsoft.com\/en-us\/services\/cognitive-services\/form-recognizer\/\">Form Recognizer<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> with <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/docs.microsoft.com\/en-us\/azure\/cognitive-services\/form-recognizer\/quickstarts\/python-receipts\">Receipt Understanding<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> capability is available for preview, all in Azure Cognitive Services, which can power enterprise workflows and Robotic Process Automation (RPA) to spur digital transformation. 
In this presentation, I will demonstrate the capabilities of Microsoft\u2019s latest OneOCR engine, highlight its core component technologies, and explain the roadmap ahead.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-514\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-514\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-513\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tSpreadsheet Intelligence for Ideas in Excel\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-513\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-514\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter:<\/strong> Shi Han, Microsoft Research<\/p>\n<p>Ideas in Excel aims at such one-click intelligence: when a user clicks the Ideas button on the Home tab of Excel, the intelligent service empowers the user to understand his or her data through automatic recommendation of visual summaries and interesting patterns. The user can then insert these recommendations into the spreadsheet to support further analysis, or use them directly as analysis results. Enabling such one-click intelligence poses underlying technical challenges. The Data, Knowledge and Intelligence group at Microsoft Research Asia conducts long-term research on spreadsheet intelligence and automated insights, and through close collaboration with Excel product teams, we transferred a suite of technologies and shipped Ideas in Excel together. 
In this demo presentation, we will show this intelligent feature and introduce corresponding technologies.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<\/p>\n<h2>Technology Showcase by Academic Collaborators<\/h2>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-516\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-516\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-515\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\t3D Caricature Generation from Real Face Images\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-515\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-516\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Yucheol Jung, Wonjong Jang, and Seungyong Lee, POSTECH<\/p>\n<p>A 3D caricature can be defined as a 3D mesh with cartoon-style shape exaggeration of a face. We present a novel deep learning based framework that generates a 3D caricature for a given real face image. Our approach exploits 3D geometry information in the caricature generation process and produces more convincing 3D shape exaggerations than 2D caricature-based approaches.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-518\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-518\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-517\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tA Co-Training Method towards Machine Reading Comprehension\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-517\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-518\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: 
<\/strong> Minlie Huang, Tsinghua University<\/p>\n<p>A Co-Training Method towards Machine Reading Comprehension<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-520\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-520\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-519\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tA Method for Controlling Human Hearing by Editing the Frequency of the Sound in Real Time\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-519\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-520\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Hiroki Watanabe, Hokkaido University<\/p>\n<p>A Method for Controlling Human Hearing by Editing the Frequency of the Sound in Real Time<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-522\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-522\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-521\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tAbstractive Summarization of Reddit Posts with Multi-level Memory 
Networks\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-521\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-522\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Gunhee Kim, Seoul National University<\/p>\n<p>We address the problem of abstractive summarization in two directions: proposing a novel dataset and a new model. First, we collect the Reddit TIFU dataset, consisting of 120K posts from the online discussion forum Reddit. We use these informal crowd-generated posts as the text source, in contrast with existing datasets that mostly draw on formal documents such as news articles. Thus, our dataset suffers less from biases in which key sentences are located at the beginning of the text and favorable summary candidates already appear in the text in similar forms. Second, we propose a novel abstractive summarization model named multi-level memory networks (MMN), equipped with multi-level memory to store the information of text from different levels of abstraction. 
With quantitative evaluation and user studies via Amazon Mechanical Turk, we show that the Reddit TIFU dataset is highly abstractive and that the MMN outperforms state-of-the-art summarization models.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-524\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-524\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-523\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tAdaptive Graph Structure Learning for Image Sentence Matching\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-523\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-524\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> TianZhu Zhang, University of Science and Technology of China<\/p>\n<p>We adapt the attention mechanism to represent visual and semantic elements.<\/p>\n<p>We adaptively construct graphs and update the features for objects and words, making good use of both intra-modality and inter-modality relationships.<\/p>\n<p>We capture structure information across different graphs by imposing a constraint on each semantic element, forcing it to align with its corresponding visual element.<\/p>\n<p>The proposed model achieves promising results on the Flickr30K and MS-COCO datasets.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" 
data-wp-context='{\"id\":\"accordion-content-526\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-526\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-525\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tAdversarial Attacks and Defenses in Deep Learning\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-525\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-526\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Yinpeng Dong, Tsinghua University<\/p>\n<p>Adversarial Attacks and Defenses in Deep Learning<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-528\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-528\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-527\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tAI+VIS: Automated Visualization Production\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-527\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-528\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Huamin Qu, The Hong Kong University of Science and Technology<\/p>\n<p>Existing visualization designs are often based on manual design and 
require substantial human effort. How can we apply deep learning techniques to automatically generate visualizations? We report two recent advances in this direction:<\/p>\n<p>Automated Graph Drawing: We propose a graph-LSTM-based model to directly generate graph drawings with desirable visual properties similar to the training drawings, without requiring users to tune algorithm-specific parameters.<\/p>\n<p>Automated Design of Timeline Infographics: We contribute an end-to-end approach to automatically extract an extensible template from a bitmap timeline image. The output can be used to generate new timelines with updated data.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-530\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-530\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-529\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBlockchain-Enabled Incentive and Trading Mechanism Design for AIoT Service Platform\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-529\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-530\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Ai-Chun Pang, National Taiwan University<\/p>\n<p>We use blockchain technology to ensure data effectiveness, preserving properties such as immutability and credibility throughout the transaction process.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new 
tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-532\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-532\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-531\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tBypassing Defense Methods for Neural Network Backdoor\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-531\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-532\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Sangwoo Ji and Jong Kim, POSTECH<\/p>\n<p>Bin Zhu, Microsoft Research<\/p>\n<p>We bypass two backdoor detection methods: suspicious data instance detection and backdoor trigger detection.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-534\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-534\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-533\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCan Kernel Networking Become Fast Enough?\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-533\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-534\"\n\t\t>\n\t\t\t<div 
class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Chuck Yoo, Korea University<\/p>\n<ul>\n<li>Existing network optimizations suffer from poor stability, low resource efficiency, and a need for API changes<\/li>\n<li>Solution: Kernel-based optimization for high-performance networking<\/li>\n<li>L3 forwarding achieves performance similar to DPDK<\/li>\n<li>A virtual switch achieves 67.5% of the performance of DPDK-OVS with three times greater resource efficiency<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-536\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-536\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-535\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCDPN: Coordinates-Based Disentangled Pose Network for Real-Time RGB-Based 6-DoF Object Pose Estimation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-535\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-536\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Xiangyang Ji, Tsinghua University<\/p>\n<p>CDPN: Coordinates-Based Disentangled Pose Network for Real-Time RGB-Based 6-DoF Object Pose Estimation<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-538\"}' data-wp-init=\"callbacks.init\">\n\t\t<div 
class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-538\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-537\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCommonsense Reasoning with Structured Knowledge\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-537\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-538\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Hongming Zhang, The Hong Kong University of Science and Technology<\/p>\n<p>Understanding human language requires complex commonsense knowledge. However, existing large-scale knowledge graphs mainly focus on knowledge about entities while ignoring commonsense knowledge about activities, states, or events, which are used to describe how entities or things act in the real world. To fill this gap, we develop ASER (activities, states, events, and their relations), a large-scale eventuality knowledge graph extracted from more than 11 billion tokens of unstructured text. ASER contains 15 relation types belonging to five categories, 194 million unique eventualities, and 64 million unique edges among them. 
Both human and extrinsic evaluations demonstrate the quality and effectiveness of ASER.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-540\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-540\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-539\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tComplex Correlation Modeling and Analysis Framework for Incomplete, Multimodal and Dynamic Data\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-539\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-540\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Zizhao Zhang, Tsinghua University<\/p>\n<p>A well-constructed hypergraph structure can represent data correlations accurately, leading to better performance. How can we construct a good hypergraph to fit complex data?<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-542\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-542\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-541\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tConcordia: Distributed Shared Memory with In-Network Cache 
Coherence\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-541\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-542\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Youyou Lu, Tsinghua University<\/p>\n<p>Concordia divides coherence responsibility between the switch and servers. The switch serializes conflicting requests and forwards them to the correct destinations via a lock-check-forward pipeline. Servers execute requester-driven coherence control to reach coherence and transition states.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-544\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-544\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-543\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tContinual Learning with Dynamic Network Expansion\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-543\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-544\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Sung Ju Hwang, KAIST<\/p>\n<ul>\n<li>Perform effective knowledge transfer from earlier tasks to later tasks.<\/li>\n<li>Prevent catastrophic forgetting, where the earlier task performance gets negatively affected by semantic drift of the representations as the model adapts to later tasks.<\/li>\n<li>Obtain maximal performance with minimal increase in the 
network capacity.<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-546\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-546\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-545\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCounting Hypergraph Colorings in the Local Lemma Regime\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-545\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-546\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Chao Liao, Shanghai Jiao Tong University<\/p>\n<p>Counting Hypergraph Colorings in the Local Lemma Regime<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-548\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-548\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-547\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCross-Lingual Visual Grounding and Multimodal Machine 
Translation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-547\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-548\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Chenhui Chu, Osaka University<\/p>\n<p>Cross-Lingual Visual Grounding and Multimodal Machine Translation<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-550\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-550\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-549\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCuriosity-Bottleneck: Exploration by Distilling Task-Specific Novelty\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-549\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-550\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Gunhee Kim, Seoul National University<\/p>\n<p>Exploration based on state novelty has brought great success in challenging reinforcement learning problems with sparse rewards. However, existing novelty-based strategies become inefficient in real-world problems where observations contain not only the task-dependent state novelty of interest but also task-irrelevant information that should be ignored. We introduce an information-theoretic exploration strategy named Curiosity-Bottleneck that distills task-relevant information from observation. 
Based on the information bottleneck principle, our exploration bonus is quantified as the compressiveness of observation with respect to the learned representation of a compressive value network. With extensive experiments on static image classification, grid-world and three hard-exploration Atari games, we show that Curiosity-Bottleneck learns an effective exploration strategy by robustly measuring the state novelty in distractive environments where state-of-the-art exploration methods often degenerate.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-552\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-552\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-551\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDeep Reinforcement Learning for the Transfer from Simulation to the Real World with Uncertainties for AI Curling Robot System\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-551\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-552\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Dong-Ok Won and Seong-Whan Lee, Korea University<\/p>\n<p>Recently, deep reinforcement learning (DRL) has even enabled real world applications such as robotics. Here we teach a robot to succeed in curling (Olympic discipline), which is a highly complex real-world application where a robot needs to carefully learn to play the game on the slippery ice sheet in order to compete well against human opponents. 
This scenario encompasses fundamental challenges: uncertainty, non-stationarity, infinite state spaces and, most importantly, scarce data. One fundamental objective of this study is thus to better understand and model the transfer from simulation to real-world scenarios with uncertainty. We demonstrate our proposed framework and show videos, experiments and statistics about Curly, our AI curling robot, being tested on a real curling ice sheet. Curly performed well both in classical game situations and when interacting with human opponents.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-554\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-554\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-553\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDeep Text Generation: Conversation and Application\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-553\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-554\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Rui Yan, Peking University<\/p>\n<p>Deep Text Generation: Conversation and Application<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-556\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-556\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-555\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDevelopment of 3D capsule endoscopic system\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-555\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-556\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Ryo Furukawa, Hiroshima City University<\/p>\n<p>Development of 3D capsule endoscopic system<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-558\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-558\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-557\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDevelopment of automatic Labanotation estimation system from video using Deep Learning\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-557\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-558\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Hiroshi Kawasaki, Kyushu University<\/p>\n<p>Our project aims to study human representation and understand human motion using a vision-based approach, and to develop new applications.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new 
tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-560\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-560\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-559\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDissecting and Accelerating Neural Network via Graph Instrumentation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-559\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-560\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Jingwen Leng, Shanghai Jiao Tong University<\/p>\n<p>The proposed graph instrumentation framework can observe and modify neural networks using user-defined analysis code without changes in source code.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-562\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-562\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-561\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tDistant Supervised Domain-Specific Knowledge Base Construction and 
Population\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-561\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-562\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Lei Chen, The Hong Kong University of Science and Technology<\/p>\n<p>Our Goal in Domain-Specific KB Construction<\/p>\n<ul>\n<li>Entity Extraction, Entity Typing and Relation Extraction related to the target domain.<\/li>\n<li>Training data generation based on distant-supervision without human annotation.<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-564\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-564\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-563\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tEfficient and Effective Sparse DNNs with Bank-Balanced Sparsity\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-563\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-564\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Shijie Cao, Harbin Institute of Technology<\/p>\n<p>Efficient and Effective Sparse DNNs with Bank-Balanced Sparsity<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-566\"}' 
data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-566\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-565\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tEfficient Deep Neural Networks for Realistic Noise Removal\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-565\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-566\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Huanjing Yue, Tianjin University<\/p>\n<p>We propose an end-to-end noise estimation and removal network, where the estimated noise map is weighted concatenated with the noisy input to improve the denoising performance.<\/p>\n<p>The proposed noise estimation network takes advantage of the Bayer pattern prior of the noise maps, which not only improves the estimation accuracy but also reduces the memory cost.<\/p>\n<p>We propose a RSD block to fully take advantage of the spatial and channel correlations of realistic noise. 
The ablation study demonstrates the effectiveness of the proposed module.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-568\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-568\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-567\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tEmoji-Powered Representation Learning for Cross-Lingual Sentiment Analysis\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-567\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-568\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Zhenpeng Chen, Peking University<\/p>\n<p>Emoji-Powered Representation Learning for Cross-Lingual Sentiment Analysis<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-570\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-570\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-569\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tErebus: A Stealthier Partitioning Attack against Bitcoin Peer-to-Peer 
Network\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-569\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-570\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Muoi Tran, National University of Singapore<\/p>\n<p>We present the\u00a0Erebus\u00a0attack, which allows large malicious Internet Service Providers (ISPs) to isolate any targeted public Bitcoin nodes from the Bitcoin peer-to-peer network. The Erebus attack does\u00a0not\u00a0require routing manipulation (e.g., BGP hijacks) and hence it is\u00a0virtually undetectable\u00a0to any control-plane and even typical data-plane detectors.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-572\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-572\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-571\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tExplaining Word Embeddings via Disentangled Representations\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-571\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-572\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Shou-de Lin, National Taiwan University<\/p>\n<p>We propose transforming word embeddings into interpretable representations that disentangle explainable factors.<\/p>\n<p>Examples of factors: a) Topical factors: food, location, animal, etc. 
b) Part-of-Speech factors: noun, adj, verb, etc.<\/p>\n<p>We define and propose 4 desirable properties of our disentangled word vectors: a) Modularity, b) Compactness, c) Explicitness, d) Feature preservation<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-574\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-574\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-573\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFree-form Video Inpainting with 3D Gated Conv, TPD, and LGTSM\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-573\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-574\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Winston Hsu, National Taiwan University.<\/p>\n<p>Free-form Video Inpainting with 3D Gated Conv, TPD, and LGTSM<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-576\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-576\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-575\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFluid: A Blockchain based Framework for 
Crowdsourcing\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-575\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-576\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Lei Chen, The Hong Kong University of Science and Technology<\/p>\n<p>Fluid: A Blockchain based Framework for Crowdsourcing<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-578\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-578\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-577\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFLUID: Flexible User Interface Distribution for Ubiquitous Multi-device Interaction\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-577\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-578\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Insik Shin, KAIST<\/p>\n<p>Key idea: separation between app logic &amp; UI parts. 1) Distributing target UI objects to remote devices and rendering them. 2) Giving an illusion as if app logic and UI objects were in the same process.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-580\"}' data-wp-init=\"callbacks.init\">\n\t\t<div 
class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-580\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-579\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFuzzing with Interleaving Coverage for Multi-threading Program\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-579\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-580\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Youngjoo Ko and Jong Kim, POSTECH<\/p>\n<p>Bin Zhu, Microsoft Research<\/p>\n<p>Increase the performance of fuzzing to discover more bugs in multi-threading programs using interleaving coverage.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-582\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-582\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-581\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tGenerative Model-based Speech Enhancement for Speech Recognition\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-581\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-582\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Jinyoung Lee and Hong-Goo Kang, Yonsei University<\/p>\n<ul>\n<li>Remove ambient noise to improve automatic 
speech recognition performance<\/li>\n<li>Overcome the problems of conventional masking-based speech enhancement algorithms, e.g. speech signal distortion<\/li>\n<li>Propose a generative and adversarial model-based approach that effectively utilizes spectro-temporal characteristics of speech and noise components<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-584\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-584\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-583\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tGlobal-Local Temporal Representations For Video Person Re-Identification\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-583\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-584\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Shiliang Zhang, Peking University<\/p>\n<ul>\n<li>Propose Dilated Temporal Convolution (DTC) to learn short-term temporal cues<\/li>\n<li>Propose Temporal Self Attention (TSA) to learn the long-term temporal cues<\/li>\n<li>DTC and TSA learn complementary temporal features<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-586\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-586\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-585\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tGradient Descent Finds Global Minima of DNNs\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-585\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-586\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Liwei Wang, Peking University<\/p>\n<p>Gradient Descent Finds Global Minima of DNNs<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-588\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-588\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-587\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tGraph Neural Networks for 3D Face Anti-spoofing\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-587\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-588\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Wei Hu and Gusi Te, Peking University<\/p>\n<p>This project aims to explore emerging graph neural networks (GNNs) based on texture plus depth features to address the problem of 3D face anti-spoofing. Spoofing attacks are growing, presenting fake or copied facial evidence to obtain valid authentication. 
While anti-spoofing techniques using 2D facial data have matured, 3D face anti-spoofing has not been studied much, leaving advanced spoofing techniques such as 3D masking at large. Hence, we propose to address this problem, based on texture plus depth cues acquired from RGBD cameras, within the framework of GNNs.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-590\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-590\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-589\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tGraph-structured Knowledge Base Management and Applications\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-589\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-590\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Hongzhi Wang, Harbin Institute of Technology<\/p>\n<p>Graph-structured Knowledge Base Management and Applications<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-592\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-592\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-591\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tHome Location Selection with Reachability\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-591\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-592\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Yingcai Wu, Zhejiang University<\/p>\n<p>This study characterizes the problem of reachability-centric multi-criteria decision-making for choosing ideal homes. The system can also be adopted in other location selection scenarios, in which the reachability of locations is considered (e.g., selecting a location for a convenience store).<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-594\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-594\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-593\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tIdentifying Structures in Spreadsheets\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-593\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-594\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Wensheng Dou, Chinese Academy of Sciences<\/p>\n<p>Identifying Structures in Spreadsheets<\/p>\n<p><span 
id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-596\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-596\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-595\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tImage-to-Image Translation via Group-wise Deep Whitening-and-Coloring Transformation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-595\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-596\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Jaegul Choo, Korea University<\/p>\n<p>Recently, unsupervised exemplar-based image-to-image translation has accomplished substantial advancements. In order to transfer the information from an exemplar to an input image, existing methods often use a normalization technique, e.g., adaptive instance normalization, that controls the channel-wise statistics of an input activation map at a particular layer, such as the mean and the variance. Meanwhile, style transfer approaches similar task to image translation by nature, demonstrated superior performance by using the higher-order statistics such as covariance among channels in representing a style. However, applying this approach in image translation is computationally intensive and error-prone due to the expensive time complexity and its non-trivial backpropagation. In response, this paper proposes an end-to-end approach tailored for image translation that efficiently approximates this transformation with our novel regularization methods. 
We further extend our approach to a group-wise form for memory and time efficiency as well as image quality. Extensive qualitative and quantitative experiments demonstrate that our proposed method is fast, both in training and inference, and highly effective in reflecting the style of an exemplar.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-598\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-598\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-597\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tImmersive Biology - An Interactive Microscope for Informal Biology Education\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-597\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-598\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Jaewoo Jung, Kyungwon Lee and Seung Ah Lee, Yonsei University<\/p>\n<p>We developed a new hybrid digital-biological system that provides interactive and immersive experiences between humans and biological objects for applications in life science education and research. 
The scope of this work includes:<\/p>\n<ul>\n<li>Construction of an automated optical stimulation microscope, which uses light to both image and interface with light-sensitive cells.<\/li>\n<li>Use of human interaction modalities to convert humans\u2019 natural input into stimuli for the microscopic biological objects.<\/li>\n<\/ul>\n<p>We expect that this platform will transform microscopes from a passive observation tool to an active interaction medium, assisting scientific research, life science education and clinical interventions.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-600\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-600\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-599\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tImproving Join Reorderability with Compensation Operators\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-599\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-600\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> TaiNing Wang and Chee-Yong Chan, National University of Singapore<\/p>\n<p>Improving Join Reorderability with Compensation Operators<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" 
data-wp-context='{\"id\":\"accordion-content-602\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-602\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-601\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tImproving the Performance of Video Analytics Using WiFi Signal\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-601\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-602\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Hai Truong, Rajesh Krishna Balan, Singapore Management University<\/p>\n<p>Automatic analysis of the behaviour of large groups of people is a key requirement for a large class of applications, such as crowd management, traffic control, and surveillance. For example, attributes such as the number of people, how they are distributed, which groups they belong to, and what trajectories they are taking can be used to optimize the layout of a mall to increase overall revenue. A common way to obtain these attributes is to use video camera feeds coupled with advanced video analytics solutions. However, relying solely on video feeds is challenging in high people-density areas, such as a typical mall in Asia, because the high people density significantly reduces the effectiveness of video analytics due to factors such as occlusion. In this work, we propose to combine video feeds with WiFi data to achieve better estimates of the number of people in the area and the trajectories of those people. In particular, we believe that our approach will combine the strengths of the two different sensors, WiFi and video, while reducing the weaknesses of each sensor. 
This work started fairly recently, and we will present our approach and results to date.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-604\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-604\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-603\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tIntelligent Action Analytics\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-603\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-604\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Jiaying Liu, Peking University<\/p>\n<p>Intelligent Action Analytics<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-606\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-606\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-605\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tInteractive Methods to Improve Data 
Quality\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-605\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-606\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Changjian Chen, Tsinghua University<\/p>\n<p>Interactive Methods to Improve Data Quality<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-608\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-608\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-607\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tInter-learner shadowing framework for comprehensibility-based assessment of learners&#039; speech\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-607\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-608\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Nobuaki MINEMATSU, University of Tokyo<\/p>\n<p>Inter-learner shadowing framework for comprehensibility-based assessment of learners&#8217; speech<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-610\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-610\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-609\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tIoTcube: An Open Platform for Feedback-based Protocol Fuzzing\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-609\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-610\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Heejo Lee, Korea University<\/p>\n<p>An open platform for feedback-based fuzzing that improves its testing performance using two sources of feedback: binary feedback and user feedback.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-612\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-612\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-611\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tLearning Multi-label Feature for Fine-Grained Food Recognition\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-611\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-612\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Xueming Qian, Xi&#8217;an Jiaotong University<\/p>\n<p>1. We proposed the Attention Fusion Network (AFN). 
It attends to discriminative food regions against unstructured backgrounds, and generates feature embeddings that are jointly aware of ingredients and food.<\/p>\n<p>2. We proposed the balance focal loss (BFL) to enhance the joint learning of ingredients and food and to optimize the feature representation for multi-label ingredients.<\/p>\n<p>3. The effectiveness is demonstrated through comparative experiments. In particular, the balance focal loss improves the Micro-F1, Macro-F1 and accuracy of ingredient recognition by 5.76%, 12.62% and 5.78%, respectively.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-614\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-614\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-613\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMAP Inference for Customized Determinantal Point Processes via Maximum Inner Product Search\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-613\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-614\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Insu Han, KAIST<\/p>\n<p>MAP Inference for Customized Determinantal Point Processes via Maximum Inner Product Search<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-616\"}' data-wp-init=\"callbacks.init\">\n\t\t<div 
class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-616\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-615\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMinimizing Network Footprint in Distributed Deep Learning\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-615\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-616\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Hong Xu, City University of Hong Kong<\/p>\n<p>Minimizing Network Footprint in Distributed Deep Learning<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-618\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-618\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-617\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMultilingual End-to-End Speech Translation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-617\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-618\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Hirofumi Inaguma, Kyoto University<\/p>\n<p>Directly translate source speech to target languages with a single sequence-to-sequence (S2S) model<\/p>\n<ul>\n<li>Many-to-many (M2M)<\/li>\n<li>One-to-many 
(O2M)<\/li>\n<\/ul>\n<p>Outperformed the bilingual end-to-end speech translation (E2E-ST) models<\/p>\n<p>Shared representations obtained from multilingual E2E-ST were more effective than those from the bilingual one for transfer learning to a very low-resource ST task: Mboshi-&gt;French (4.4h)<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-620\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-620\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-619\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMulti-marginal Wasserstein GAN\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-619\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-620\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Mingkui Tan, South China University of Technology<\/p>\n<ul>\n<li>We propose a novel MWGAN to optimize the multi-marginal distance among different domains.<\/li>\n<li>We define and analyze the generalization performance of MWGAN for the multiple domain translation task.<\/li>\n<li>Extensive experiments demonstrate the effectiveness of MWGAN on balanced and imbalanced translation tasks.<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-622\"}' data-wp-init=\"callbacks.init\">\n\t\t<div 
class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-622\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-621\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tNAT: Neural Architecture Transformer for Accurate and Compact Architectures\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-621\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-622\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Mingkui Tan, South China University of Technology<\/p>\n<ul>\n<li>Propose a novel Neural Architecture Transformer (NAT) to optimize any arbitrary architecture.<\/li>\n<li>Cast the problem into a Markov Decision Process.<\/li>\n<li>Employ Graph Convolution Network to learn the policy.<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-624\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-624\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-623\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tNFD: Using Behavior Models to Develop Cross-Platform NFs\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-623\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-624\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: 
<\/strong> Wenfei Wu, Tsinghua University<\/p>\n<p>We propose a new NF development framework named NFD, which consists of an NF abstraction layer for developing NF behavior models and a compiler for adapting NF models to specific runtime environments.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-626\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-626\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-625\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tNon-factoid Question Answering for Text and Video\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-625\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-626\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Seung-won Hwang, Yonsei University<\/p>\n<p>Question Answering (QA) has mostly been studied in the factoid setting, where answers are concise facts. In contrast, we study non-factoid QA, which covers more realistic questions, such as how- or why-questions with long answers drawn from long texts or videos. This demo and poster address the following topics:<\/p>\n<ul>\n<li>Non-factoid QA for text, combining the complementary strengths of representation- and interaction-focused approaches [EMNLP19]. Extending this task to video brings both opportunities and challenges, arising from multimodality and the lack of pre-divided answer candidates (e.g. 
paragraph), which is our ongoing MSRA collaboration.<\/li>\n<li>Human-in-the-loop debugging for QA Demo [SIGIR19]<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-628\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-628\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-627\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tNPA: Neural News Recommendation with Personalized Attention\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-627\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-628\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Chuhan Wu, Tsinghua University<\/p>\n<ul>\n<li>Different users usually have different interests in news.<\/li>\n<li>Different users may click the same news article due to different interests.<\/li>\n<li>We need personalized news and user representation!<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-630\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-630\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-629\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tNumerical\/quantitative system for common 
sense natural language processing\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-629\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-630\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Hiroaki Yamane, The University of Tokyo<\/p>\n<p>We construct methods for converting contextual language to numerical variables for quantitative\/numerical common sense in natural language processing.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-632\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-632\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-631\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tOnline Convex Optimization in Non-stationary Environments\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-631\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-632\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Shiyin Lu, Nanjing University<\/p>\n<p>Online Convex Optimization in Non-stationary Environments<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-634\"}' data-wp-init=\"callbacks.init\">\n\t\t<div 
class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-634\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-633\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tOptimizing Quality of Experience (QoE) for Adaptive Bitrate Streaming via Deep Video Analytics\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-633\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-634\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Yonggang Wen, Nanyang Technological University<\/p>\n<p>QoE depends on multiple families of Influential Factors (IFs), which must be optimized jointly for the best user experience.<\/p>\n<p>How can we develop a unified and scalable framework to optimize QoE for multimedia communications in the presence of system dynamics?<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-636\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-636\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-635\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tParaphrasing and Simplification with Lean Vocabulary\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-635\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-636\"\n\t\t>\n\t\t\t<div 
class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Tadashi Nomoto, National Institute of Japanese Literature<\/p>\n<p>This work explores the impact of the subword representation on paraphrasing and text simplification. Experiments found that when combined with REINFORCE, the subword scheme boosted performance beyond the current state of the art both in paraphrasing and text simplification.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-638\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-638\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-637\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tPick-Carry-Place Household Tasks Using Labanotation for Learning-from-Observation Robots\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-637\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-638\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Jun Takamatsu, Nara Institute of Science and Technology<\/p>\n<p>Pick-Carry-Place Household Tasks Using Labanotation for Learning-from-Observation Robots<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-640\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-640\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-639\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tPredicting Future Instance Segmentation with Contextual Pyramid ConvLSTMs\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-639\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-640\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Wei-Shi Zheng, Sun Yat-sen University<\/p>\n<p>Predicting Future Instance Segmentation<\/p>\n<ul>\n<li>Given several frames in a video, this task is to predict future instance segmentation before the corresponding frames are observed.<\/li>\n<li>It is challenging due to the uncertainty in appearance variation caused by object moving, occlusion between objects, and viewpoint changing in videos.<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-642\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-642\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-641\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tProject Title: Secure and compact elliptic curve cryptosystems\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-641\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-642\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: 
<\/strong>Yaoan Jin and Atsuko Miyaji, Graduate School of Engineering, Osaka University<\/p>\n<p>Side-channel attacks are attacks based on information, such as timing and power consumption, gained from the implementation of a cryptosystem. Examples include:<\/p>\n<ul>\n<li>Simple Power Analysis (SPA)<\/li>\n<li>Safe Error Attack<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-644\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-644\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-643\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tPruning from Scratch\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-643\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-644\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Hang Su, Tsinghua University<\/p>\n<p>In this work, we find that pre-training an over-parameterized model is not necessary for obtaining an efficient pruned structure. 
We propose a novel network pruning pipeline which allows pruning from scratch.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-646\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-646\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-645\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tRecent Progress of Handwritten Mathematical Expression Recognition\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-645\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-646\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Jun Du, University of Science and Technology of China<\/p>\n<p>Recent Progress of Handwritten Mathematical Expression Recognition<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-648\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-648\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-647\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tRecurrent Temporal Aggregation Framework for Deep Video 
Inpainting\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-647\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-648\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Dahun Kim, KAIST<\/p>\n<ul>\n<li>To remove unwanted objects from a video<\/li>\n<li>Frame-by-frame image inpainting<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-650\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-650\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-649\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tRelational Knowledge Distillation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-649\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-650\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Wonpyo Park, Dongju Kim, and Minsu Cho, POSTECH<\/p>\n<p>Yan Lu, Microsoft Research<\/p>\n<p>Knowledge distillation aims at transferring knowledge acquired in one model (a teacher) to another model (a student) that is typically smaller. Previous approaches can be expressed as a form of training the student to mimic output activations of individual data examples represented by the teacher. We introduce a novel approach, dubbed relational knowledge distillation (RKD), that transfers mutual relations of data examples instead. 
For concrete realizations of RKD, we propose distance-wise and angle-wise distillation losses that penalize structural differences in relations. Experiments conducted on different tasks show that the proposed method improves trained student models by a significant margin. In particular, for metric learning, it allows students to outperform their teachers, achieving state-of-the-art results on standard benchmark datasets.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-652\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-652\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-651\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tResearch on Deep Learning Framework for Julia\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-651\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-652\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Yu Zhang, Yuxiang Zhang, Yitong Huang, Xing Guo, University of Science and Technology of China<\/p>\n<p>Research on Deep Learning Framework for Julia<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-654\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-654\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-653\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tSARA: Self-Replay Augmented Record and Replay for Android in Industrial Cases\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-653\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-654\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Ting Liu, Xi&#8217;an Jiaotong University<\/p>\n<p>SARA: Self-Replay Augmented Record and Replay for Android in Industrial Cases<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-656\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-656\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-655\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tsecGAN: A Cycle-Consistent GAN for Securely-Recoverable Video Transformation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-655\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-656\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Fengyuan Xu, Nanjing University<\/p>\n<p>Video transformation needs to meet new requirements in actual use, such as privacy protection under surveillance scenarios:<\/p>\n<ul>\n<li>The transformed video can be restored to the original 
one.<\/li>\n<li>The transformed video can only be restored by an authorized party.<\/li>\n<\/ul>\n<p>We need a unified translation style and a unique steganography scheme.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-658\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-658\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-657\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tStyleMe: An AI Fashion Consultant for Personal Shopping and Style Advice\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-657\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-658\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Shintami Chusnul Hidayati, Institut Teknologi Sepuluh Nopember; Wen-Huang Cheng, National Chiao Tung University; Jianlong Fu, Microsoft Research<\/p>\n<p>StyleMe: An AI Fashion Consultant for Personal Shopping and Style Advice<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-660\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-660\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-659\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tSystem support for 
designing efficient gradient compression algorithms for distributed DNN training\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-659\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-660\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Cheng Li, University of Science and Technology of China<\/p>\n<p>System support for designing efficient gradient compression algorithms for distributed DNN training<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-662\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-662\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-661\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tTemporal Cause and Effect Localization on Car Crash Videos Via Multi-Task Neural Architecture Search\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-661\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-662\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong>Tackgeun You, POSTECH and Bohyung Han, Seoul National University<\/p>\n<ul>\n<li>Introduce a benchmark for temporal cause and effect localization on car crash videos.<\/li>\n<li>Propose a multi-task baseline for simultaneously conducting temporal cause and effect localization.<\/li>\n<li>Propose a multi-task neural architecture search that decides to share or separate building 
blocks.<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-664\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-664\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-663\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tTowards a Deep and Unified Understanding of Deep Neural Models in NLP\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-663\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-664\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Chaoyu Guan, Shanghai Jiao Tong University<\/p>\n<p>A unified information-based measure: it quantifies the information of each input word that is encoded in an intermediate layer of a deep NLP model.<\/p>\n<p>The information-based measure serves as a tool for:<\/p>\n<ul>\n<li>Evaluating different explanation methods.<\/li>\n<li>Explaining different deep NLP models.<\/li>\n<\/ul>\n<p>This measure enriches the capability of explaining DNNs.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-666\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-666\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-665\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tTowards Complex Text-to-SQL in Cross-Domain Database with Intermediate Representation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-665\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-666\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Ting Liu, Xi&#8217;an Jiaotong University<\/p>\n<p>Towards Complex Text-to-SQL in Cross-Domain Database with Intermediate Representation<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-668\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-668\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-667\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tVibration-Mediated Sensing Techniques for Tangible Interaction\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-667\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-668\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter:<\/strong> Seungmoon Choi and Seungjae Oh, POSTECH<\/p>\n<ul>\n<li>Recognize contact finger(s) on any rigid surfaces by decoding transmitted frequencies<\/li>\n<li>Identify a grasped object by visualizing the propagation dynamics of 
vibration<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-670\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-670\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-669\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tVideo Generation from Natural Language by Decomposing the Components of Video: Background, Object, and Action\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-669\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-670\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Kibeom Hong and Hyeran Byun, Yonsei University<\/p>\n<ul>\n<li style=\"list-style-type: none\">\n<ul>\n<li>A video can be created by separating the background from the foreground, and the foreground can be further divided into objects and actions.<\/li>\n<li>We can extract background and foreground information for video generation from text.<\/li>\n<li>In the image domain, previous works [1,2,3] have studied text-to-image generation extensively, and [4,5,6] extended this idea to the video domain.<\/li>\n<li>In this work, we generate a video from these three components in order to control more realistic and fine-grained details.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-672\"}' data-wp-init=\"callbacks.init\">\n\t\t<div 
class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-672\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-671\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tVideo Dialog via Progressive Inference and Cross-Transformer\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-671\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-672\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Zhou Zhao, Zhejiang University<\/p>\n<p>Video dialog is a new and challenging task, which requires the agent to answer questions combining video information with dialog history. And different from single-turn video question answering, the additional dialog history is important for video dialog, which often includes contextual information for the question. Existing visual dialog methods mainly use RNN to encode the dialog history as a single vector representation, which might be rough and straightforward. Some more advanced methods utilize hierarchical structure, attention and memory mechanisms, which still lack an explicit reasoning process. In this paper, we introduce a novel progressive inference mechanism for video dialog, which progressively updates query information based on dialog history and video content until the agent think the information is sufficient and unambiguous. In order to tackle the multi- modal fusion problem, we propose a cross-transformer module, which could learn more fine-grained and comprehensive interactions both inside and between the modalities. And besides answer generation, we also consider question generation, which is more challenging but significant for a complete video dialog system. 
We evaluate our method on two large-scale datasets, and extensive experiments show the effectiveness of our method.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-674\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-674\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-673\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tWidar 3.0: Zero-Effort Cross-Domain Gesture Recognition with Wi-Fi\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-673\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-674\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Zheng Yang, Tsinghua University<\/p>\n<p>Widar 3.0: Zero-Effort Cross-Domain Gesture Recognition with Wi-Fi<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-676\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-676\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-675\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tYour Tweets Reveal What You Like: Introducing Cross-media Content Information into Multi-domain 
Recommendation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-675\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-676\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Presenter: <\/strong> Min Zhang, Tsinghua University<\/p>\n<p>The key to solving this problem is to conduct better user profiling.<\/p>\n<p>What about off-topic features from other platforms, such as tweets?<\/p>\n<ul>\n<li style=\"list-style-type: none\">\n<ul>\n<li style=\"list-style-type: none\">\n<ul>\n<li>On-topic features are helpful in understanding users\u2019 interests and preferences.<\/li>\n<li>Off-topic features can describe users as well.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>We will try to introduce these off-topic features (tweets) into different rating prediction algorithms.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Information\"} --><!-- wp:freeform --><h3><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-293876\" style=\"vertical-align: top\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/icon-address.png\" alt=\"21ccc-icon-5\" width=\"30\" height=\"30\" \/>\u00a0<strong>Microsoft Address<\/strong><\/h3>\n<p>Venue: Tower 1-1F, No. 
5 Danling Street, Haidian District, Beijing, China<\/p>\n<p>\u5730\u5740\uff1a\u4e2d\u56fd\u5317\u4eac\u6d77\u6dc0\u533a\u4e39\u68f1\u88575\u53f7\u5fae\u8f6f\u5927\u53a61\u53f7\u697c<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Image Gallery\"} --><!-- wp:freeform --><p><ul id='gallery-1' class='gallery galleryid-0 gallery-columns-2 gallery-size-medium stripped ms-row fixed-small'><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5484_095210-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5484_095210-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5484_095210-300x169.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4796_141029-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4796_141029-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4796_141029-300x201.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4892_143429-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4892_143429-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4892_143429-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4873_142816-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4873_142816-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4873_142816-300x201.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4853_142431-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4853_142431-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4853_142431-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4828_141729-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4828_141729-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" 
src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4828_141729-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4812_141434-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4812_141434-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A4812_141434-300x204.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5107_155210-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5107_155210-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5107_155210-300x200.jpg\" alt=\"a man holding a microphone\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5216_164751-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5216_164751-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5216_164751-300x204.jpg\" alt=\"a man holding a guitar\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 
xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5135_161527-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5135_161527-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5135_161527-300x194.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5256_165720-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5256_165720-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/11\/vbox5169_HZ9A5256_165720-300x198.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4908_143958-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4908_143958-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4908_143958-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4935_145935-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4935_145935-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4935_145935-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4951_150340-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4951_150340-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4951_150340-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4964_150600-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4964_150600-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4964_150600-300x203.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4985_151118-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4985_151118-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" 
src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4985_151118-300x201.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4991_151456-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4991_151456-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4991_151456-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5010_151930-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5010_151930-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5010_151930-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5026_152922-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5026_152922-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5026_152922-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5045_153748-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5045_153748-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5045_153748-300x202.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5058_153915-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5058_153915-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5058_153915-300x202.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5063_153948-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5063_153948-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5063_153948-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5066_154304-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5066_154304-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5066_154304-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5069_154334-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5069_154334-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5069_154334-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5094_154830-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5094_154830-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5094_154830-300x202.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5150_161747-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5150_161747-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" 
src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5150_161747-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5172_162144-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5172_162144-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5172_162144-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5204_164458-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5204_164458-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5204_164458-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5209_164611-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5209_164611-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5209_164611-300x204.jpg\" alt=\"a man wearing a blue shirt\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 
s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5220_164922-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5220_164922-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5220_164922-300x196.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5235_165314-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5235_165314-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5235_165314-300x204.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5239_165329-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5239_165329-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5239_165329-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5242_165417-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5242_165417-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5242_165417-300x193.jpg\" alt=\"a group of people posing for the camera\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5246_165556-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5246_165556-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5246_165556-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5251_165637-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5251_165637-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5251_165637-300x199.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5261_165801-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5261_165801-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" 
src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5261_165801-300x205.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5265_165830-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5265_165830-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5265_165830-300x204.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5267_165941-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5267_165941-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5267_165941-300x195.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5278_170321-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5278_170321-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5278_170321-300x202.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 
s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5280_170349-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5280_170349-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5280_170349-300x200.jpg\" alt=\"a group of people posing for the camera\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5281_170401-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5281_170401-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5281_170401-300x187.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5287_170858-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5287_170858-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5287_170858-300x203.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5292_171251-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5292_171251-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5292_171251-300x201.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5297_171556-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5297_171556-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5297_171556-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5327_090112-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5327_090112-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5327_090112-300x200.jpg\" alt=\"Hsiao-Wuen Hon wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5335_090407-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5335_090407-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" 
src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5335_090407-300x200.jpg\" alt=\"Hsiao-Wuen Hon wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5358_091013-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5358_091013-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5358_091013-300x200.jpg\" alt=\"Hsiao-Wuen Hon wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5398_091646-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5398_091646-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5398_091646-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5421_094247-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5421_094247-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5421_094247-300x169.jpg\" alt=\"Hsiao-Wuen Hon 
wearing a suit and tie posing for a photo\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5423_094257-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5423_094257-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5423_094257-300x169.jpg\" alt=\"Hsiao-Wuen Hon standing posing for the camera\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5426_094309-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5426_094309-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5426_094309-300x169.jpg\" alt=\"Hsiao-Wuen Hon wearing a suit and tie posing for a photo\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5430_094323-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5430_094323-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5430_094323-300x169.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5433_094340-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5433_094340-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5433_094340-300x169.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5436_094354-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5436_094354-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5436_094354-300x169.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5438_094405-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5438_094405-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5438_094405-300x169.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5441_094417-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5441_094417-scaled.jpg\" data-caption=\"\" 
class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5441_094417-300x169.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5444_094431-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5444_094431-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5444_094431-300x169.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5448_094442-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5448_094442-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5448_094442-300x169.jpg\" alt=\"Hsiao-Wuen Hon wearing a suit and tie posing for a photo\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5452_094457-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5452_094457-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5452_094457-300x169.jpg\" alt=\"Hsiao-Wuen Hon 
wearing a suit and tie posing for a photo\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5455_094507-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5455_094507-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5455_094507-300x169.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5460_094559-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5460_094559-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5460_094559-300x169.jpg\" alt=\"Monika Yulianti, Tian Pengfei, Hsiao-Wuen Hon posing for a photo\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5467_094653-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5467_094653-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5467_094653-300x169.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5538_100935-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5538_100935-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5538_100935-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5553_101525-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5553_101525-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5553_101525-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5578_102003-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5578_102003-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5578_102003-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5588_102119-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5588_102119-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5588_102119-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5602_102314-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5602_102314-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5602_102314-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5621_103734-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5621_103734-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5621_103734-300x169.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5655_110157-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5655_110157-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" 
src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5655_110157-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5574_101831-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5574_101831-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5574_101831-200x300.jpg\" alt=\"a man wearing a suit and tie holding a cell phone\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5662_110345-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5662_110345-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5662_110345-300x200.jpg\" alt=\"a man wearing a blue shirt\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5701_112037-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5701_112037-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5701_112037-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li 
class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5719_113101-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5719_113101-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5719_113101-300x200.jpg\" alt=\"a person posing for the camera\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5743_113915-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5743_113915-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5743_113915-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5804_121551-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5804_121551-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5804_121551-300x200.jpg\" alt=\"Rong Xu wearing a suit and tie standing in a room\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5805_121614-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5805_121614-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5805_121614-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5806_121636-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5806_121636-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5806_121636-300x200.jpg\" alt=\"a group of people standing in front of a computer\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5807_121643-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5807_121643-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5807_121643-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5808_121720-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5808_121720-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5808_121720-300x200.jpg\" alt=\"a man standing in front of a computer\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5810_121807-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5810_121807-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5810_121807-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5812_121854-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5812_121854-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5812_121854-300x200.jpg\" alt=\"Lip-Bu Tan et al. 
standing in a room\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5815_121901-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5815_121901-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5815_121901-300x200.jpg\" alt=\"Lip-Bu Tan et al. standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5816_121913-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5816_121913-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5816_121913-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5819_124414-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5819_124414-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5819_124414-300x169.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5821_124508-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5821_124508-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5821_124508-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5825_124546-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5825_124546-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5825_124546-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5827_124627-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5827_124627-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5827_124627-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5830_132319-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5830_132319-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5830_132319-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5832_132336-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5832_132336-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5832_132336-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5833_132344-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5833_132344-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5833_132344-300x200.jpg\" alt=\"Monika Yulianti et al. 
standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5834_132401-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5834_132401-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5834_132401-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5837_132423-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5837_132423-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5837_132423-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5839_132459-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5839_132459-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5839_132459-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5840_132519-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5840_132519-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5840_132519-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5841_132525-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5841_132525-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5841_132525-300x200.jpg\" alt=\"a group of people standing next to a man in a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5842_132540-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5842_132540-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5842_132540-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5843_132559-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5843_132559-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5843_132559-300x200.jpg\" alt=\"a woman standing next to a man in a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5847_132657-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5847_132657-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5847_132657-300x200.jpg\" alt=\"a group of people standing in front of a building\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5848_132719-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5848_132719-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5848_132719-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5963_143056-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5963_143056-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5963_143056-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5970_143228-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5970_143228-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5970_143228-300x200.jpg\" alt=\"a group of people sitting at a table using a laptop computer\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5974_143337-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5974_143337-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5974_143337-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5981_143450-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5981_143450-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5981_143450-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5984_143503-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5984_143503-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5984_143503-300x200.jpg\" alt=\"a group of people sitting at a table using a laptop computer\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5985_143512-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5985_143512-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5985_143512-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5988_143540-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5988_143540-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img 
decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5988_143540-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5994_143650-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5994_143650-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5994_143650-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5997_143710-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5997_143710-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5997_143710-300x200.jpg\" alt=\"Xu Shousheng et al. 
sitting at a table\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6009_144012-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6009_144012-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6009_144012-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6016_144106-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6016_144106-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6016_144106-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6018_144302-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6018_144302-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6018_144302-300x200.jpg\" alt=\"a group of people posing for the camera\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6022_144359-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6022_144359-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6022_144359-300x200.jpg\" alt=\"a man looking at the camera\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6027_144509-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6027_144509-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6027_144509-300x200.jpg\" alt=\"a man sitting at a table using a laptop computer\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6029_144609-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6029_144609-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6029_144609-300x200.jpg\" alt=\"a group of people sitting at a table using a laptop\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6031_144720-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6031_144720-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6031_144720-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6047_145101-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6047_145101-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6047_145101-300x200.jpg\" alt=\"a group of people looking at a laptop\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6050_145115-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6050_145115-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6050_145115-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6051_145125-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6051_145125-scaled.jpg\" data-caption=\"\" 
class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6051_145125-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6058_145418-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6058_145418-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6058_145418-300x200.jpg\" alt=\"a man looking at the camera\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6060_145530-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6060_145530-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6060_145530-300x200.jpg\" alt=\"a group of people sitting at a desk\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6076_150914-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6076_150914-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" 
src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6076_150914-300x200.jpg\" alt=\"a person standing posing for the camera\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6080_151113-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6080_151113-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6080_151113-300x200.jpg\" alt=\"a group of people sitting at a desk\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6086_151332-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6086_151332-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6086_151332-300x200.jpg\" alt=\"a group of people in a room\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6093_152405-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6093_152405-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6093_152405-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br 
style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6099_152610-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6099_152610-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6099_152610-300x200.jpg\" alt=\"a man standing in front of a screen\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6106_152801-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6106_152801-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6106_152801-300x200.jpg\" alt=\"a group of people sitting at a desk\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6116_153342-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6116_153342-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6116_153342-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6119_153502.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6119_153502.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6119_153502-200x300.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6121_154027-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6121_154027-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6121_154027-300x200.jpg\" alt=\"a person posing for the camera\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6142_154819-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6142_154819-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6142_154819-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6150_155029-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6150_155029-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img 
decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6150_155029-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6158_155959-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6158_155959-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6158_155959-300x200.jpg\" alt=\"a group of people looking at a laptop\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6159_160013-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6159_160013-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6159_160013-300x200.jpg\" alt=\"a group of people looking at a laptop\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6164_160459-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6164_160459-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6164_160459-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br 
style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6169_160642-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6169_160642-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6169_160642-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6181_161214-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6181_161214-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6181_161214-300x200.jpg\" alt=\"a person wearing glasses\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6194_161535-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6194_161535-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6194_161535-300x200.jpg\" alt=\"a group of people sitting at a desk\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6198_163048-scaled.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6198_163048-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6198_163048-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6217_164042-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6217_164042-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6217_164042-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6241_170824-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6241_170824-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6241_170824-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6250_171041-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6250_171041-scaled.jpg\" data-caption=\"\" 
class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6250_171041-300x200.jpg\" alt=\"a man wearing glasses and a blue shirt\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4770_140241-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4770_140241-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4770_140241-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4789_140724-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4789_140724-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A4789_140724-300x203.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5850_132738.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5850_132738.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5850_132738-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><br 
style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5851_132747.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5851_132747.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5851_132747-300x200.jpg\" alt=\"a man standing in front of a screen\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5853_132817.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5853_132817.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5853_132817-300x200.jpg\" alt=\"a person wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5854_132829.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5854_132829.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5854_132829-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5856_132914.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5856_132914.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5856_132914-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5858_132944.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5858_132944.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5858_132944-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5860_133003.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5860_133003.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5860_133003-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5861_133011.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5861_133011.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" 
src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5861_133011-300x200.jpg\" alt=\"a person standing posing for the camera\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5862_133033.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5862_133033.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5862_133033-300x200.jpg\" alt=\"a group of people standing next to a person\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5863_133045.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5863_133045.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5863_133045-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5867_133123.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5867_133123.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5867_133123-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 
xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5868_133150.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5868_133150.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5868_133150-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5871_133227.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5871_133227.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5871_133227-300x200.jpg\" alt=\"a group of people standing in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5874_133242.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5874_133242.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5874_133242-300x200.jpg\" alt=\"a woman standing in a room\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5875_133251.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5875_133251.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5875_133251-300x200.jpg\" alt=\"a group of people looking at a laptop\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5876_133302.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5876_133302.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5876_133302-300x200.jpg\" alt=\"a person wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5877_133316.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5877_133316.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5877_133316-300x200.jpg\" alt=\"a group of people sitting at a desk in front of a computer\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5883_140732.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5883_140732.jpg\" data-caption=\"\" 
class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5883_140732-300x200.jpg\" alt=\"a group of people sitting at a desk\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5891_140854.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5891_140854.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5891_140854-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5902_141317.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5902_141317.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5902_141317-300x200.jpg\" alt=\"a group of people sitting at a table\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5928_142118.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5928_142118.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5928_142118-300x200.jpg\" alt=\"a group of people sitting at a table\" 
class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5935_142520.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5935_142520.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5935_142520-300x200.jpg\" alt=\"a man sitting at a table using a laptop computer\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5948_142750.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5948_142750.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A5948_142750-300x200.jpg\" alt=\"a man looking at the camera\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6188_161355.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6188_161355.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6188_161355-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6210_163540.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6210_163540.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6210_163540-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6213_163553.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6213_163553.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6213_163553-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6220_170022.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6220_170022.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6220_170022-300x169.jpg\" alt=\"a group of people in a room\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6221_170214.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6221_170214.jpg\" data-caption=\"\" 
class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6221_170214-300x200.jpg\" alt=\"Prof Lawrence Jun Zhang wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6183_161222.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6183_161222.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6183_161222-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/1.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/1.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/1-300x200.jpg\" alt=\"a man wearing a suit and tie\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6266_182202.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6266_182202.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6266_182202-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6270_182317.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6270_182317.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6270_182317-300x200.jpg\" alt=\"a screen shot of a man\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6273_182403.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6273_182403.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6273_182403-300x200.jpg\" alt=\"a screenshot of a man\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6276_182447.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6276_182447.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6276_182447-300x200.jpg\" alt=\"a screenshot of a man\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6277_182523.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6277_182523.jpg\" 
data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6277_182523-300x169.jpg\" alt=\"a screen shot of a person\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6314_183444.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6314_183444.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6314_183444-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6280_182549.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6280_182549.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6280_182549-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6282_182619.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6282_182619.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6282_182619-300x200.jpg\" alt=\"a group of people posing for the camera\" class=\"db full-width\" 
\/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6288_182711.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6288_182711.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6288_182711-300x200.jpg\" alt=\"a screenshot of a man\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6293_182903.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6293_182903.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6293_182903-300x200.jpg\" alt=\"a screen shot of a person\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6295_182942.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6295_182942.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6295_182942-300x200.jpg\" alt=\"a screen shot of a man\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/2.jpg\" 
data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/2.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/2-300x200.jpg\" alt=\"a screen shot of a person\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6300_183040-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6300_183040-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6300_183040-300x200.jpg\" alt=\"a screenshot of a man\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6304_183146-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6304_183146-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6304_183146-300x200.jpg\" alt=\"a screenshot of a man\" class=\"db full-width\" \/><\/a><\/li><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6308_183249-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6308_183249-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" 
src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6308_183249-300x200.jpg\" alt=\"\" class=\"db full-width\" \/><\/a><\/li><br style=\"clear: both\" \/><li class='s-col-12-24 xs-margin-bottom-sp1 s-margin-bottom-sp2'><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6290_182806-scaled.jpg\" data-mfp-src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6290_182806-scaled.jpg\" data-caption=\"\" class=\"gallery-item\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/vbox5169_HZ9A6290_182806-300x200.jpg\" alt=\"a screenshot of a computer screen\" class=\"db full-width\" \/><\/a><\/li>\n\t\t\t<br style='clear: both' \/>\n\t\t<\/ul>\n<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- \/wp:msr\/content-tabs -->","tab-content":[{"id":0,"name":"About","content":"The Academic Day 2019 event brings together the intellectual power of researchers from across Microsoft Research Asia and the academic community to attain a shared understanding of the contemporary ideas and issues facing the field of tech. Together, we will advance the frontier of technology towards an ideal world of computing.\r\n\r\nThrough our Microsoft Research Outreach Programs, Microsoft Research Asia has been actively collaborating with academic institutions to promote and progress further development in computer science and other technology domains. 
We have an ever-expanding partnership with leading universities across the Asia Pacific region to advance state-of-the-art research through various programs and initiatives.\r\n\r\nWe are excited for \u201cMicrosoft Research Asia Academic Day 2019\u201d to facilitate comprehensive and insightful exchanges between Microsoft Research Asia and the academic community.\r\n<h2>Program Chairs<\/h2>\r\n<ul class=\"msr-people-list stripped ms-row no-margin-bottom\">\r\n \t<li class=\"xs-col-12-24 s-col-8-24 m-col-6-24 l-col-8-24 margin-bottom-sp3\"><img class=\"avatar avatar-180 photo msr-profile-image\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/miran_lee.png\" alt=\"\" width=\"300\" height=\"300\" \/>\r\n<p class=\"body-alt no-margin-bottom\">Miran Lee<\/p>\r\n<p class=\"body-alt no-margin-bottom\">Outreach Director<\/p>\r\n<\/li>\r\n \t<li class=\"xs-col-12-24 s-col-8-24 m-col-6-24 l-col-8-24 margin-bottom-sp3\"><img class=\"avatar avatar-180 photo msr-profile-image\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/yongqiang_xiong.jpg\" alt=\"Portrait of Yongqiang Xiong\" width=\"150\" height=\"150\" \/>\r\n<p class=\"body-alt no-margin-bottom\">Yongqiang Xiong<\/p>\r\n<p class=\"body-alt no-margin-bottom\">Principal Research Manager<\/p>\r\n<\/li>\r\n \t<li class=\"xs-col-12-24 s-col-8-24 m-col-6-24 l-col-8-24 margin-bottom-sp3\"><img class=\"avatar avatar-180 photo msr-profile-image\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/07\/lyx-2019.png\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<p class=\"body-alt no-margin-bottom\">Yunxin Liu<\/p>\r\n<p class=\"body-alt no-margin-bottom\">Principal Research Manager<\/p>\r\n<\/li>\r\n \t<li class=\"xs-col-12-24 s-col-8-24 m-col-6-24 l-col-8-24 margin-bottom-sp3\"><img class=\"avatar avatar-180 photo msr-profile-image\" 
src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2016\/08\/avatar_user__1470987161-180x180.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<p class=\"body-alt no-margin-bottom\">Tao Qin<\/p>\r\n<p class=\"body-alt no-margin-bottom\">Senior Principal Research Manager<\/p>\r\n<\/li>\r\n \t<li class=\"xs-col-12-24 s-col-8-24 m-col-6-24 l-col-8-24 margin-bottom-sp3\"><img class=\"avatar avatar-180 photo msr-profile-image\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2016\/07\/avatar_user__1468038567-180x180.png\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<p class=\"body-alt no-margin-bottom\">Wenjun Zeng<\/p>\r\n<p class=\"body-alt no-margin-bottom\">Senior Principal Research Manager<\/p>\r\n<\/li>\r\n<\/ul>"},{"id":1,"name":"Agenda","content":"<h2>November 7<\/h2>\r\n[accordion]\r\n\r\n[panel header=\"Workshop on System and Networking for AI\"]\r\n\r\n<strong>Abstract<\/strong>: We live in a world of connected entities including various systems (ranging from big cloud and edge systems to individual memory and disk systems) networked together. Innovations in systems and networking are key driving forces in the era of big data and artificial intelligence, to empower advanced intelligent algorithms with reliable, secure, scalable and efficient computing capacity to process huge volumes of data. We have witnessed the significant progress in cloud systems, and recently, edge computing, in particular AI on Edge, has attracted increasing attention from both academia and industry. 
This workshop aims to report and discuss the most recent progress and trends in the general systems and networking area, especially various forms of infrastructure support for machine learning systems.\r\n\r\n<strong>Event owners<\/strong>: <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/yunliu\/\">Yunxin Liu<\/a>, <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/yqx\/\">Yongqiang Xiong<\/a>\r\n\r\n&nbsp;\r\n<table class=\"msr-table-schedule\" style=\"border-spacing: inherit;border-collapse: collapse\" width=\"100%\">\r\n<thead class=\"thead\">\r\n<tr class=\"tr\">\r\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 20%\" width=\"20%\">Time (CST)<\/th>\r\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 30%\" width=\"30%\">Workshops<\/th>\r\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 35%\" width=\"35%\">Speaker<\/th>\r\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 15%\" width=\"15%\">Location<\/th>\r\n<\/tr>\r\n<\/thead>\r\n<tbody class=\"tbody\">\r\n<tr class=\"tr\">\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">2:00 PM\u20132:10 PM<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Welcome and introductions<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Yunxin Liu &amp; Yongqiang Xiong, Microsoft Research<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Dong Zhi Men, Microsoft Tower 1-1F<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">2:10 PM\u20133:25 PM<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Research at Microsoft (25 mins per talk x3)<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid 
#000000\">\r\n<ul>\r\n \t<li>Peng Cheng, Microsoft Research<\/li>\r\n \t<li>Ting Cao, Microsoft Research<\/li>\r\n \t<li>Quanlu Zhang, Microsoft Research<\/li>\r\n<\/ul>\r\n<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">3:25 PM\u20134:40 PM<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Research talks (25 mins per talk x3)<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\r\n<ul>\r\n \t<li>Chuan Wu, University of Hong Kong<\/li>\r\n \t<li>Xuanzhe Liu, Peking University<\/li>\r\n \t<li>Rajesh Krishna Balan, Singapore Management University<\/li>\r\n<\/ul>\r\n<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">4:40 PM\u20135:20 PM<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Panel with discussion\r\n\r\nTitle: \u201cWhat\u2019s missing in system &amp; networking for AI?\u201d<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\r\n<ul>\r\n \t<li>Yunxin Liu, Microsoft Research (Moderator)<\/li>\r\n \t<li>Yongqiang Xiong, Microsoft Research (Moderator)<\/li>\r\n \t<li>Chuan Wu, University of Hong Kong<\/li>\r\n \t<li>Xuanzhe Liu, Peking University<\/li>\r\n \t<li>Rajesh Krishna Balan, Singapore Management University<\/li>\r\n \t<li>Peng Cheng, Microsoft Research<\/li>\r\n \t<li>Ting Cao, Microsoft Research<\/li>\r\n \t<li>Quanlu Zhang, Microsoft Research<\/li>\r\n<\/ul>\r\n<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">5:20 PM\u20135:30 
PM<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Wrap-up and closing<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n[\/panel]\r\n\r\n[panel header=\"Workshop on Low-Resource Machine Learning\"]\r\n\r\n<strong>Abstract<\/strong>: Deep learning has greatly driven this wave of AI. While deep learning has made many breakthroughs in recent years, its success heavily relies on big labeled data, big model, and big computing. As edge computing becomes the trend and more and more IoT devices become available, deep learning faces the low-resource challenge: how to learn from limited labeled data, with limited model size, and limited computation resources. The theme of this workshop is low-resource machine learning: learning from low-resource data, learning compact models, and learning with limited computational resources. 
This workshop aims to report the latest progress and discuss the trends and frontiers of research on low-resource machine learning.\r\n\r\n<strong>Event owner<\/strong>: <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/taoqin\/\">Tao Qin<\/a>\r\n\r\n&nbsp;\r\n<table class=\"msr-table-schedule\" style=\"border-spacing: inherit;border-collapse: collapse\" width=\"100%\">\r\n<thead class=\"thead\">\r\n<tr class=\"tr\">\r\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 20%\" width=\"20%\">Time (CST)<\/th>\r\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 30%\" width=\"30%\">Workshops<\/th>\r\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 35%\" width=\"35%\">Speaker<\/th>\r\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 15%\" width=\"15%\">Location<\/th>\r\n<\/tr>\r\n<\/thead>\r\n<tbody class=\"tbody\">\r\n<tr class=\"tr\">\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">2:00 PM\u20132:10 PM<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Welcome and introductions<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Tao Qin, Microsoft Research<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Xi Zhi Men, Microsoft Tower 1-1F<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">2:10 PM\u20133:25 PM<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Research at Microsoft (25 mins per talk x3)<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\r\n<ul>\r\n \t<li>Yingce Xia, Microsoft Research<\/li>\r\n \t<li>Xu Tan, Microsoft Research<\/li>\r\n \t<li>Guolin Ke, Microsoft Research<\/li>\r\n<\/ul>\r\n<\/td>\r\n<td 
style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">3:25 PM\u20134:40 PM<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Research talks (25 mins per talk x3)<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\r\n<ul>\r\n \t<li>Jaegul Choo, Korea University<\/li>\r\n \t<li>Sinno Jialin Pan, Nanyang Technological University<\/li>\r\n \t<li>Sung Ju Hwang, KAIST<\/li>\r\n<\/ul>\r\n<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">4:40 PM\u20135:20 PM<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Panel with discussion\r\n\r\nTitle: \u201cChallenges and Future of Low-Resource Machine Learning\u201d<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\r\n<ul>\r\n \t<li>Tao Qin, Microsoft Research (Moderator)<\/li>\r\n \t<li>Jaegul Choo, Korea University<\/li>\r\n \t<li>Sung Ju Hwang, KAIST<\/li>\r\n \t<li>Shujie Liu, Microsoft Research<\/li>\r\n \t<li>Dongdong Zhang, Microsoft Research<\/li>\r\n<\/ul>\r\n<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">5:20 PM\u20135:30 PM<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Wrap-up and closing<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n[\/panel]\r\n\r\n[panel header=\"Workshop 
on Multimodal Representation Learning and Applications\"]\r\n\r\n<strong>Abstract<\/strong>: We live in a world of multimedia (text, image, video, audio, sensor data, 3D, etc.). These modalities are integral components of real-world events and applications. A full understanding of multimedia relies heavily on feature learning, entity recognition, knowledge, reasoning, language representation, etc. Cross-modal learning, which requires joint feature learning and cross-modal relationship modeling, has attracted increasing attention from both academia and industry. This workshop aims to report and discuss the most recent progress and trends on multimodal representation learning for multimedia applications.\r\n\r\n<strong>Event owners<\/strong>: <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<\/a>, <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/nanduan\/\">Nan Duan<\/a>\r\n\r\n&nbsp;\r\n<table class=\"msr-table-schedule\" style=\"border-spacing: inherit;border-collapse: collapse\" width=\"100%\">\r\n<thead class=\"thead\">\r\n<tr class=\"tr\">\r\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 20%\" width=\"20%\">Time (CST)<\/th>\r\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 30%\" width=\"30%\">Workshops<\/th>\r\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 35%\" width=\"35%\">Speaker<\/th>\r\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 15%\" width=\"15%\">Location<\/th>\r\n<\/tr>\r\n<\/thead>\r\n<tbody class=\"tbody\">\r\n<tr class=\"tr\">\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">2:00 PM\u20132:10 PM<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Welcome and introductions<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Wenjun Zeng, Microsoft 
Research<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Tian An Men, Microsoft Tower 1-1F<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">2:10 PM\u20133:25 PM<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Research at Microsoft (25 mins per talk x3)<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\r\n<ul>\r\n \t<li>Nan Duan, Microsoft Research<\/li>\r\n \t<li>Yue Cao, Microsoft Research<\/li>\r\n \t<li>Chong Luo, Microsoft Research<\/li>\r\n<\/ul>\r\n<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">3:25 PM\u20134:40 PM<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Research talks (25 mins per talk x3)<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\r\n<ul>\r\n \t<li>Gunhee Kim, Seoul National University<\/li>\r\n \t<li>Winston Hsu, National Taiwan University<\/li>\r\n \t<li>Jiwen Lu, Tsinghua University<\/li>\r\n<\/ul>\r\n<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">4:40 PM\u20135:20 PM<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Panel with discussion\r\n\r\nTitle: \u201cOpportunities and Challenges for Cross-Modal Learning\u201d<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\r\n<ul>\r\n \t<li>Wenjun Zeng, Microsoft Research (Moderator)<\/li>\r\n \t<li>Xilin Chen, Chinese Academy of Sciences<\/li>\r\n \t<li>Winston Hsu, National Taiwan University<\/li>\r\n \t<li>Gunhee 
Kim, Seoul National University<\/li>\r\n \t<li>Nan Duan, Microsoft Research<\/li>\r\n<\/ul>\r\n<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">5:20 PM\u20135:30 PM<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Wrap-up and closing<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n[\/panel] [\/accordion]\r\n<h2>November 8<\/h2>\r\n<table class=\"msr-table-schedule\" style=\"border-spacing: inherit;border-collapse: collapse\" width=\"100%\">\r\n<thead class=\"thead\">\r\n<tr class=\"tr\">\r\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 20%\" width=\"20%\">Time (CST)<\/th>\r\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 30%\" width=\"30%\">Workshops<\/th>\r\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 35%\" width=\"35%\">Speaker<\/th>\r\n<th class=\"th\" style=\"padding: 8px;border-bottom: 1px solid #000000;width: 15%\" width=\"15%\">Location<\/th>\r\n<\/tr>\r\n<\/thead>\r\n<tbody class=\"tbody\">\r\n<tr class=\"tr\">\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">09:00 \u2013 09:30<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Welcome &amp; MSRA Overview<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Hsiao-Wuen Hon<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Gu Gong, Microsoft Tower 1-1F<\/td>\r\n<\/tr>\r\n<tr class=\"tr\">\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">09:30 
\u2013 09:40<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Fellowship Award Ceremony<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Presenter: Hsiao-Wuen Hon<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<\/tr>\r\n<tr class=\"tr\">\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">09:40 \u2013 10:00<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Photo session &amp; Break<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<\/tr>\r\n<tr class=\"tr\">\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">10:00 \u2013 10:40<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Panel Discussion\r\n\r\nTitle: \u201cHow to foster a computer scientist\u201d<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Moderator: Tim Pan, Microsoft Research\r\n\r\nPanelists:\r\n<ul>\r\n \t<li>Bohyung Han, Seoul National University<\/li>\r\n \t<li>Junichi Rekimoto, The University of Tokyo<\/li>\r\n \t<li>Winston Hsu, National Taiwan University<\/li>\r\n \t<li>Xin Tong, Microsoft Research<\/li>\r\n<\/ul>\r\n<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<\/tr>\r\n<tr class=\"tr\">\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">10:40 \u2013 11:55<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Technology Showcase by Microsoft Research Asia (5)<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">\r\n<ul>\r\n \t<li>\u201cOneOCR For Digital 
Transformation\u201d by Qiang Huo<\/li>\r\n \t<li>\u201cNN grammar check\u201d by Tao Ge<\/li>\r\n \t<li>\u201cAutoSys: Learning based approach for system optimization\u201d by Mao Yang<\/li>\r\n \t<li>\u201cDual learning and its application in translation and speech from ML\u201d by Tao Qin (Yingce Xia and Xu Tan)<\/li>\r\n \t<li>\u201cSpreadsheet Intelligence for Ideas in Excel\u201d by Shi Han<\/li>\r\n<\/ul>\r\n<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<\/tr>\r\n<tr class=\"tr\">\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">12:00 \u2013 14:00<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Technology Showcase by Academic Collaborators<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Lunch, Microsoft Tower 1-1F<\/td>\r\n<\/tr>\r\n<tr class=\"tr\">\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">14:00 \u2013 17:30<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Breakout Sessions<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<\/tr>\r\n<tr class=\"tr\">\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Language and Knowledge<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leader: Xing Xie\r\n\r\nSpeakers: Seung-won Hwang, Min Zhang, Lei Chen, Masatoshi Yoshikawa, Shou-De Lin, Rui Yan, Hiroaki Yamane, Chenhui Chu, Tadashi Nomoto<\/td>\r\n<td style=\"padding: 8px;vertical-align: 
middle;border-bottom: 1px solid #000000\">Zhong Guan Cun, Microsoft Tower 2-4F<\/td>\r\n<\/tr>\r\n<tr class=\"tr\">\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">System and Networking<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leaders: Lidong Zhou, Yunxin Liu\r\n\r\nSpeakers: Insik Shin, Wenfei Wu, Rajesh Krishna Balan, Youyou Lu, Chuck Yoo, Yu Zhang, Atsuko Miyaji, Jingwen Leng, Yao Guo, Heejo Lee, Cheng Li<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">San Li Tun, Microsoft Tower 2-4F<\/td>\r\n<\/tr>\r\n<tr class=\"tr\">\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Computer Vision<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leader: Wenjun Zeng\r\n\r\nSpeakers: Gunhee Kim, Tianzhu Zhang, Yonggang Wen, Wen-Huang Cheng, Jiaying Liu, Bohyung Han, Wei-Shi Zheng, Jun Takamatsu, Xueming Qian<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Qian Men, Microsoft Tower 2-4F<\/td>\r\n<\/tr>\r\n<tr class=\"tr\">\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Graphics<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leader: Xin Tong\r\n\r\nSpeakers: Min H. 
Kim, Seungyong Lee, Sung-eui Yoon<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Di Tan, Microsoft Tower 2-4F<\/td>\r\n<\/tr>\r\n<tr class=\"tr\">\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Multimedia<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leader: Yan Lu\r\n\r\nSpeakers: Seung Ah Lee, Huanjing Yue, Hiroki Watanabe, Minsu Cho, Zhou Zhao, Seungmoon Choi<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Gu Lou, Microsoft Tower 2-4F<\/td>\r\n<\/tr>\r\n<tr class=\"tr\">\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Healthcare<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leader: Eric Chang\r\n\r\nSpeakers: Ryo Furukawa, Winston Hsu<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Dong Cheng, Microsoft Tower 2-4F<\/td>\r\n<\/tr>\r\n<tr class=\"tr\">\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Data, Knowledge, and Intelligence<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leaders: Jian-Guang Lou, Qingwei Lin\r\n\r\nSpeakers: Shixia Liu, Huamin Qu, Jong Kim, Yingcai Wu<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Xi Cheng, Microsoft Tower 2-4F<\/td>\r\n<\/tr>\r\n<tr class=\"tr\">\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid 
#000000\">Machine Learning<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leader: Tao Qin\r\n\r\nSpeakers: Hongzhi Wang, Seong-Whan Lee, Sinno Jialin Pan, Lijun Zhang, Jaegul Choo, Mingkui Tan, Liwei Wang<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Ri Tan, Microsoft Tower 2-4F<\/td>\r\n<\/tr>\r\n<tr class=\"tr\">\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Speech<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Leader: Frank Soong\r\n\r\nSpeakers: Jun Du, Hong-Goo Kang<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Guo Zi Jian, Microsoft Tower 2-4F<\/td>\r\n<\/tr>\r\n<tr class=\"tr\">\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">17:30-18:00<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Transition Break<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<\/tr>\r\n<tr class=\"tr\">\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">18:15 \u2013 20:30<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Banquet<\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\"><\/td>\r\n<td style=\"padding: 8px;vertical-align: middle;border-bottom: 1px solid #000000\">Ballroom located @ 3F, Tylfull Hotel<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>"},{"id":2,"name":"Abstracts","content":"<h2>Workshops<\/h2>\r\n[accordion]\r\n\r\n[panel header=\"AI Platform Acceleration with Programmable Hardware\"]\r\n\r\n<strong>Speaker<\/strong>: Peng 
Cheng, Microsoft Research\r\n\r\nProgrammable hardware has been used to build high-throughput, low-latency real-time core AI engines such as BrainWave. Beyond the AI engine itself, we focus on using programmable hardware to accelerate AI-platform bottlenecks such as storage and networking I\/O, model distribution, synchronization, and data pre-processing in machine learning tasks. Our proposed system enables direct hardware-assisted device-to-device interconnection with inline processing. We chose FPGAs for our first prototype of a general platform for AI acceleration, since FPGAs have been widely deployed in Azure to achieve high performance at much lower cost. Our system can accelerate AI in many aspects. It currently enables GPUs to fetch training data directly from storage into GPU memory, bypassing costly CPU involvement. As an intelligent hub, it can also perform inline data pre-processing efficiently. More acceleration scenarios are under development, including in-network inference acceleration and a hardware parameter server for distributed machine learning.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Audio captioning and knowledge-grounded conversation\"]\r\n\r\n<strong>Speaker<\/strong>: Gunhee Kim, Seoul National University\r\n\r\nIn this talk, I will introduce two recent works on NLP from the Vision and Learning Lab of Seoul National University. First, we present our work exploring the problem of audio captioning: generating a natural language description for any kind of audio in the wild, a problem that has been surprisingly unexplored in previous research. We not only contribute a large-scale dataset of about 46K pairs of audio clips and human-written text collected via crowdsourcing, but also propose two novel components that help improve the audio captioning performance of attention-based neural models. 
Second, I discuss our work on knowledge-grounded dialogues, in which we address the problem of better modeling knowledge selection in multi-turn knowledge-grounded dialogue. We propose a sequential latent variable model as the first approach to this matter. Our experimental results show that the proposed model improves the knowledge selection accuracy and subsequently the performance of utterance generation. [\/panel]\r\n\r\n[panel header=\"Building Large-Scale Decentralized Intelligent Software Systems\"]\r\n\r\n<strong>Speaker:<\/strong> Xuanzhe Liu, Peking University\r\n\r\nWe are in a fast-growing flood of \"data\" and we significantly benefit from the \"intelligence\" derived from it. Such intelligence heavily relies on the centralized paradigm, i.e., cloud-based systems and services. However, we realize that we are also at the dawn of an emerging \"decentralized\" fashion that makes intelligence more pervasive and even \"handy\" on smartphones, wearables, and IoT devices, along with the collaborations among them and the cloud. This talk discusses some technical challenges and opportunities of building decentralized intelligence, mostly from a software system perspective, covering aspects of programming abstraction, performance, privacy, energy, and interoperability. We also share our recent efforts on building such software systems and our industrial experiences. [\/panel]\r\n\r\n[panel header=\"Coloring with Limited Data: Few-Shot Colorization via Memory-Augmented Networks\"]\r\n\r\n<strong>Speaker:<\/strong> Jaegul Choo, Korea University\r\n\r\nDespite recent advancements, deep learning-based automatic colorization models are still limited when it comes to few-shot learning, as existing models require a significant amount of training data. To tackle this issue, we present a novel memory-augmented colorization model that can produce high-quality colorization with limited data. 
In particular, our model can capture rare instances and successfully colorize them. We also propose a novel threshold triplet loss that enables unsupervised training of memory networks without the need for class labels. Experiments show that our model has superior quality in both few-shot and one-shot colorization tasks.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"FastSpeech: Fast, Robust and Controllable Text to Speech\"]\r\n\r\n<strong>Speaker:<\/strong> Xu Tan, Microsoft Research\r\n\r\nNeural network based end-to-end text to speech (TTS) has significantly improved the quality of synthesized speech. However, neural network based end-to-end models suffer from slow inference speed, and the synthesized speech is usually not robust (i.e., some words are skipped or repeated) and lacks controllability (voice speed or prosody control). In this work, we propose a novel feed-forward network based on Transformer to generate mel-spectrograms in parallel for TTS. Experiments show that our parallel model matches autoregressive models in terms of speech quality, nearly eliminates the problem of word skipping and repeating in particularly hard cases, and can adjust voice speed smoothly. Most importantly, compared with autoregressive Transformer TTS, our model speeds up mel-spectrogram generation by 270x and end-to-end speech synthesis by 38x.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Improving the Performance of Video Analytics Using WiFi Signals\"]\r\n\r\n<strong>Speaker:<\/strong> Rajesh Krishna Balan, Singapore Management University\r\n\r\nAutomatic analysis of the behaviour of large groups of people is a key requirement for a large class of important applications such as crowd management, traffic control, and surveillance. For example, attributes such as the number of people, how they are distributed, which groups they belong to, and what trajectories they are taking can be used to optimize the layout of a mall to increase overall revenue. 
A common way to obtain these attributes is to use video camera feeds coupled with advanced video analytics solutions. However, solely utilizing video feeds is challenging in high people-density areas, such as a normal mall in Asia, as the high people density significantly reduces the effectiveness of video analytics due to factors such as occlusion. In this work, we propose to combine video feeds with WiFi data to achieve better classification results for the number of people in an area and the trajectories of those people. In particular, we believe that our approach will combine the strengths of the two different sensors, WiFi and video, while reducing the weaknesses of each sensor. This work started fairly recently, and we will present our thoughts and results to date.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Learning Beyond 2D Images\"]\r\n\r\n<strong>Speaker:<\/strong> Winston Hsu, National Taiwan University\r\n\r\nWe have observed super-human capabilities from current (2D) convolutional networks for images -- for both discriminative and generative models. In this talk, we will show our recent attempts at visual cognitive computing beyond 2D images. We will first demonstrate the huge opportunities in augmenting learning with temporal cues, 3D (point cloud) data, raw data, audio, etc., in emerging domains such as entertainment, security, healthcare, and manufacturing. In an explainable manner, we will justify how to design neural networks that leverage these novel (and diverse) modalities, and demystify the pros and cons of these novel signals. We will showcase tangible applications ranging from video QA and robotic object referring to situation understanding and autonomous driving. We will also review the lessons we learned in designing advanced neural networks that accommodate multimodal signals in an end-to-end manner. 
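Multimodal networks of this kind are commonly built by embedding each modality separately and fusing the embeddings before a shared prediction head. A minimal late-fusion sketch (our own illustration, not the speaker's models; all dimensions and weights are made-up toy values):

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, w):
    """Project a raw modality feature vector into a shared embedding space."""
    return np.tanh(x @ w)

def late_fusion(image_feat, audio_feat, point_feat, w_img, w_aud, w_pts, w_head):
    """Embed each modality separately, concatenate, and classify."""
    z = np.concatenate([
        embed(image_feat, w_img),
        embed(audio_feat, w_aud),
        embed(point_feat, w_pts),
    ])
    return z @ w_head  # logits over classes

# Toy dimensions: 2048-d image, 128-d audio, 256-d point-cloud features,
# each embedded to 64-d, fused into a 10-way classifier head.
w_img = rng.normal(size=(2048, 64)) * 0.01
w_aud = rng.normal(size=(128, 64)) * 0.01
w_pts = rng.normal(size=(256, 64)) * 0.01
w_head = rng.normal(size=(192, 10)) * 0.01

logits = late_fusion(rng.normal(size=2048), rng.normal(size=128),
                     rng.normal(size=256), w_img, w_aud, w_pts, w_head)
print(logits.shape)  # (10,)
```

In practice each `embed` would be a modality-specific deep network trained end-to-end, but the fusion structure is the same.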
[\/panel]\r\n\r\n[panel header=\"LightGBM: A highly efficient gradient boosting machine\"]\r\n\r\n<strong>Speaker<\/strong>: Guolin Ke, Microsoft Research\r\n\r\nGradient Boosting Decision Tree (GBDT) is a popular machine learning algorithm and widely-used in the real-world applications. We open-sourced LightGBM, which contains many critical optimizations for the efficient training of GBDT and becomes one of the most popular GBDT tools. During this talk, I will introduce the key technologies behind LightGBM.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"MobiDL: Unleash the Mobile CPU Computing Power for Deep Learning Inference\"]\r\n\r\n<strong>Speaker:<\/strong> Ting Cao, Microsoft Research\r\n\r\nDeep learning (DL) models are increasingly deployed into real-world applications on mobile devices. However, current mobile DL frameworks neglect the CPU asymmetry, and the CPUs are seriously underutilized. We propose MobiDL for mobile DL inference, targeting improved CPU utilization and energy efficiency through novel designs for hardware asymmetry and appropriate frequency setting. It integrates four main techniques: 1) cost-model directed matrix block partition; 2) prearranged memory layout for model parameters; 3) asymmetry-aware task scheduling; and 4) data-reuse based CPU frequency setting. During the one-time initialization, the proper block partition, parameter layout, and efficient frequency for DL models can be configured by MobiDL. During inference, MobiDL scheduling balances tasks to fully utilize all the CPU cores. Evaluation shows that for CNN models, MobiDL can achieve 85% performance and 72% energy efficiency improvement on average compared to default TensorFlow. For RNN, it achieves up-to 17.51X performance and 8.26X energy efficiency improvement. [\/panel]\r\n\r\n[panel header=\"Multi-agent dual learning\"]\r\n\r\n<strong>Speaker<\/strong>: Yingce Xia, Microsoft Research\r\n\r\nDual learning is our recently proposed framework, where a primal task (e.g. 
Chinese-to-English translation) and a dual task (e.g., English-to-Chinese translation) are jointly optimized through a feedback signal. We extend standard dual learning to multi-agent dual learning, where multiple models for the primal task and multiple models for the dual task are evolved. In this case, the feedback signal is enhanced and we obtain better performance. Experimental results in low-resource settings show that our method works well. In the WMT'19 machine translation competition, we won top places in four tracks using multi-agent dual learning.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Multi-view Deep Learning for Visual Content Understanding\"]\r\n\r\n<strong>Speaker:<\/strong> Jiwen Lu, Tsinghua University\r\n\r\nIn this talk, I will give an overview of trends in multi-view deep learning techniques and discuss how they are used to improve the performance of various visual content understanding tasks. Specifically, I will present three multi-view deep learning approaches: multi-view deep metric learning, multi-modal deep representation learning, and multi-agent deep reinforcement learning, and show how these methods are used for visual content understanding tasks. Lastly, I will discuss some open problems in multi-view deep learning and how to further develop more advanced multi-view deep learning methods for computer vision in the future. [\/panel]\r\n\r\n[panel header=\"NNI: An open source toolkit for neural architecture search and hyper-parameter tuning\"]\r\n\r\n<strong>Speaker<\/strong>: Quanlu Zhang, Microsoft Research\r\n\r\nRecent years have witnessed the great success of deep learning in a broad range of applications. Model tuning has become a key step in finding good models. To be effective in practice, a system is needed to facilitate this tuning procedure in terms of both programming effort and search efficiency. 
Thus, we open-sourced NNI (Neural Network Intelligence), a toolkit for neural architecture search and hyper-parameter tuning, which provides an easy-to-use interface and rich built-in AutoML algorithms. Moreover, it is highly extensible to support new tuning algorithms and requirements. With high scalability, many trials can run in parallel on various training platforms.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Pre-training for Video-Language Cross-Modal Tasks\"]\r\n\r\n<strong>Speaker:<\/strong> Chong Luo, Microsoft Research\r\n\r\nVideo-language cross-modal tasks have received increasing interest in recent years, from video retrieval and video captioning to spatial-temporal localization in video by language query. In this talk, we will present research on and applications of some of these tasks. We will show how pre-trained single-modality models have made these tasks tractable and discuss the paradigm shift in deep neural network design with pre-trained models. In addition, we propose a universal cross-modality pre-training framework which may benefit a wide range of video-language tasks. We hope that our work will provide inspiration to other researchers in solving these interesting but challenging cross-modal tasks. [\/panel]\r\n\r\n[panel header=\"Resource Scheduling for Distributed Deep Training\"]\r\n\r\n<strong>Speaker:<\/strong> Chuan Wu, University of Hong Kong\r\n\r\nMore and more companies and institutions are running AI clouds and machine learning clusters with various ML model training workloads to support AI-driven services. Efficient resource scheduling is key to maximizing the performance of ML workloads, as well as the hardware efficiency of these very expensive ML clusters. There is large room for improving today\u2019s ML cluster schedulers, e.g., by including interference awareness in task placement and by scheduling not only computation but also communication. 
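Interference-aware task placement can be illustrated with a toy greedy heuristic: assign each job to the machine where its added interference with already-placed jobs is smallest. This is our own illustrative sketch, not one of the schedulers presented in the talk; the job types and interference table are invented:

```python
# Toy interference-aware placement: put each job on the machine that
# minimizes pairwise interference with jobs already placed there.
# (Illustrative only; job types and slowdown values are made up.)

INTERFERENCE = {  # estimated slowdown when two job types share a machine
    ("cnn", "cnn"): 0.30, ("cnn", "rnn"): 0.10, ("rnn", "rnn"): 0.25,
}

def pair_cost(a, b):
    key = (a, b) if (a, b) in INTERFERENCE else (b, a)
    return INTERFERENCE.get(key, 0.0)

def place(jobs, num_machines):
    machines = [[] for _ in range(num_machines)]
    for job in jobs:
        # Added interference cost of placing `job` on each machine.
        costs = [sum(pair_cost(job, other) for other in m) for m in machines]
        machines[costs.index(min(costs))].append(job)
    return machines

print(place(["cnn", "cnn", "rnn", "rnn"], 2))  # [['cnn', 'rnn'], ['cnn', 'rnn']]
```

Real schedulers must additionally handle job arrival order, resource limits, and communication placement, which is where the heuristics and reinforcement learning approaches discussed in the talk come in.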
In this talk, I will share our recent work on designing deep learning job schedulers for ML clusters, aiming to expedite training and minimize training completion times. Our schedulers decide communication scheduling, the number of workers\/PSs, and the placement of workers\/PSs for jobs in the cluster, through both heuristics with theoretical support and reinforcement learning approaches. [\/panel]\r\n\r\n[panel header=\"Transferable Recursive Neural Networks for Fine-grained Sentiment Analysis\"]\r\n\r\n<strong>Speaker:<\/strong> Sinno Jialin Pan, Nanyang Technological University\r\n\r\nIn fine-grained sentiment analysis, extracting aspect terms and opinion terms from user-generated text is the most fundamental task for generating structured opinion summarization. Existing studies have shown that syntactic relations between aspect and opinion words play an important role in aspect and opinion term extraction. However, most works either relied on pre-defined rules or separated relation mining from feature learning. Moreover, these works focused only on single-domain extraction, which fails to adapt well to other domains of interest where only unlabeled data is available. In real-world scenarios, annotated resources are extremely scarce for many domains and languages. In this talk, I am going to introduce our recent series of works on transfer learning for cross-domain and cross-language fine-grained sentiment analysis based on recursive neural networks. [\/panel]\r\n\r\n[panel header=\"VL-BERT: Pre-training of Generic Visual-Linguistic Representations\"]\r\n\r\n<strong>Speaker:<\/strong> Yue Cao, Microsoft Research\r\n\r\nWe introduce a new pre-trainable generic representation for visual-linguistic tasks, called Visual-Linguistic BERT (VL-BERT for short). VL-BERT adopts the simple yet powerful Transformer model as the backbone and extends it to take both visual and linguistic embedded features as input. 
In it, each element of the input is either a word from the input sentence or a region-of-interest (RoI) from the input image. It is designed to fit most visual-linguistic downstream tasks. To better exploit the generic representation, we pre-train VL-BERT on the massive-scale Conceptual Captions dataset, together with a text-only corpus. Extensive empirical analysis demonstrates that the pre-training procedure better aligns visual-linguistic clues and benefits downstream tasks such as visual commonsense reasoning, visual question answering, and referring expression comprehension. [\/panel]\r\n\r\n[panel header=\"When Language Meets Vision: Multi-modal NLP with Visual Contents\"]\r\n\r\n<strong>Speaker:<\/strong> Nan Duan, Microsoft Research\r\n\r\nIn this talk, I will introduce our latest work on multi-modal NLP, including (i) multi-modal pre-training, which aims to learn joint representations of language and visual content; (ii) multi-modal reasoning, which aims to handle complex queries by manipulating knowledge extracted from language and visual content; and (iii) video-based QA\/summarization, which aims to make video content readable and searchable. [\/panel]\r\n\r\n[\/accordion]\r\n<h2>Breakout Sessions<\/h2>\r\n[accordion]\r\n\r\n[panel header=\"Adaptive Regret for Online Learning\"]\r\n\r\n<strong>Speaker<\/strong>: Lijun Zhang, Nanjing University\r\n\r\nTo deal with changing environments, a new performance measure, adaptive regret, defined as the maximum static regret over any interval, has been proposed in online learning. Under the setting of online convex optimization, several algorithms have been developed to minimize adaptive regret. However, existing algorithms are problem-independent and lack universality. In this talk, I will briefly introduce our two contributions in this direction. The first one is to establish problem-dependent bounds on adaptive regret by exploiting the smoothness condition. 
The second one is to design a universal algorithm that can handle multiple types of functions simultaneously.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Advances and Challenges on Human-Computer Conversational Systems\"]\r\n\r\n<strong>Speaker<\/strong>: Rui Yan, Peking University\r\n\r\nNowadays, automatic human-computer conversational systems have attracted great attention from both industry and academia. Intelligent products such as XiaoIce (by Microsoft) have been released, and numerous Artificial Intelligence companies have been established. The technology behind conversational systems is accumulating and is gradually becoming open to the public. Through the investigation of researchers, conversational systems are more than science fiction: they have become real. It is interesting to review the recent advances in human-computer conversational systems, especially the significant changes brought by deep learning techniques. It would also be exciting to anticipate future developments and challenges.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"AI and Data: A Closed Loop\"]\r\n\r\n<strong>Speaker<\/strong>: Hongzhi Wang, Harbin Institute of Technology\r\n\r\nData is the foundation of modern Artificial Intelligence (AI). Efficient and effective AI requires the support of data acquisition, governance, management, analytics, and mining, which brings new challenges. From another perspective, advances in AI provide new opportunities to increase the automation of data processing. Thus, AI and data form a closed loop and promote each other. In this talk, the speaker will demonstrate the mutual promotion of AI and data with examples and discuss further opportunities to advance both areas.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Artificial Intelligence for Fashion\"]\r\n\r\n<strong>Speaker<\/strong>: Wen-Huang Cheng, National Chiao Tung University\r\n\r\nThe fashion industry is one of the biggest in the world, representing over 2 percent of global GDP (2018). 
Artificial intelligence (AI) has been a predominant theme in the fashion industry and is impacting every part of it, at scales from personal to industrial and beyond. In recent years, my research group and I have been devoted to advanced AI research that helps revolutionize the fashion industry, enabling innovative applications and services with improved user experience. In this talk, I would like to give an overview of the major outcomes of our research and discuss what research subjects we can further work on together with Microsoft researchers to make a new impact on the fashion domain.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"BERT is not all you need\"]\r\n\r\n<strong>Speaker<\/strong>: Seung-won Hwang, Yonsei University\r\n\r\nThis talk is inspired by a question raised at my talk at the MSRA faculty summit last year, where I presented NLP models in which injecting (diverse forms of) knowledge contributes to meaningfully enhancing accuracy and robustness. Chin-Yew then asked: \u201cDo you think BERT implicitly contains all this information already?\u201d This talk is an extended investigation to support my short answer at the talk. The title is a spoiler.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Big Data, AI and HI, What is Next?\"]\r\n\r\n<strong>Speaker<\/strong>: Lei Chen, Hong Kong University of Science and Technology\r\n\r\nRecently, AI has become quite popular and attractive, not only in academia but also in industry. The success stories of AI in various applications have raised significant public interest in AI. Meanwhile, human intelligence is turning out to be more sophisticated, and Big Data technology is everywhere, improving our quality of life. The question that we all want to ask is \u201cwhat is next?\u201d In this talk, I will discuss DHA, a new computing paradigm which combines big Data, Human intelligence, and AI (DHA). Specifically, I will first briefly explain the motivation for DHA. 
Then I will present the challenges, and after that highlight some possible solutions for building such a new paradigm.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Combinatorial Inference against Label Noise\"]\r\n\r\n<strong>Speaker<\/strong>: Bohyung Han, Seoul National University\r\n\r\nLabel noise is one of the critical sources that significantly degrade the generalization performance of deep neural networks. To handle the label noise issue in a principled way, we propose a unique classification framework that constructs multiple models in heterogeneous coarse-grained meta-class spaces and makes joint inference with the trained models for the final predictions in the original (base) class space. Our approach reduces the noise level by simply constructing meta-classes and improves accuracy via combinatorial inference over multiple constituent classifiers. Since the proposed framework has distinct and complementary properties for the given problem, we can even incorporate additional off-the-shelf learning algorithms to further improve accuracy. We also introduce techniques to organize multiple heterogeneous meta-class sets using k-means clustering and to identify a desirable subset that leads to compact models. Our extensive experiments demonstrate outstanding performance in terms of accuracy and efficiency compared to state-of-the-art methods under various synthetic noise configurations and on a real-world noisy dataset.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Communication-Efficient Geo-Distributed Multi-Task Learning\"]\r\n\r\n<strong>Speaker<\/strong>: Sinno Jialin Pan, Nanyang Technological University\r\n\r\nMulti-task learning aims to learn multiple tasks jointly by exploiting their relatedness to improve the generalization performance of each task. Traditionally, to perform multi-task learning, one needs to centralize data from all the tasks on a single machine. 
However, in many real-world applications, data for different tasks is owned by different organizations and geo-distributed over different local machines. Due to the heavy communication caused by transmitting the data, as well as data privacy and security issues, it is impossible to send the data of different tasks to a master machine to perform multi-task learning. In this talk, we present our recent work on distributed multi-task learning, which jointly learns multiple tasks in the parameter-server paradigm without sharing any training data, and has a theoretical guarantee of convergence to the solution obtained by the corresponding centralized multi-task learning algorithm.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Compact Snapshot Hyperspectral Imaging with Diffracted Rotation\"]\r\n\r\n<strong>Speaker<\/strong>: Min H. Kim, KAIST\r\n\r\nTraditional snapshot hyperspectral imaging systems include various optical elements: a dispersive optical element (prism), a coded aperture, several relay lenses, and an imaging lens, resulting in an impractically large form factor. We seek an alternative, minimal form factor for snapshot spectral imaging based on recent advances in diffractive optical technology. We thereupon present a compact, diffraction-based snapshot hyperspectral imaging method, using only a novel diffractive optical element (DOE) in front of a conventional, bare image sensor. Our diffractive imaging method replaces the common optical elements in hyperspectral imaging with a single optical element. To this end, we tackle two main challenges. First, traditional diffractive lenses are not suitable for color imaging under incoherent illumination due to severe chromatic aberration, because the size of the point spread function (PSF) changes depending on the wavelength. By instead leveraging this wavelength-dependent property for hyperspectral imaging, we introduce a novel DOE design that generates an anisotropic shape of the spectrally-varying PSF. 
The PSF size remains virtually unchanged; instead, the PSF shape rotates as the wavelength of light changes. Second, since there is no dispersive element and no coded aperture mask, the ill-posedness of spectral reconstruction increases significantly. Thus, we propose an end-to-end network solution based on the unrolled architecture of an optimization procedure with a spatial-spectral prior, specifically designed for deconvolution-based spectral reconstruction. Finally, we demonstrate hyperspectral imaging with a fabricated DOE attached to a conventional DSLR sensor. Results show that our method compares well with other state-of-the-art hyperspectral imaging methods in terms of spectral accuracy and spatial resolution, while our compact, diffraction-based spectral imaging method uses only a single optical element on a bare image sensor.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"ContextDM: Context-aware Permanent Data Management Framework for Android\"]\r\n\r\n<strong>Speaker<\/strong>: Jong Kim, Pohang University of Science and Technology (POSTECH)\r\n\r\nData management practices for third-party apps have failed in terms of manageability and security, because modern systems cannot provide fine-grained data management and security due to a lack of understanding of the stored data. As a result, users suffer from storage shortage, data stealing, and data tampering.\r\n\r\nTo tackle this problem, we propose a novel and general data management framework, ContextDM, that sheds light on the storage to help system services and storage aid-apps gain a better understanding of permanent data. Specifically, the framework provides permanent data with metadata that includes contextual semantic information about the importance and sensitivity of the data. 
Further, we show the effectiveness of our framework by demonstrating ContextDM-based aid tools that automatically identify important and useless data, as well as sensitive data that has been disclosed.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Controlling Deep Natural Language Generation Models\"]\r\n\r\n<strong>Speaker<\/strong>: Shou-De Lin, National Taiwan University\r\n\r\nDeep neural network-based solutions have recently shown promising results in natural language generation. From autoencoders to Seq2Seq models to GAN-based solutions, deep learning models can already generate text that passes the Turing test, making the outputs indistinguishable from human-generated ones. However, researchers have pointed out that the content generated by deep neural networks can be fairly unpredictable, meaning that it is non-trivial for humans to control the outputs to be generated. This talk discusses how to control the outputs of an NLG model and demonstrates some of our recent works along this line.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Cross-lingual Visual Grounding and Multimodal Machine Translation\"]\r\n\r\n<strong>Speaker<\/strong>: Chenhui Chu, Osaka University\r\n\r\nIn this talk, we will introduce two of our recent works on multilingual and multimodal processing: cross-lingual visual grounding and multimodal machine translation. Visual grounding is a vision-and-language understanding task that aims at locating a region in an image according to a specific query phrase. We will present our work on cross-lingual visual grounding, which expands the task to different languages. 
In addition, we will introduce our work on multimodal machine translation, which incorporates semantic image regions with both visual and textual attention.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Cryptography-based security solutions for the internet of things\"]\r\n\r\n<strong>Speaker<\/strong>: Atsuko Miyaji, Osaka University\r\n\r\nThe consequences of security failures in the era of the internet of things (IoT) can be catastrophic, as has been demonstrated by a rapidly growing list of IoT security incidents. As a result, people have begun to recognize the importance and value of bringing the highest level of security to IoT. Conventional wisdom has it that, though technologically superior, public-key cryptography (PKC) is too expensive to deploy on IoT devices and networks. In this talk, we present our cost-effective improvement of elliptic curve cryptography (ECC) in terms of memory and computational resources.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Deep Efficient Image (Video) Restoration\"]\r\n\r\n<strong>Speaker<\/strong>: Huanjing Yue, Tianjin University\r\n\r\nIn this talk, I will introduce our team\u2019s work on image (video) denoising and demoir\u00e9ing.\r\n\r\nRealistic noise, which is introduced when capturing images under high-ISO modes or low-light conditions, is more complex than Gaussian noise and therefore difficult to remove. By exploring spatial, channel, and temporal correlations via deep CNNs, we can efficiently remove noise from images and videos. We have constructed two datasets to facilitate research on realistic noise removal for images and videos.\r\n\r\nMoir\u00e9 patterns, caused by aliasing between the grid of the display device and the array of the camera sensor, greatly degrade the visual quality of recaptured screen images. 
Considering that the recaptured screen image and the original screen content usually differ greatly in brightness, we construct a moir\u00e9 removal and brightness improvement (MRBI) database with moir\u00e9-free and moir\u00e9 image pairs to facilitate supervised learning and quantitative evaluation. Correspondingly, we propose a CNN-based moir\u00e9 removal and brightness improvement method. Our work provides a benchmark dataset and a good baseline method for the demoir\u00e9ing task.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Deep Reinforcement Learning for the Transfer from Simulation to the Real World with Uncertainties for AI Curling Robot System\"]\r\n\r\n<strong>Speaker<\/strong>: Seong-Whan Lee, Korea University\r\n\r\nRecently, deep reinforcement learning (DRL) has enabled real-world applications such as robotics. Here we teach a robot to succeed in curling (an Olympic discipline), a highly complex real-world application in which a robot needs to carefully learn to play the game on a slippery ice sheet in order to compete well against human opponents. This scenario encompasses fundamental challenges: uncertainty, non-stationarity, infinite state spaces, and, most importantly, scarce data. One fundamental objective of this study is thus to better understand and model the transfer from simulation to real-world scenarios under uncertainty. We demonstrate our proposed framework and show videos, experiments, and statistics of Curly, our AI curling robot, being tested on a real curling ice sheet. Curly performed well both in classical game situations and when interacting with human opponents.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Development of a 3D endoscopic system with multi-frame, wide-area scanning capabilities\"]\r\n\r\n<strong>Speaker<\/strong>: Ryo Furukawa, Hiroshima City University\r\n\r\nFor effective in-situ endoscopic diagnosis and treatment, or robotic surgery, 3D endoscopic systems have been attracting many researchers. 
We have been developing a 3D endoscopic system based on an active stereo technique, which projects a special pattern wherein each feature is coded. We believe this is a promising approach because of its simplicity and high precision. However, previous works on this approach have had problems. First, the quality of 3D reconstruction depended on the stability of feature extraction from the images captured by the endoscope camera. Second, due to the limited pattern projection area, the reconstructed region was relatively small. In this talk, we describe our work on a learning-based technique using CNNs to solve the first problem, and on an extended bundle adjustment technique, which integrates multiple shapes into a consistent single shape, to address the second. The effectiveness of the proposed techniques compared to previous techniques was evaluated experimentally.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Differential Privacy for Spatial and Temporal Data\"]\r\n\r\n<strong>Speaker<\/strong>: Masatoshi Yoshikawa, Kyoto University\r\n\r\nDifferential Privacy (DP) has received increased attention as a rigorous privacy framework. In this talk, we introduce our recent studies on extending DP to spatial and temporal data. The topics include i) DP mechanisms under temporal correlations in the context of continuous data release; and ii) location privacy for location-based services over road networks.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Dissecting and Accelerating Neural Network via Graph Instrumentation\"]\r\n\r\n<strong>Speaker<\/strong>: Jingwen Leng, Shanghai Jiao Tong University\r\n\r\nDespite the enormous success of deep neural networks, there is still no solid understanding of their working mechanisms. As such, one fundamental question arises: how should architects and system developers perform optimizations centered on DNNs? 
Treating them as a black box leads to efficiency and security issues: 1) DNN models require a fixed computation budget regardless of input; 2) a human-imperceptible perturbation to the input can cause a DNN misclassification. This talk will present our efforts toward addressing those challenges. We recognize an increasing need for monitoring and modifying a DNN\u2019s runtime behavior, as evidenced by our recent work on effective path and by other researchers\u2019 work on network pruning and quantization. As such, we present our ongoing effort to build a graph instrumentation framework that gives programmers a convenient way to achieve those abilities.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Dynamic GPU Memory Management for DNNs\"]\r\n\r\n<strong>Speaker<\/strong>: Yu Zhang, University of Science &amp; Technology of China\r\n\r\nWhile deep learning researchers are seeking deeper and wider nonlinear networks, deploying deep neural network applications on low-end GPU devices for mobile and edge computing is increasingly challenging due to the limited size of GPU DRAM. Existing deep learning frameworks lack effective GPU memory management for different reasons: frameworks with dynamic computation graphs cannot obtain the global computation graph (e.g., PyTorch), while others can impose only limited dynamic GPU memory management strategies on static computation graphs (e.g., TensorFlow). In this talk, I will analyze the state-of-the-art GPU memory management in existing DL frameworks, present the challenges that running deep neural networks on low-end, resource-constrained devices poses for GPU memory management, and finally share our thoughts.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Emotional Speech Synthesis with Granularized Control\"]\r\n\r\n<strong>Speaker: <\/strong>Hong-Goo Kang, Yonsei University\r\n\r\nIn end-to-end deep learning-based emotional text-to-speech (TTS) systems, such as those using Tacotron networks, it is very important to provide additional embedding vectors to flexibly control the distinct characteristics of the target emotion.\r\n\r\nThis talk introduces a couple of methods to effectively estimate representative embedding vectors. Using the mean of the embedding vectors is a simple approach, but the expressiveness of the synthesized speech is not satisfactory. To enhance expressiveness, we need to consider the distribution of the emotion embedding vectors. An inter-to-intra (I2I) distance ratio-based algorithm recently proposed by our research team shows much higher performance than the conventional mean-based one. The I2I algorithm is also useful for gradually changing the intensity of expressiveness. Listening test results verify that the emotional expressiveness and controllability of the I2I algorithm are superior to those of the mean-based one.\r\n\r\nTangible interaction allows a user to interact with a computer using ordinary physical objects. It substantially expands the interaction space owing to the natural affordances and metaphors provided by real objects. However, tangible interaction requires identifying the object held by the user, or how the user is touching the object. The talk also introduces two sensing techniques for tangible interaction, which exploit active sensing using mechanical vibration. A vibration is transmitted from an exciter worn on the user\u2019s hand or fingers, and the transmitted vibration is measured using a sensor. By comparing the input-output pair, we can recognize the object held between two fingers or the fingers touching the object. The mechanical vibrations also provide pleasant confirmation feedback to the user. Details will be shared in the talk. 
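The mean-based representative embedding mentioned above, together with one guess at an I2I-style selection rule, can be sketched as follows. This is illustrative only: the I2I criterion shown here is our assumption about the flavor of an inter-to-intra distance ratio, not the authors' actual algorithm, and the embeddings are random toy data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy emotion-embedding table: 5 embeddings per emotion, 8-dim each (made up).
embeddings = {
    "happy": rng.normal(0.0, 1.0, size=(5, 8)),
    "sad":   rng.normal(3.0, 1.0, size=(5, 8)),
}

def mean_representative(vectors):
    """Simple mean-based representative embedding."""
    return vectors.mean(axis=0)

def i2i_style_representative(vectors, others):
    """Pick the member with the largest inter-to-intra distance ratio.
    (Our illustrative guess at an I2I-style criterion, not the authors' method.)"""
    scores = []
    for v in vectors:
        intra = np.linalg.norm(vectors - v, axis=1).sum() / (len(vectors) - 1)
        inter = np.linalg.norm(others - v, axis=1).mean()
        scores.append(inter / (intra + 1e-9))
    return vectors[int(np.argmax(scores))]

rep_mean = mean_representative(embeddings["happy"])
rep_i2i = i2i_style_representative(embeddings["happy"], embeddings["sad"])
print(rep_mean.shape, rep_i2i.shape)  # (8,) (8,)
```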
[\/panel]\r\n\r\n[panel header=\"Fairness in Recommender Systems\"]\r\n\r\n<strong>Speaker<\/strong>: Min Zhang, Tsinghua University\r\n\r\nRecommender systems play significant roles in our daily life and are expected to be available to any user, regardless of gender, age, or other demographic factors. Recently, there has been growing concern about the bias that can creep into personalization algorithms and produce unfairness issues. In this talk, I will introduce the trending topics and our recent research progress at the THUIR (Tsinghua University Information Retrieval) group on fairness issues in recommender systems, including the causes of unfairness and approaches to handle them. This series of work provides new ideas for building fairness-aware recommender systems and has been published at top-tier international conferences such as SIGIR 2018, WWW 2019, and SIGIR 2019.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"FLUID: Flexible User Interface Distribution for Ubiquitous Multi-device Interaction\"]\r\n\r\n<strong>Speaker<\/strong>: Insik Shin, KAIST\r\n\r\nThe growing trend of multi-device ownership creates a need and an opportunity to use applications across multiple devices. However, in general, current app development and usage still remain within the single-device paradigm, falling far short of user expectations. For example, it is currently not possible for a user to dynamically partition an existing live streaming app with chatting capabilities across different devices, such that she watches her favorite broadcast on her smart TV while real-time chatting on her smartphone. In this talk, we present FLUID, a new Android-based multi-device platform that enables innovative ways of using multiple devices.
FLUID aims to i) allow users to migrate or replicate individual user interfaces (UIs) of a single app on multiple devices (high flexibility), ii) require no additional development effort to support unmodified, legacy applications (ease of development), and iii) support a wide range of apps that follow the trend of using custom-made UIs (wide applicability). FLUID meets these goals by carefully analyzing which UI states are necessary to correctly render UI objects, deploying only those states on different devices, supporting cross-device function calls transparently, and synchronizing the UI states of replicated UI objects across multiple devices. Our evaluation with 20 unmodified, real-world Android apps shows that FLUID can transparently support a wide range of apps and is fast enough for interactive use.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Global Texture Mapping for Dynamic Objects\"]\r\n\r\n<strong>Speaker<\/strong>: Seungyong Lee, Pohang University of Science and Technology (POSTECH)\r\n\r\nIn this talk, I will introduce a novel framework to generate a global texture atlas for a deforming geometry. Our approach is distinguished from prior art in two aspects. First, instead of generating a texture map for each timestamp to color a dynamic scene, our framework reconstructs a global texture atlas that can be consistently mapped to a deforming object. Second, our approach is based on a single RGB-D camera, without the need for a multi-camera setup surrounding a scene. In our framework, the input is a 3D template model with an RGB-D image sequence, and geometric warping fields are found using a state-of-the-art non-rigid registration method to align the template mesh to noisy and incomplete input depth images. With these warping fields, our multi-scale approach for texture coordinate optimization generates a sharp and clear texture atlas that is consistent with multiple color observations over time.
Our approach provides a handy configuration to capture a dynamic geometry along with a clean texture atlas, and we demonstrate it with practical scenarios, particularly human performance capture.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Gradient Descent Finds Global Minima of Deep Neural Networks\"]\r\n\r\n<strong>Speaker<\/strong>: Liwei Wang, Peking University\r\n\r\nGradient descent finds a global minimum in training deep neural networks despite the objective function being non-convex. This work proves that gradient descent achieves zero training loss in polynomial time for a deep over-parameterized neural network with residual connections (ResNet). Our analysis relies on the particular structure of the Gram matrix induced by the neural network architecture. This structure allows us to show that the Gram matrix is stable throughout the training process, and this stability implies the global optimality of the gradient descent algorithm. We further extend our analysis to deep residual convolutional neural networks and obtain a similar convergence result.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Graph-based Action Assessment\"]\r\n\r\n<strong>Speaker<\/strong>: Wei-Shi Zheng, Sun Yat-sen University\r\n\r\nWe present a new model to assess the performance of actions visually from videos by graph-based joint relation modelling. Previous works mainly focused on the whole scene, including the performer's body and background, yet ignored the detailed joint interactions. This is insufficient for fine-grained and accurate action assessment, because the action quality of each joint depends on its neighboring joints. Therefore, we propose to learn the detailed joint motion based on the joint relations. We build trainable Joint Relation Graphs and analyze joint motion on them. We propose two novel modules, namely the Joint Commonality Module and the Joint Difference Module, for joint motion learning.
The Joint Commonality Module models the general motion for certain body parts, and the Joint Difference Module models the motion differences within body parts. We evaluate our method on six public Olympic actions for performance assessment. Our method outperforms previous approaches (+0.0912) and the whole-scene model (+0.0623) in terms of Spearman's Rank Correlation. We also demonstrate our model's ability to interpret the action assessment process.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Intelligent Action Analytics with Multi-Modal Reasoning\"]\r\n\r\n<strong>Speaker<\/strong>: Jiaying Liu, Peking University\r\n\r\nIn this talk, we focus on intelligent action analytics in videos with multi-modal reasoning, which is important but remains underexplored. We first present the challenges in this problem by introducing the PKU-MMD dataset collected by ourselves: multi-modal complementary feature learning, noise-robust feature learning, dealing with tedious label annotation, etc. To tackle these issues, we propose initial solutions with multi-modal reasoning. A modality compensation network is proposed to explicitly explore the relationship of different modalities and further boost multi-modal feature learning. A noise-invariant network is developed to recognize human actions from noisy skeletons by referring to denoised skeletons. To inspire the community, we conclude with possible future work, such as self-supervised learning and language-guided reasoning.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Kafe: can OS kernel handle packets fast enough\"]\r\n\r\n<strong>Speaker<\/strong>: Chuck Yoo, Korea University\r\n\r\nIt is widely believed that commodity operating systems cannot deliver high-speed packet processing, and a number of alternative approaches (including user-space network stacks) have been proposed.
This talk revisits the inefficiency of packet processing inside the kernel and explores whether a redesign of kernel network stacks can remedy it. We present a case through a redesign: Kafe \u2013 a kernel-based advanced forwarding engine. Contrary to that belief, Kafe can process packets as fast as user-space network stacks. Kafe neither adds any new API nor depends on proprietary hardware features.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Learning Multi-label Feature for Fine-Grained Food Recognition\"]\r\n\r\n<strong>Speaker<\/strong>: Xueming Qian, Xi'an Jiaotong University\r\n\r\nFine-grained food recognition is a detailed classification task that provides more specialized and professional attribute information about food. It is foundational work for healthy diet recommendation, cooking instruction, nutrition intake management, and cafeteria self-checkout systems. Chinese food appearance lacks structured information, so ingredient composition is an important consideration. We propose a new method for fine-grained food and ingredient recognition that includes an Attention Fusion Network (AFN) and Food-Ingredient Joint Learning. The AFN focuses on important regional attention features and generates a feature descriptor. In Food-Ingredient Joint Learning, we propose a balanced focal loss to address the imbalance of the ingredient multi-labels. Finally, a series of experiments shows significant improvements over existing methods.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Learning to Appreciate: Transforming Multimedia Communications via Deep Video Analytics\"]\r\n\r\n<strong>Speaker<\/strong>: Yonggang Wen, Nanyang Technological University\r\n\r\nMedia-rich applications will continue to dominate mobile data traffic with exponential growth, as predicted by the Cisco Video Index. Improved quality of experience (QoE) for video consumers plays an important role in shaping this growth.
However, most existing approaches to improving video QoE are system-centric and model-based, in that they tend to derive insights from system parameters (e.g., bandwidth, buffer time, etc.) and propose various mathematical models to predict QoE scores (e.g., mean opinion score, etc.). In this talk, we will share our latest work in developing a unified and scalable framework to transform multimedia communications via deep video analytics. Specifically, our framework consists of two main components. One is a deep-learning-based QoE prediction algorithm that combines multi-modal data inputs to provide a more accurate assessment of QoE in real time. The other is a model-free QoE optimization paradigm built upon a deep reinforcement learning algorithm. Our preliminary results verify the effectiveness of the proposed framework. We believe that this hybrid approach of multimedia communications and computing will fundamentally transform how we optimize multimedia communications system design and operations.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Lensless Imaging for Biomedical Applications\"]\r\n\r\n<strong>Speaker<\/strong>: Seung Ah Lee, Yonsei University\r\n\r\nMiniaturization of microscopes can be a crucial stepping stone towards realizing compact, cost-effective and portable platforms for biomedical research and healthcare.
This talk reports on implementations of lensless microscopes and lensless cameras for a variety of biological imaging applications in the form of mass-producible semiconductor devices, which transform the fundamental design of optical imaging systems.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Leveraging Generative Adversarial Networks for Data Augmentation by Disentangling Class-Independent Features\"]\r\n\r\n<strong>Speaker<\/strong>: Jaegul Choo, Korea University\r\n\r\nConsidering their success in generating high-quality, realistic data, generative adversarial networks (GANs) have the potential to be used for data augmentation to improve prediction accuracy in diverse problems where only a limited amount of training data is available. However, GANs themselves require a nontrivial amount of data for their training, so data augmentation via GANs often does not improve the accuracy in practice. This talk will briefly review existing literature and our ongoing approach based on feature disentanglement. I will conclude the talk with further research issues that I would like to address in the future.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Manipulatable Auditory Perception in Wearable Computing\"]\r\n\r\n<strong>Speaker<\/strong>: Hiroki Watanabe, Hokkaido University\r\n\r\nSince auditory perception is a passive sense, we often do not notice important information and acquire unimportant information. We focused on an earphone-type wearable computer (hearable device) that has not only speakers but also microphones. In a hearable computing environment, microphones and speakers are always attached to the ears. Therefore, we can manipulate our auditory perception using a hearable device. We manipulated the frequency of the input sound from the microphones and transmitted the converted sound from the speakers.
Thus, we could acquire sound that is not heard with our normal auditory perception and eliminate unwanted sound according to the user\u2019s requirements.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Model Centric DevOps for Network Functions\"]\r\n\r\n<strong>Speaker<\/strong>: Wenfei Wu, Tsinghua University\r\n\r\nNetwork Functions (NFs) play important roles in improving performance and enhancing security in modern computer networks. More and more NFs are being developed, integrated, and managed in production networks. However, the connection between the development and the operation of network functions has not drawn attention yet, which slows down the development and delivery of NFs and complicates NF network management.\r\n\r\nWe propose that building a common abstraction layer for network functions would benefit both development and operation. For NF development, having a uniform abstraction layer to describe NF behaviors would make cross-platform development rapid and agile, accelerating NF delivery for NF vendors; we will introduce our recent NF development framework based on language and compiler technologies. For NF operation, having a behavior model would ease network reasoning, which can avoid runtime bugs, and, more crucially, the behavior model is guaranteed to reflect the actual implementation; we will introduce our NF verification work based on the NF modeling language. Around our model-centric NF development and operation, we will also present other NF modeling work that lays the foundation of the NF modeling language and fills in the semantic gap between legacy NFs and NF models.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"NAT: Neural Architecture Transformer for Accurate and Compact Architectures\"]\r\n\r\n<strong>Speaker<\/strong>: Mingkui Tan, South China University of Technology\r\n\r\nArchitecture design is one of the key factors behind the success of deep neural networks.
Existing deep architectures are either manually designed or automatically searched by some Neural Architecture Search (NAS) methods. However, even a well-searched architecture may still contain many non-significant or redundant modules or operations (e.g., convolution or pooling), which not only incur substantial memory consumption and computational cost but may also deteriorate the performance. Thus, it is necessary to optimize the operations inside the architecture to improve the performance without introducing extra computational cost. However, such a constrained optimization problem is NP-hard and very difficult to solve. To address this problem, we cast the optimization problem into a Markov decision process (MDP) and learn a Neural Architecture Transformer (NAT) to replace the redundant operations with more computationally efficient ones (e.g., skip connections or directly removing the connection). In the MDP, we train NAT with reinforcement learning to obtain the architecture optimization policies w.r.t. different architectures. To verify the effectiveness of the proposed method, we apply NAT to both hand-crafted architectures and NAS-based architectures. Extensive experiments on two benchmark datasets, i.e., CIFAR-10 and ImageNet, show that the transformed architecture significantly outperforms both the original architecture and the architectures optimized by existing methods.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Novelty-aware exploration in RL and Conditional GANs for diversity\"]\r\n\r\n<strong>Speaker<\/strong>: Gunhee Kim, Seoul National University\r\n\r\nIn this talk, I will introduce two recent works on machine learning from the Vision and Learning Lab of Seoul National University. First, we present our work in reinforcement learning. We introduce an information-theoretic exploration strategy named Curiosity-Bottleneck (CB) that distills task-relevant information from observation.
In our experiments, we observe that the CB algorithm robustly measures state novelty in distractive environments where state-of-the-art exploration methods often degenerate. Second, we propose novel training schemes with a new set of losses that can prevent conditional GANs from losing the diversity in their outputs. We perform thorough experiments on image-to-image translation, super-resolution and image inpainting, and show that our methods achieve great diversity in outputs while retaining or even improving the visual fidelity of generated samples.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Numerical\/quantitative system for common sense natural language processing\"]\r\n\r\n<strong>Speaker<\/strong>: Hiroaki Yamane, RIKEN AIP &amp; The University of Tokyo\r\n\r\nNumerical common sense (e.g., \u201ca person with a height of 2m is very tall\u201d) is essential when deploying artificial intelligence (AI) systems in society. We construct methods for converting contextual language to numerical variables for quantitative\/numerical common sense in natural language processing (NLP).\r\n\r\nWe are living in a world where we need common sense. We use some common sense when observing objects: a 165 cm human cannot be bigger than a 1 km bridge. The weight of the aforementioned human ranges from 40 kg to 90 kg. If one\u2019s weight is less than 50 kg, they are more likely to be very thin. This can also be applied to money. If the latest Surface Pro is $500, it is quite cheap. There is a necessity to account for common sense in future AI systems.\r\n\r\nTo address this problem, we first use a crowdsourcing service to obtain sufficient data for a subjective agreement on numerical common sense.
Second, to examine whether common sense is captured by current word embeddings, we evaluated the performance of a regressor trained on the obtained data.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Paraphrasing and Simplification with Lean Vocabulary\"]\r\n\r\n<strong>Speaker<\/strong>: Tadashi Nomoto, The SOKENDAI Graduate School of Advanced Studies\r\n\r\nIn this work, we examine whether it is possible to achieve state-of-the-art performance in paraphrase generation with a reduced vocabulary. Our approach consists of building a convolution-to-sequence model (Conv2Seq) partially guided by reinforcement learning, and training it on the sub-word representation of the input. The experiment on the Quora dataset, which contains over 140,000 pairs of sentences and corresponding paraphrases, found that with fewer than 1,000 token types, we were able to achieve performance exceeding the current state of the art. We also report that the same architecture works equally well for text simplification, with little change.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Ray-SSL: Ray Tracing based Sound Source Localization considering Reflection and Diffraction\"]\r\n\r\n<strong>Speaker<\/strong>: Sung-eui Yoon, KAIST\r\n\r\nIn this talk, we discuss a novel, ray tracing based technique for 3D sound source localization for indoor and outdoor environments. Unlike prior approaches, which are mainly based on continuous sound signals from a stationary source, our formulation is designed to localize the position instantaneously from signals within a single frame. We consider direct sound and indirect sound signals that reach the microphones after reflecting off surfaces such as ceilings or walls. We then generate and trace direct and reflected acoustic paths using backward acoustic ray tracing and utilize these paths with Monte Carlo localization to estimate a 3D sound source position.
For complex cases with many objects, we also found that diffraction effects caused by the wave characteristics of sound become dominant. We propose to handle such non-trivial problems even with ray tracing, since directly applying wave simulation is prohibitively expensive.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Recent Advances and Trends in Visual Tracking\"]\r\n\r\n<strong>Speaker<\/strong>: Tianzhu Zhang, University of Science and Technology of China\r\n\r\nVisual tracking is one of the most fundamental topics in computer vision, with various applications in video surveillance, human-computer interaction and vehicle navigation. Although great progress has been made in recent years, it remains a challenging problem due to factors such as illumination changes, geometric deformations, partial occlusions, fast motion and background clutter. In this talk, I will first review several recent models of visual tracking, including particle filtering, classifier learning for tracking, sparse tracking, deep learning tracking, and correlation filter based tracking. Then, I will review several recent works of our group, including correlation particle filter tracking and graph convolutional tracking.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Relational Knowledge Distillation\"]\r\n\r\n<strong>Speaker<\/strong>: Minsu Cho, Pohang University of Science and Technology (POSTECH)\r\n\r\nKnowledge distillation aims at transferring knowledge acquired in one model (a teacher) to another model (a student) that is typically smaller. Previous approaches can be expressed as a form of training the student to mimic output activations of individual data examples represented by the teacher. We introduce a novel approach, dubbed relational knowledge distillation (RKD), that transfers mutual relations of data examples instead. For concrete realizations of RKD, we propose distance-wise and angle-wise distillation losses that penalize structural differences in relations.
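As a rough illustration of the distance-wise idea (a minimal sketch, not the authors' code; the RKD paper uses a Huber penalty, simplified here to L1, and the function name is assumed):

```python
import numpy as np

def rkd_distance_loss(teacher_emb, student_emb):
    """Distance-wise relational KD loss (illustrative sketch): match the
    pairwise-distance structure of student embeddings to the teacher's."""
    def normalized_pairwise(X):
        X = np.asarray(X, dtype=float)
        # All pairwise Euclidean distances between embeddings.
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        mu = d[d > 0].mean()  # normalize by the mean distance
        return d / mu
    t = normalized_pairwise(teacher_emb)
    s = normalized_pairwise(student_emb)
    # Penalize structural (relational) differences; Huber in the paper, L1 here.
    return float(np.mean(np.abs(t - s)))
```

Because the distance matrices are scale-normalized, a student whose embedding space is a uniformly scaled copy of the teacher's incurs zero relational loss, which is exactly the structure-matching behavior described above.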
Experiments conducted on different tasks show that the proposed method improves student models by a significant margin. In particular for metric learning, it allows students to outperform their teachers' performance, achieving the state of the art on standard benchmark datasets.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Requirements of Computer Vision for Household Robots\"]\r\n\r\n<strong>Speaker<\/strong>: Jun Takamatsu, Nara Institute of Science and Technology\r\n\r\nFor household robots that work in everyday-life dynamic environments, computer vision (CV) for recognizing the environment is essential. Unfortunately, CV issues in household robots sometimes cannot be solved by the methods typically proposed in the CV field. In this talk, I will present two examples and invite discussion of their solutions. The first example is CV in learning-from-observation, where it is not enough to recognize the names of actions, such as walk and jump. The second example is the analysis of usage of time. This requires recognizing activities at the level of, for example, watching TV or pursuing one\u2019s hobby.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Software and Hardware Co-design for Networked Memory\"]\r\n\r\n<strong>Speaker<\/strong>: Youyou Lu, Tsinghua University\r\n\r\nNon-volatile memory (NVM) and remote direct memory access (RDMA) provide extremely high performance in storage and network hardware. Comparatively, the software overhead of file systems becomes a non-negligible part of persistent memory storage systems. Toward an efficient networked memory design, I will present the design choices of Octopus. Octopus is a distributed file system that redesigns file system internal mechanisms by closely coupling NVM and RDMA features.
I will further discuss possible hardware enhancements for networked memory being explored in my group.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"System support for designing efficient gradient compression algorithms for distributed DNN training\"]\r\n\r\n<strong>Speaker<\/strong>: Cheng Li, University of Science and Technology of China\r\n\r\nTraining DNN models across a large number of connected devices or machines has become the norm. Studies suggest that the major bottleneck in scaling out training jobs is exchanging the huge amount of gradients per mini-batch. Thus, a few compression algorithms, such as Deep Gradient Compression and TernGrad, have been proposed and evaluated to demonstrate their benefit of reducing the transmission cost. However, when re-implementing these algorithms and integrating them into mainstream frameworks such as MXNet, we found that they performed less efficiently than claimed in their original papers. The major gap is that the developers of those algorithms did not necessarily understand the internals of the deep learning frameworks. As a consequence, we believe that there is a lack of system support for enabling algorithm developers to focus primarily on the innovations of the compression algorithms, rather than on efficient implementations that may take into account various levels of parallelism.
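For context, a generic top-k gradient sparsification sketch illustrates the kind of transmission-cost reduction such compression algorithms target (an illustrative sketch under assumed names, not the specific algorithms or system described in this abstract):

```python
import numpy as np

def topk_compress(grad, ratio=0.01):
    """Keep only the largest-magnitude `ratio` fraction of gradient entries
    (generic top-k sparsification sketch); return (indices, values)."""
    g = np.asarray(grad, dtype=float).ravel()
    k = max(1, int(len(g) * ratio))
    idx = np.argpartition(np.abs(g), -k)[-k:]  # indices of the top-k magnitudes
    return idx, g[idx]

def topk_decompress(idx, vals, size):
    """Reconstruct a dense gradient, with zeros for the dropped entries."""
    g = np.zeros(size)
    g[idx] = vals
    return g
```

Only the (index, value) pairs need to cross the network, so at a 1% ratio the per-mini-batch traffic shrinks by roughly two orders of magnitude; making such kernels efficient on GPUs is exactly the implementation burden discussed above.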
To this end, we propose a domain-specific language that allows algorithm developers to sketch their compression algorithms, a translator that converts the high-level descriptions into low-level, highly optimized GPU code, and a compiler that generates new computation DAGs that fuse the compression algorithms with the proper operators that produce gradients.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Towards solving the cocktail party problem: from speech separation to speech recognition\"]\r\n\r\n<strong>Speaker<\/strong>: Jun Du, University of Science and Technology of China\r\n\r\nSolving the cocktail party problem is an ultimate goal for machines to achieve human-level auditory perception. Speech separation and recognition are two related key techniques. With the emergence of deep learning, new milestones have been achieved in both speech separation and recognition. In this talk, I will introduce our recent progress and future trends in these areas with the development of the DIHARD and CHiME Challenges.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Toward Ubiquitous Operating Systems: Challenges and Research Directions\"]\r\n\r\n<strong>Speaker<\/strong>: Yao Guo, Peking University\r\n\r\nIn recent years, operating systems have expanded beyond traditional computing systems into the cloud, IoT devices, and other emerging technologies, and will soon become ubiquitous. We call this new generation of OSs ubiquitous operating systems (UOSs). Despite the apparent differences among existing OSs, they all have in common so-called \u201csoftware-defined\u201d capabilities\u2014namely, resource virtualization and function programmability.
In this talk, I will present our vision and some recent work toward the development of ubiquitous operating systems (UOSs).\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Vibration-Mediated Sensing Techniques for Tangible Interaction\"]\r\n\r\n<strong>Speaker: <\/strong>Seungmoon Choi, Pohang University of Science and Technology (POSTECH)\r\n\r\nTangible interaction allows a user to interact with a computer using ordinary physical objects. It substantially expands the interaction space owing to the natural affordances and metaphors provided by real objects. However, tangible interaction requires identifying the object held by the user or how the user is touching the object. In this talk, I will introduce two sensing techniques for tangible interaction, which exploit active sensing using mechanical vibration. A vibration is transmitted from an exciter worn on the user\u2019s hand or fingers, and the transmitted vibration is measured using a sensor. By comparing the input-output pair, we can recognize the object held between two fingers or the fingers touching the object. The mechanical vibrations also provide pleasant confirmation feedback to the user. Details will be shared in the talk.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Video Analytics in Crowded Spaces\"]\r\n\r\n<strong>Speaker<\/strong>: Rajesh Krishna Balan, Singapore Management University\r\n\r\nI will describe the line of work I am starting on video analytics in crowded spaces, including malls, conference centres, and university campuses in Asia.
The goal of this work is to use video analytics, combined with other sensors, to accurately count the number of people in these environments, track their movement trajectories, and discover their demographics and personas.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Video Dialog via Progressive Inference and Cross-Transformer\"]\r\n\r\n<strong>Speaker<\/strong>: Zhou Zhao, Zhejiang University\r\n\r\nVideo dialog is a new and challenging task, which requires the agent to answer questions by combining video information with dialog history. Different from single-turn video question answering, the additional dialog history is important for video dialog, as it often includes contextual information for the question. Existing visual dialog methods mainly use RNNs to encode the dialog history as a single vector representation, which can be coarse and simplistic. Some more advanced methods utilize hierarchical structure, attention and memory mechanisms, but still lack an explicit reasoning process. In this talk, we introduce a novel progressive inference mechanism for video dialog, which progressively updates query information based on dialog history and video content until the agent thinks the information is sufficient and unambiguous. To tackle the multimodal fusion problem, we propose a cross-transformer module, which can learn more fine-grained and comprehensive interactions both inside and between the modalities. Besides answer generation, we also consider question generation, which is more challenging but significant for a complete video dialog system. We evaluate our method on two large-scale datasets, and extensive experiments show the effectiveness of our method.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Visual Analytics of Sports Data\"]\r\n\r\n<strong>Speaker<\/strong>: Yingcai Wu, Zhejiang University\r\n\r\nWith the rapid development of sensing technologies and wearable devices, large amounts of sports data are acquired daily.
The data usually imply a wide spectrum of information and rich knowledge about sports. Visual analytics, which facilitates analytical reasoning through interactive visual interfaces, has proven its value in solving various problems. In this talk, I will discuss our research experiences in visual analytics of sports data and introduce several recent studies by our group on making sense of sports data through interactive visualization.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Visual Analytics for Data Quality Improvement\"]\r\n\r\n<strong>Speaker<\/strong>: Shixia Liu, Tsinghua University\r\n\r\nThe quality of training data is crucial to the success of supervised and semi-supervised learning. Errors in data have long been known to limit the performance of machine learning models. This talk presents the motivation and major challenges of interactive data quality analysis and improvement. With that perspective, I will then discuss some of my recent efforts on 1) analyzing and correcting poor label quality, and 2) resolving the poor coverage of the training data caused by dataset bias.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"VIS+AI: Making AI more Explainable and VIS more Intelligent\"]\r\n\r\n<strong>Speaker<\/strong>: Huamin Qu, Hong Kong University of Science and Technology\r\n\r\nVIS for AI and AI for VIS have become hot research topics recently. On one side, visualization plays an important role in explainable AI. On the other, AI has been transforming the visualization field and automating the whole visualization system development pipeline.
In this talk, I will introduce the emerging opportunities of combining AI and VIS to leverage both human intelligence and artificial intelligence to solve some grand challenges facing both fields and society.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"What We Learned from Medical Image Learning\"]\r\n\r\n<strong>Speaker<\/strong>: Winston Hsu, National Taiwan University\r\n\r\nWe have observed super-human capabilities from convolutional networks for image learning. It is a natural extension to advance these technologies towards healthcare applications such as medical image segmentation (CT, MRI), registration, detection, prediction, etc. In the past few years, working closely with university hospitals, we have found many exciting developments in this area. However, we have also learned a lot working in a cross-disciplinary setup, which requires strong devotion and deep expertise from both the medical and machine learning domains. We\u2019d like to take this opportunity to share our failures and successes in a few attempts at advancing machine learning for medical applications. We will identify promising working models (and the misunderstandings between these two disciplines) derived with medical experts, and show the great opportunities to discover new treatment or diagnosis methods across numerous common diseases.\r\n\r\n[\/panel]\r\n\r\n[\/accordion]"},{"id":3,"name":"Speakers","content":"<h2>Workshops<\/h2>\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Rajesh-Krishna-Balan.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Rajesh Krishna Balan<\/strong>\r\n\r\nSingapore Management University\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nProf. Balan is an ACM Distinguished Scientist and has worked in the area of mobile systems for over 18 years. He obtained his Ph.D.
in Computer Science in 2006 from Carnegie Mellon University under the guidance of Professor Mahadev Satyanarayanan. He has been a general chair for both MobiSys 2016 and UbiComp 2018 and has served as a program chair for HotMobile 2012 and MobiSys 2019. In addition, he also organised a student workshop, called ASSET, that ran at MobiCom 2019, COMSNETS 2018, and MobiSys 2016. Prof. Balan has a strong interest in applied research and was a director for LiveLabs (http:\/\/www.livelabs.smu.edu.sg), a large research \/ startup lab that turned real-world environments (such as a university, a convention centre, and a resort island) into living testbeds for mobile systems experiments. He founded a startup to more effectively provide LiveLabs technologies to interested commercial clients. These experiences have given Prof. Balan great insight into how hard and meaningful it is to translate research into tangible systems that are tested and deployed in the real world.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Ting-Cao.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Ting Cao<\/strong>\r\n\r\nMicrosoft Research\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nTing Cao is a Researcher in the System Research Group of MSRA. Her research interests include HW\/SW co-design, high-level language implementation, software management of heterogeneous hardware, and big data and deep learning frameworks. She has reputable publications in ISCA, ASPLOS, PLDI, Proceedings of the IEEE, etc. She received her PhD from the Australian National University. 
Before joining MSRA, she was a senior software engineer in the Compiler and Computing Language Lab at Huawei Technologies.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Yue-Cao.png\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Yue Cao<\/strong>\r\n\r\nMicrosoft Research\r\n\r\n[accordion]\r\n\r\n[panel header=\"Bio\"]\r\n\r\nYue Cao is a researcher at Microsoft Research Asia. He received the B.E. degree in Computer Software in 2014 and the Ph.D. degree in Software Engineering in 2019, both from Tsinghua University, China. He was awarded the Top-grade Scholarship of Tsinghua University in 2018, and the Microsoft Research Asia PhD Fellowship in 2017. His research interests include computer vision and deep learning. He has published more than 20 papers in top-tier conferences with more than 1,700 citations.\r\n\r\n[\/panel]\r\n\r\n[\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Xilin-Chen.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Xilin Chen<\/strong>\r\n\r\nChinese Academy of Sciences\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nXilin Chen is a professor with the Institute of Computing Technology, Chinese Academy of Sciences (CAS). He has authored one book and more than 300 papers in refereed journals and proceedings in the areas of computer vision, pattern recognition, image processing, and multimodal interfaces. 
He is currently an associate editor of the IEEE Transactions on Multimedia, a Senior Editor of the Journal of Visual Communication and Image Representation, a leading editor of the Journal of Computer Science and Technology, and an associate editor-in-chief of the Chinese Journal of Computers and the Chinese Journal of Pattern Recognition and Artificial Intelligence. He served as an Organizing Committee member for many conferences, including as general co-chair of FG13 \/ FG18 and program co-chair of ICMI 2010. He is \/ was an area chair of CVPR 2017 \/ 2019 \/ 2020, and ICCV 2019. He is a fellow of the IEEE, IAPR, and CCF.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Peng-Cheng.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Peng Cheng<\/strong>\r\n\r\nMicrosoft Research\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nPeng Cheng is a researcher in the Networking Research Group, MSRA. His research interests are computer networking and networked systems. His recent work focuses on hardware-based systems in data centers. He has publications in NSDI, CoNEXT, EuroSys, SIGCOMM, etc. He received his Ph.D. in Computer Science and Technology from Tsinghua University in 2015.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jaegul-Choo.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Jaegul Choo<\/strong>\r\n\r\nKorea University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nJaegul Choo (https:\/\/sites.google.com\/site\/jaegulchoo\/ ) is an associate professor in the Dept. of Computer Science and Engineering at Korea University. 
He was a research scientist at Georgia Tech from 2011 to 2015, where he also received his M.S. in 2009 and Ph.D. in 2013. His research areas include computer vision, natural language processing, data mining, and visual analytics, and his work has been published in premier venues such as KDD, WWW, WSDM, CVPR, ECCV, EMNLP, AAAI, IJCAI, ICDM, ICWSM, IEEE VIS, EuroVIS, CHI, TVCG, CFG, and CG&amp;A. He earned the Best Student Paper Award at ICDM in 2016, the NAVER Young Faculty Award in 2015, the Outstanding Research Scientist Award at Georgia Tech in 2015, and the Best Poster Award at IEEE VAST (as part of IEEE VIS) in 2014.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Nan-Duan.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Nan Duan<\/strong>\r\n\r\nMicrosoft Research\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nDr. Nan Duan is a Principal Research Manager at Microsoft Research Asia. He is working on fundamental NLP tasks, especially on question answering, natural language understanding, language + vision, pre-training and reasoning.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Winston-HSU.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Winston Hsu<\/strong>\r\n\r\nNational Taiwan University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nProf. Winston Hsu is an active researcher dedicated to large-scale image\/video retrieval\/mining, visual recognition, and machine intelligence. He is a Professor in the Department of Computer Science and Information Engineering, National Taiwan University. 
He and his team have been recognized with technical awards in the multimedia and computer vision research communities, including the IBM Research Pat Goldberg Memorial Best Paper Award (2018), the Best Brave New Idea Paper Award at ACM Multimedia 2017, First Place in the IARPA Disguised Faces in the Wild Competition (CVPR 2018), First Prize in the ACM Multimedia Grand Challenge 2011, the ACM Multimedia 2013\/2014 Grand Challenge Multimodal Award, etc. Prof. Hsu is keen on turning advanced research into business deliverables via academia-industry collaborations and co-founding startups. He was a Visiting Scientist at Microsoft Research Redmond (2014) and had a 1-year sabbatical leave (2016-2017) at IBM TJ Watson Research Center. He served as an Associate Editor for IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) and IEEE Transactions on Multimedia, two premier journals, and was on the Editorial Board for IEEE Multimedia Magazine (2010 \u2013 2017).\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Sung-Ju-Hwang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Sung Ju Hwang<\/strong>\r\n\r\nKAIST\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nSung Ju Hwang is an assistant professor in the Graduate School of Artificial Intelligence and School of Computing at KAIST. He received his Ph.D. degree in computer science from the University of Texas at Austin, under the supervision of Professor Kristen Grauman. Sung Ju Hwang's research interest mainly lies in developing machine learning models for tackling practical challenges in various application domains, including but not limited to visual recognition, natural language understanding, healthcare, and finance. 
He regularly presents papers at various top-tier AI conferences, such as NeurIPS, ICML, ICLR, CVPR, ICCV, AAAI, and ACL.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Guolin-Ke.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Guolin Ke<\/strong>\r\n\r\nMicrosoft Research\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nGuolin Ke is currently a Researcher in the Machine Learning Group, Microsoft Research Asia. His research interests mainly lie in machine learning algorithms.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Gunhee-Kim.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Gunhee Kim<\/strong>\r\n\r\nSeoul National University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nGunhee Kim has been an associate professor in the Department of Computer Science and Engineering of Seoul National University since 2015. He was a postdoctoral researcher at Disney Research for one and a half years. He received his PhD in 2013 under the supervision of Eric P. Xing from the Computer Science Department of Carnegie Mellon University. Prior to starting his PhD studies in 2009, he earned a master\u2019s degree under the supervision of Martial Hebert at the Robotics Institute, CMU. His research interests are solving computer vision and web mining problems that emerge from big image data shared online, by developing scalable and effective machine learning and optimization techniques. 
He is a recipient of the 2014 ACM SIGKDD Doctoral Dissertation Award and the 2015 Naver New Faculty Award.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2016\/07\/avatar_user__1469100866-180x180.png\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Shujie Liu<\/strong>\r\n\r\nMicrosoft Research\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nDr. Shujie Liu is a Principal Researcher in the Natural Language Computing group at Microsoft Research Asia, Beijing, China. Shujie joined MSRA-NLC in Jul. 2012 after he received his Ph.D. in Jun. 2012 from the Department of Computer Science of Harbin Institute of Technology.\r\n\r\nShujie\u2019s research interests include natural language processing and deep learning. He is now working on fundamental NLP problems, models, algorithms and innovations.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Xuanzhe-Liu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Xuanzhe Liu<\/strong>\r\n\r\nPeking University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nProf. Xuanzhe Liu has been an associate professor with the Institute of Software, Peking University, since 2011. He leads the DAAS (Data, Analytics, Applications, and Systems) lab at Peking University. Prof. Liu\u2019s recent research interests are focused on measuring, engineering, and operating large-scale service-based and intelligent software systems (such as mobility and Web), mostly from a data-driven perspective. Prof. Liu has published more than 80 papers on premier conferences such as WWW, ICSE, OOPSLA, MobiCom, UbiComp, EuroSys, and IMC, and impactful journals such as ACM TOIS\/TOIT and IEEE TSE\/TMC\/TSC. 
He won the Best Paper Award of WWW 2019. He was also recognized with several academic awards, such as the CCF-IEEE CS Young Scientist Award, the Honorable Young Faculty Award of the Yangtze River Scholar Program, and so on. Prof. Liu was a visiting researcher with Microsoft Research (with the \"Star-Track Young Faculty Program\") from 2013 to 2014, and the winner of a Microsoft Ph.D. Fellowship in 2007.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jiwen-Lu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Jiwen Lu<\/strong>\r\n\r\nTsinghua University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nJiwen Lu is currently an Associate Professor with the Department of Automation, Tsinghua University, China. His current research interests include computer vision, machine learning, and intelligent robotics. He has authored\/co-authored over 200 scientific papers in these areas, of which over 70 are IEEE Transactions papers and over 50 are CVPR\/ICCV\/ECCV papers. He was a recipient of the National 1000 Young Talents Program of China in 2015, and the National Science Fund of China Award for Excellent Young Scholars in 2018. He serves as the Co-Editor-in-Chief for PR Letters and an Associate Editor for T-IP\/T-CSVT\/T-BIOM\/PR. He is the Program Co-Chair of ICME\u20192020, AVSS\u20192020 and DICTA\u20192019, and an Area Chair for CVPR\u20192020, ICME\u20192017-2019, ICIP\u20192017-2019, and ICPR 2018.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Chong-Luo.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Chong Luo<\/strong>\r\n\r\nMicrosoft Research\r\n\r\n[accordion] [panel header=\"Bio\"] Dr. 
Chong Luo joined Microsoft Research Asia in 2003 and is now a Principal Researcher at the Intelligent Multimedia Group (IMG). She is an adjunct professor and a Ph.D. advisor at the University of Science and Technology of China (USTC), China. Her current research interests include computer vision, cross-modality multimedia analysis and processing, and multimedia communications. In particular, she is interested in visual object tracking, audio-visual and text-visual video analysis, and hybrid digital-analog transmission of wireless video. She is currently a member of the Multimedia Systems and Applications (MSA) Technical Committee (TC) of the IEEE Circuits and Systems (CAS) society. She is an IEEE senior member.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Sinno-Jialin-Pan.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Sinno Jialin Pan<\/strong>\r\n\r\nNanyang Technological University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nDr Sinno Jialin Pan is a Provost's Chair Associate Professor with the School of Computer Science and Engineering, and Deputy Director of the Data Science and AI Research Centre at Nanyang Technological University (NTU), Singapore. He received his Ph.D. degree in computer science from the Hong Kong University of Science and Technology (HKUST) in 2011. Prior to joining NTU, he was a scientist and Lab Head of text analytics with the Data Analytics Department, Institute for Infocomm Research, Singapore from Nov. 2010 to Nov. 2014. He joined NTU as a Nanyang Assistant Professor (university named assistant professor) in Nov. 2014. He was named to \"AI 10 to Watch\" by the IEEE Intelligent Systems magazine in 2018. 
His research interests include transfer learning and its applications to wireless-sensor-based data mining, text mining, sentiment analysis, and software engineering.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2018\/03\/Xu-Tan-Profile-Photo-360-x-360.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Xu Tan<\/strong>\r\n\r\nMicrosoft Research\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nXu Tan is currently a Senior Researcher in the Machine Learning Group, Microsoft Research Asia (MSRA). He graduated from Zhejiang University in March 2015. His research interests mainly lie in machine learning, deep learning, low-resource learning, and their applications to natural language processing and speech processing, including neural machine translation, text to speech, etc.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Chuan-Wu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Chuan Wu<\/strong>\r\n\r\nUniversity of Hong Kong\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nChuan Wu received her B.Engr. and M.Engr. degrees in 2000 and 2002 from the Department of Computer Science and Technology, Tsinghua University, China, and her Ph.D. degree in 2008 from the Department of Electrical and Computer Engineering, University of Toronto, Canada. Between 2002 and 2004, she worked in the Information Technology industry in Singapore. Since September 2008, Chuan Wu has been with the Department of Computer Science at the University of Hong Kong, where she is currently an Associate Professor. 
Her current research is in the areas of cloud computing, distributed machine learning\/big data analytics systems, and smart elderly care technologies\/systems. She is a senior member of IEEE, a member of ACM, and an associate editor of IEEE Transactions on Cloud Computing, IEEE Transactions on Multimedia, IEEE Transactions on Circuits and Systems for Video Technology and ACM Transactions on Modeling and Performance Evaluation of Computing Systems. She was the co-recipient of the best paper awards of HotPOST 2012 and ACM e-Energy 2016.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Yingce-Xia.png\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Yingce Xia<\/strong>\r\n\r\nMicrosoft Research\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nI am currently a researcher in the Machine Learning Group, Microsoft Research Asia. I received my Ph.D. degree from the University of Science and Technology of China in 2018, supervised by Dr. Tie-Yan Liu and Prof. Nenghai Yu. Prior to that, I obtained my bachelor\u2019s degree from the University of Science and Technology of China in 2013.\r\n\r\nMy research revolves around dual learning (a new learning paradigm proposed by our group) and deep learning (with application to neural machine translation and image processing).\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2016\/09\/avatar_user__1474853894-180x180.png\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Dongdong Zhang<\/strong>\r\n\r\nMicrosoft Research\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nDr. Dongdong Zhang is a researcher in the Natural Language Computing group at Microsoft Research Asia, Beijing, China. He received his Ph.D. in Dec. 
2005 from the Department of Computer Science of Harbin Institute of Technology under the supervision of Prof. Jianzhong Li. Before that, he received a B.S. degree and an M.S. degree from the same department in 1999 and 2001, respectively.\r\n\r\nDongdong\u2019s research interests include natural language processing, machine translation and machine learning. He is now working on research and development of advanced statistical machine translation (SMT) systems as well as related fundamental NLP problems, models, algorithms and innovations.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Quanlu-Zhang.png\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Quanlu Zhang<\/strong>\r\n\r\nMicrosoft Research\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nQuanlu Zhang is a senior researcher at MSRA. He obtained his PhD in computer science from Peking University. His current focus is on AutoML systems, GPU cluster management, resource scheduling, and storage support for DL workloads. His work has been published at conferences such as OSDI, SoCC, and FAST.\r\n\r\n[\/panel] [\/accordion]\r\n<h2>Breakout Sessions<\/h2>\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Rajesh-Krishna-Balan.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Rajesh Krishna Balan<\/strong>\r\n\r\nSingapore Management University\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nProf. Balan is an ACM Distinguished Scientist and has worked in the area of mobile systems for over 18 years. He obtained his Ph.D. in Computer Science in 2006 from Carnegie Mellon University under the guidance of Professor Mahadev Satyanarayanan. 
He has been a general chair for both MobiSys 2016 and UbiComp 2018 and has served as a program chair for HotMobile 2012 and MobiSys 2019. In addition, he also organised a student workshop, called ASSET, that ran at MobiCom 2019, COMSNETS 2018, and MobiSys 2016. Prof. Balan has a strong interest in applied research and was a director for LiveLabs (http:\/\/www.livelabs.smu.edu.sg), a large research \/ startup lab that turned real-world environments (such as a university, a convention centre, and a resort island) into living testbeds for mobile systems experiments. He founded a startup to more effectively provide LiveLabs technologies to interested commercial clients. These experiences have given Prof. Balan great insight into how hard and meaningful it is to translate research into tangible systems that are tested and deployed in the real world.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/lei-chen.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Lei Chen<\/strong>\r\n\r\nHong Kong University of Science and Technology\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nLei Chen received his BS degree in computer science and engineering from Tianjin University, Tianjin, China, his MA degree from the Asian Institute of Technology, Bangkok, Thailand, and his Ph.D. in computer science from the University of Waterloo, Canada. He is a professor in the Department of Computer Science and Engineering, Hong Kong University of Science and Technology (HKUST). Currently, Prof. Chen serves as the director of the Big Data Institute at HKUST, the director of the Master of Science in Big Data Technology program, and the director of the HKUST MOE\/MSRA Information Technology Key Laboratory. Prof. 
Chen\u2019s research includes human-powered machine learning, crowdsourcing, Blockchain, social media analysis, probabilistic and uncertain databases, and privacy-preserved data publishing. Prof. Chen received the SIGMOD Test-of-Time Award in 2015. The system developed by Prof. Chen\u2019s team won the Excellent Demonstration Award at VLDB 2014. Currently, Prof. Chen serves as Editor-in-Chief of the VLDB Journal, associate editor-in-chief of IEEE Transactions on Knowledge and Data Engineering, and Program Committee Co-Chair for VLDB 2019. He is an ACM Distinguished Member and an IEEE Senior Member.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Wen-Huang-Cheng.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Wen-Huang Cheng<\/strong>\r\n\r\nNational Chiao Tung University\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nWen-Huang Cheng is Professor with the Institute of Electronics, National Chiao Tung University (NCTU), Hsinchu, Taiwan, where he is the Founding Director of the Artificial Intelligence and Multimedia Laboratory (AIMMLab). Before joining NCTU, he led the Multimedia Computing Research Group at the Research Center for Information Technology Innovation (CITI), Academia Sinica, Taipei, Taiwan, from 2010 to 2018. His current research interests include multimedia, artificial intelligence, computer vision, machine learning, social media, and financial technology. He has actively participated in international events and played important leading roles in prestigious journals, conferences, and professional organizations, such as Associate Editor for IEEE Multimedia, General Co-chair for ACM ICMR (2021), TPC Co-chair for ICME (2020), Chair-Elect for IEEE MSA-TC, and governing board member for IAPR. 
He has received numerous research and service awards, including the 2018 MSRA Collaborative Research Award, the 2017 Ta-Yu Wu Memorial Award from Taiwan\u2019s Ministry of Science and Technology (the highest national research honor for young Taiwanese researchers under age 42), the Top 10% Paper Award from the 2015 IEEE MMSP, the K. T. Li Young Researcher Award from the ACM Taipei\/Taiwan Chapter in 2014, the 2017 Significant Research Achievements of Academia Sinica, the 2016 Y. Z. Hsu Scientific Paper Award, the Outstanding Youth Electrical Engineer Award from the Chinese Institute of Electrical Engineering in 2015, and the Outstanding Reviewer Award of 2018 IEEE ICME.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Minsu-Cho.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Minsu Cho<\/strong>\r\n\r\nPohang University of Science and Technology (POSTECH)\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nMinsu Cho is an assistant professor at the Department of Computer Science and Engineering at POSTECH, South Korea, leading the POSTECH Computer Vision Lab. Before joining POSTECH in the fall of 2016, he worked as a postdoc and a starting researcher at Inria (the French National Institute for computer science and applied mathematics) and ENS (\u00c9cole Normale Sup\u00e9rieure), Paris, France. He completed his Ph.D. in 2012 at Seoul National University, Korea. His research lies in the areas of computer vision and machine learning, especially in the problems of object discovery, weakly-supervised learning, semantic correspondence, and graph matching. In general, he is interested in the relationship between correspondence and supervision in visual learning. 
He is an editorial board member of the International Journal of Computer Vision (IJCV) and has been serving as an area chair for top computer vision conferences including CVPR 2018, ICCV 2019, and CVPR 2020.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Seungmoon-Choi.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Seungmoon Choi<\/strong>\r\n\r\nPohang University of Science and Technology (POSTECH)\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nSeungmoon Choi, PhD, is a Professor of Computer Science and Engineering at POSTECH in Korea. He received the BS and MS degrees from Seoul National University and the PhD degree from Purdue University. His main research area is haptics, the science and technology for the sense of touch, as well as its application to various domains including robotics, virtual reality, human-computer interaction, and consumer electronics. He received a 2011 Early Career Award from the IEEE Technical Committee on Haptics.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jaegul-Choo.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Jaegul Choo<\/strong>\r\n\r\nKorea University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nJaegul Choo (https:\/\/sites.google.com\/site\/jaegulchoo\/ ) is an associate professor in the Dept. of Computer Science and Engineering at Korea University. He was a research scientist at Georgia Tech from 2011 to 2015, where he also received his M.S. in 2009 and Ph.D. in 2013. 
His research areas include computer vision, natural language processing, data mining, and visual analytics, and his work has been published in premier venues such as KDD, WWW, WSDM, CVPR, ECCV, EMNLP, AAAI, IJCAI, ICDM, ICWSM, IEEE VIS, EuroVIS, CHI, TVCG, CFG, and CG&amp;A. He earned the Best Student Paper Award at ICDM in 2016, the NAVER Young Faculty Award in 2015, the Outstanding Research Scientist Award at Georgia Tech in 2015, and the Best Poster Award at IEEE VAST (as part of IEEE VIS) in 2014.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Chenhui-Chu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Chenhui Chu<\/strong>\r\n\r\nOsaka University\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nChenhui Chu received his B.S. in Software Engineering from Chongqing University in 2008, and his M.S. and Ph.D. in Informatics from Kyoto University in 2012 and 2015, respectively. He is currently a research assistant professor at Osaka University. His research won the 2019 MSRA Collaborative Research grant award, the 2018 AAMT Nagao Award, and the CICLing 2014 Best Student Paper Award. He is on the editorial boards of the Journal of Natural Language Processing and the Journal of Information Processing, and is a steering committee member of the Young Researcher Association for NLP Studies. 
His research interests center on natural language processing, particularly machine translation and language and vision understanding.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jun-Du.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Jun Du<\/strong>\r\n\r\nUniversity of Science and Technology of China\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nJun Du received the B.Eng. and Ph.D. degrees from the Department of Electronic Engineering and Information Science, University of Science and Technology of China (USTC), in 2004 and 2009, respectively. From July 2009 to June 2010, he was with iFlytek Research, leading a team to develop the ASR prototype system of the mobile app \u201ciFlytek Input\u201d. From July 2010 to January 2013, he was with MSRA as an Associate Researcher, working on handwriting recognition, OCR, and speech recognition. Since February 2013, he has been with the National Engineering Laboratory for Speech and Language Information Processing (NEL-SLIP), USTC. His main research interests include speech signal processing and pattern recognition applications. He has published more than 100 conference and journal papers with more than 2300 citations on Google Scholar. His team is one of the pioneers in the area of deep-learning-based speech enhancement, publishing two ESI highly cited papers. As the corresponding author, the IEEE-ACM TASLP paper \u201cA Regression Approach to Speech Enhancement Based on Deep Neural Networks\u201d also received the 2018 IEEE Signal Processing Society Best Paper Award. Based on these research achievements in speech enhancement, he led a joint team with members from USTC and iFlytek Research that won all three tasks in the 2016 CHiME-4 challenge and all four tasks in the 2018 CHiME-5 challenge. 
He is currently an associate editor of IEEE-ACM TASLP and one of the organizers of the DIHARD Challenge 2018 and 2019.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Ryo-Furukawa.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Ryo Furukawa<\/strong>\r\n\r\nHiroshima City University\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nRyo Furukawa is an associate professor in the Faculty of Information Sciences, Hiroshima City University, Hiroshima, Japan. He received his Ph.D. from the Nara Institute of Science and Technology, Japan. His research areas include shape capture, 3D modeling, image-based rendering, and medical image analysis. He has won academic awards including the ACCV Songde Ma Outstanding Paper Award (2007), the PSIVT Best Paper Award (2009), the IEVC2014 Best Paper Award (2014), an IEEE WACV Best Paper Honorable Mention (2017), and the KUKA Best Paper Award 3rd Place at the MICCAI Workshop CARE (2018).\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Yao-Guo.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Yao Guo<\/strong>\r\n\r\nPeking University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nYao Guo is a professor and vice chair of the Department of Computer Science at Peking University. His recent research interests mainly focus on mobile app analysis, as well as the privacy and security of mobile systems. He has received multiple awards for his research work and teaching, including the First Prize of the National Technology Invention Award, an Honorable Mention Award from UbiComp 2016, and a Teaching Excellence Award from Peking University. 
He received his PhD in computer engineering from the University of Massachusetts Amherst in 2007, and his BS\/MS degrees in computer science from Peking University.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Bohyung-Han.png\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Bohyung Han<\/strong>\r\n\r\nSeoul National University\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nBohyung Han is an Associate Professor in the Department of Electrical and Computer Engineering at Seoul National University, Korea. Prior to his current position, he was an Associate Professor in the Department of Computer Science and Engineering at POSTECH, Korea, and a visiting research scientist in the Machine Intelligence Group at Google, Venice, CA, USA. He is currently visiting Snap Research, Venice, CA. He received the B.S. and M.S. degrees from Seoul National University, Korea, in 1997 and 2000, respectively, and the Ph.D. degree in Computer Science from the University of Maryland, College Park, MD, USA, in 2005. He has served or will serve as an Area Chair or Senior Program Committee member of major conferences in computer vision and machine learning, including CVPR, ICCV, NIPS\/NeurIPS, IJCAI, and ACCV, as well as a Tutorial Chair at ICCV 2019, a General Chair at ACCV 2022, a Demo Chair at ECCV 2022, a Workshop Chair at ACCV 2020, and a Demo Chair at ACCV 2014. 
His research interests are computer vision and machine learning, with an emphasis on deep learning.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Winston-HSU.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Winston Hsu<\/strong>\r\n\r\nNational Taiwan University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nProf. Winston Hsu is an active researcher dedicated to large-scale image\/video retrieval\/mining, visual recognition, and machine intelligence. He is a Professor in the Department of Computer Science and Information Engineering, National Taiwan University. He and his team have been recognized with technical awards in the multimedia and computer vision research communities, including the IBM Research Pat Goldberg Memorial Best Paper Award (2018), the Best Brave New Idea Paper Award at ACM Multimedia 2017, First Place in the IARPA Disguised Faces in the Wild Competition (CVPR 2018), First Prize in the ACM Multimedia Grand Challenge 2011, and the ACM Multimedia 2013\/2014 Grand Challenge Multimodal Award. Prof. Hsu is keen on turning advanced research into business deliverables via academia-industry collaborations and co-founded startups. He was a Visiting Scientist at Microsoft Research Redmond (2014) and spent a one-year sabbatical (2016-2017) at the IBM T. J. Watson Research Center. 
He served as an Associate Editor of IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) and IEEE Transactions on Multimedia, two premier journals, and was on the Editorial Board of IEEE Multimedia Magazine (2010 \u2013 2017).\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Seung-won-Hwang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Seung-won Hwang<\/strong>\r\n\r\nYonsei University\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nProf. Seung-won Hwang is a Professor of Computer Science at Yonsei University. Prior to joining Yonsei, she was an Associate Professor at POSTECH for 10 years, after receiving her PhD from UIUC. Her recent research interests have been machine intelligence from data, language, and knowledge, leading to 100+ publications at top-tier AI, DB\/DM, and NLP venues, including ACL, AAAI, EMNLP, IJCAI, KDD, SIGIR, SIGMOD, and VLDB. She has received a best paper runner-up award from WSDM and an outstanding collaboration award from Microsoft Research. Details can be found at http:\/\/dilab.yonsei.ac.kr\/~swhwang.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Hong-Gong-Kang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Hong-Goo Kang<\/strong>\r\n\r\nYonsei University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nHong-Goo Kang received the B.S., M.S., and Ph.D. degrees from Yonsei University, Korea, in 1989, 1991, and 1995, respectively. From 1996 to 2002, he was a senior technical staff member at AT&amp;T Labs-Research, Florham Park, New Jersey. 
He was an associate editor of the IEEE Transactions on Audio, Speech, and Language Processing from 2005 to 2008, and has served on numerous conference and program committees. In 2008-2009 and 2015-2016, he worked as a visiting scholar at Broadcom (Irvine, CA) and Google (Mountain View, CA), respectively, where he participated in various projects on speech signal processing. His research interests include speech\/audio signal processing, machine learning, and human-computer interfaces.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Gunhee-Kim.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Gunhee Kim<\/strong>\r\n\r\nSeoul National University\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nGunhee Kim has been an associate professor in the Department of Computer Science and Engineering of Seoul National University since 2015. He was a postdoctoral researcher at Disney Research for one and a half years. He received his PhD in 2013 under the supervision of Eric P. Xing in the Computer Science Department of Carnegie Mellon University. Prior to starting his PhD studies in 2009, he earned a master\u2019s degree under the supervision of Martial Hebert at the Robotics Institute, CMU. His research interests are solving computer vision and web mining problems that emerge from big image data shared online, by developing scalable and effective machine learning and optimization techniques. 
He is a recipient of the 2014 ACM SIGKDD Doctoral Dissertation Award and the 2015 Naver New Faculty Award.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jong-Kim.png\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Jong Kim<\/strong>\r\n\r\nPohang University of Science and Technology (POSTECH)\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nJong Kim is a professor in the Department of Computer Science and Engineering at Pohang University of Science and Technology (POSTECH). He received his Ph.D. degree from Pennsylvania State University in 1991. From 1991 to 1992, he worked at the University of Michigan as a Research Fellow. His research interests include dependable computing, hardware security, mobile security, and machine learning security. He has published papers at top security and systems conferences, including S&amp;P, NDSS, CCS, WWW, Micro, and RTSS.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Min-H.-Kim.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Min H. Kim<\/strong>\r\n\r\nKAIST\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nMin H. Kim is a KAIST-Endowed Chair Professor of Computer Science at KAIST, Korea, leading the Visual Computing Laboratory (VCLAB). Before coming to KAIST, he was a postdoctoral researcher at Yale University, working on hyperspectral 3D imaging. He received his Ph.D. in computer science from University College London (UCL) in 2010, with a focus on HDR color reproduction for high-fidelity computer graphics. 
In addition to serving on international program committees, e.g., ACM SIGGRAPH Asia, Eurographics (EG), Pacific Graphics (PG), CVPR, and ICCV, he has worked as an associate editor of ACM Transactions on Graphics (TOG), ACM Transactions on Applied Perception (TAP), and Elsevier Computers and Graphics (CAG). His recent research interests include a wide variety of computational imaging topics in computational photography, hyperspectral imaging, BRDF acquisition, and 3D imaging.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Heejo-Lee.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Heejo Lee<\/strong>\r\n\r\nKorea University\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nHeejo Lee is a Professor in the Department of Computer Science and Engineering, Korea University (KU), Seoul, Korea, and the director of CSSA (Center for Software Security and Assurance). Before joining KU, he was at AhnLab, Inc., the leading security company in Korea, as CTO from 2001 to 2003. He received his BS, MS, and PhD degrees from POSTECH, and worked at Purdue and CMU. 
He is a recipient of the ISC^2 ISLA Award and received its most prestigious recognition, the Asia-Pacific Community Service Star, in 2016.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Seong-Whan-Lee.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Seong-Whan Lee<\/strong>\r\n\r\nKorea University\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nSeong-Whan Lee is a full professor at Korea University, where he is the head of the Department of Artificial Intelligence and the Department of Brain and Cognitive Engineering.\r\n\r\nA Fellow of the IAPR (1998), IEEE (2009), and the Korean Academy of Science and Technology (2009), he has served several professional societies as chairman or governing board member. He was the founding Co-Editor-in-Chief of the International Journal of Document Analysis and Recognition and has been an Associate Editor of several international journals: Pattern Recognition, ACM Trans. on Applied Perception, IEEE Trans. on Affective Computing, Image and Vision Computing, International Journal of Pattern Recognition and Artificial Intelligence, and International Journal of Image and Graphics.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Seung-Ah-Lee.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Seung Ah Lee<\/strong>\r\n\r\nYonsei University\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nSeung Ah Lee is an assistant professor in the Department of Electrical and Electronic Engineering at Yonsei University. She joined Yonsei University in Fall 2018 and currently leads the Optical Imaging Systems Laboratory. 
Prior to Yonsei, she was a scientist at Verily Life Sciences, a former Google [x] team, from 2015 to 2018. She received her PhD in Electrical Engineering from Caltech (2014) and postdoctoral training at Stanford Bioengineering (2014-2015). She completed her BS (2007) and MS (2009) degrees in Electrical Engineering at Seoul National University.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Seungyong-Lee.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Seungyong Lee<\/strong>\r\n\r\nPohang University of Science and Technology (POSTECH)\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nSeungyong Lee is a professor of computer science and engineering at Pohang University of Science and Technology (POSTECH), Korea. He received a PhD degree in computer science from Korea Advanced Institute of Science and Technology (KAIST) in 1995. From 1995 to 1996, he worked at the City College of New York as a postdoctoral researcher. Since 1996, he has been a faculty member of POSTECH, where he leads the Computer Graphics Group. During his sabbatical years, he worked at MPI Informatik (2003-2004) and the Creative Technologies Lab at Adobe Systems (2010-2011). His technologies for image deblurring and photo upright adjustment have been transferred to Adobe Creative Cloud and Adobe Photoshop Lightroom. 
His current research interests include image and video processing, deep learning based computational photography, and 3D scene reconstruction.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jingwen-Leng.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Jingwen Leng<\/strong>\r\n\r\nShanghai Jiao Tong University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nJingwen Leng is an Assistant Professor in the John Hopcroft Computer Science Center and the Computer Science &amp; Engineering Department at Shanghai Jiao Tong University. His research focuses on building efficient and resilient architectures for deep learning. He received his Ph.D. from the University of Texas at Austin, where he worked on improving the efficiency and resiliency of general-purpose GPUs.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Cheng-Li.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Cheng Li<\/strong>\r\n\r\nUniversity of Science and Technology of China\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nCheng Li is a research professor at the School of Computer Science and Technology, University of Science and Technology of China (USTC). His research interests lie in various topics related to improving the performance, consistency, fault tolerance, and availability of distributed systems. Prior to joining USTC, he was an associate researcher at INESC-ID, Portugal, and a senior member of technical staff at Oracle Labs Switzerland. He received his PhD degree from the Max Planck Institute for Software Systems (MPI-SWS) in 2016, and his bachelor's degree from Nankai University in 2009. 
His work has been published in premier peer-reviewed systems research venues such as OSDI, USENIX ATC, EuroSys, and TPDS. He is a member of the ACM Future Computing Academy. He was a program committee co-chair of the ACM SOSP 2017 Poster Session and the ACM TURC 2018 SIGOPS\/ChinaSys workshop.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Shou-De-Lin.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Shou-De Lin<\/strong>\r\n\r\nNational Taiwan University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nShou-de Lin is currently a full professor in the CSIE department of National Taiwan University. He holds a BS degree in EE from National Taiwan University, an MS-EE degree from the University of Michigan, and an MS degree in Computational Linguistics and a PhD in Computer Science, both from the University of Southern California. He leads the Machine Discovery and Social Network Mining Lab at NTU. Before joining NTU, he was a post-doctoral research fellow at the Los Alamos National Lab. Prof. Lin's research includes the areas of machine learning and data mining, social network analysis, and natural language processing. His international recognition includes the Best Paper Award at the IEEE Web Intelligence Conference 2003, a Google Research Award in 2007, Microsoft Research Awards in 2008, 2015, and 2016, Merit Paper Awards at TAAI 2010, 2014, and 2016, the Best Paper Award at ASONAM 2011, and US Aerospace AFOSR\/AOARD research awards for five years. He is an all-time winner of the ACM KDD Cup, having led or co-led the NTU team to five championships, and also led a team to win the WSDM Cup 2016. He has served as a senior PC member for SIGKDD and an area chair for ACL. 
He is also a co-founder and the chief scientist of the start-up The OmniEyes.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jiaying-Liu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Jiaying Liu<\/strong>\r\n\r\nPeking University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nJiaying Liu is currently an Associate Professor with the Institute of Computer Science and Technology, Peking University. She received the Ph.D. degree (Hons.) in computer science from Peking University, Beijing, China, in 2010. She has authored over 100 technical articles in refereed journals and proceedings, and holds 42 granted patents. Her current research interests include multimedia signal processing, compression, and computer vision.\r\n\r\nDr. Liu is a Senior Member of IEEE, CSIG and CCF. She was a Visiting Scholar with the University of Southern California, Los Angeles, from 2007 to 2008. She was a Visiting Researcher with Microsoft Research Asia in 2015, supported by the Star Track Young Faculties Award. She has served as a member of the Multimedia Systems &amp; Applications Technical Committee (MSA TC), the Visual Signal Processing and Communications Technical Committee (VSPC TC), and the Education and Outreach Technical Committee (EO TC) in the IEEE Circuits and Systems Society, and as a member of the Image, Video, and Multimedia (IVM) Technical Committee in APSIPA. She has also served as the Technical Program Chair of IEEE VCIP-2019\/ACM ICMR-2021, the Publicity Chair of IEEE ICIP-2019\/VCIP-2018\/MIPR 2020, the Grand Challenge Chair of IEEE ICME-2019, and an Area Chair of ICCV-2019. 
She was an APSIPA Distinguished Lecturer (2016-2017).\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Shixia-Liu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Shixia Liu<\/strong>\r\n\r\nTsinghua University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nShixia Liu is a tenured associate professor at Tsinghua University. Her research interests include explainable machine learning, interactive data quality improvement, and visual text analytics. Shixia is an associate Editor-in-Chief of IEEE Transactions on Visualization and Computer Graphics, IEEE Transactions on Big Data, and ACM Transactions on Interactive Intelligent Systems. She was a Papers Co-Chair of IEEE VAST 2016\/2017 and a Program Co-Chair of PacificVis 2014.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Youyou-Lu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Youyou Lu<\/strong>\r\n\r\nTsinghua University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nYouyou Lu is an assistant professor in the Department of Computer Science and Technology at Tsinghua University. He obtained his B.S. degree from Nanjing University in 2009 and his Ph.D. degree from Tsinghua University in 2015, both in Computer Science, and was a postdoctoral fellow at Tsinghua from 2015 to 2017. His current research interests include file and storage systems, spanning from the architectural to the system level. His research has been published at a number of top-tier conferences, including FAST, USENIX ATC, SC, and EuroSys. His research won the Best Paper Award at NVMSA 2014 and was selected as one of the Best Papers at MSST 2015. 
He was selected for the Young Elite Scientists Sponsorship Program by CAST (China Association for Science and Technology) in 2015, and received the CCF Outstanding Doctoral Dissertation Award in 2016.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Atsuko-Miyaji.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Atsuko Miyaji<\/strong>\r\n\r\nOsaka University\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nAtsuko Miyaji received the Dr. Sci. degree in mathematics from Osaka University, Osaka, Japan, in 1997. She worked at Panasonic Co., Ltd. from 1990 to 1998.\r\nShe became an associate professor at the Japan Advanced Institute of Science and Technology (JAIST) in 1998 and was at UC Davis from 2002 to 2003. She has been a professor at JAIST since 2007, a professor at Osaka University since 2015, and an Auditor of the Information-technology Promotion Agency, Japan, since 2016. 
She has been an editor of ISO\/IEC since 2000.\r\n\r\nShe received the Young Paper Award of SCIS'93 in 1993, the Notable Invention Award of the Science and Technology Agency in 1997, the IPSJ Sakai Special Researcher Award in 2002, the Standardization Contribution Award in 2003, the Engineering Sciences Society Certificate of Appreciation in 2005, the AWARD for the contribution to CULTURE of SECURITY in 2007, the IPSJ\/ITSCJ Project Editor Award in 2007, 2008, 2009, 2010, 2012, and 2016, the Director-General of Industrial Science and Technology Policy and Environment Bureau Award in 2007, the DoCoMo Mobile Science Award in 2008, the ADMA 2010 Best Paper Award, the Prizes for Science and Technology in the Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology, the ATIS 2016 Best Paper Award, the IEEE TrustCom 2017 Best Paper Award, and IEICE milestone certification in 2017.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Tadashi-Nomoto.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Tadashi Nomoto<\/strong>\r\n\r\nThe SOKENDAI Graduate School of Advanced Studies\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nTadashi Nomoto is currently an associate professor at the Graduate University for Advanced Studies (SOKENDAI), with a joint appointment at the National Institute of Japanese Literature. He has been actively engaged in the areas of natural language processing and information retrieval for more than a decade, both in academia and in industry. His research interests include computational linguistics, digital libraries, data mining, machine translation, and quantitative media analysis. He has published extensively in major international conferences (such as SIGIR, ACL, ICML, and CIKM). 
He holds an MA in Linguistics from Sophia University, Japan, and a PhD in Computer Science from the Nara Institute of Science and Technology, also in Japan.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Sinno-Jialin-Pan.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Sinno Jialin Pan<\/strong>\r\n\r\nNanyang Technological University\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nDr Sinno Jialin Pan is a Provost's Chair Associate Professor with the School of Computer Science and Engineering, and Deputy Director of the Data Science and AI Research Centre at Nanyang Technological University (NTU), Singapore. He received his Ph.D. degree in computer science from the Hong Kong University of Science and Technology (HKUST) in 2011. Prior to joining NTU, he was a scientist and Lab Head of text analytics with the Data Analytics Department, Institute for Infocomm Research, Singapore, from Nov. 2010 to Nov. 2014. He joined NTU as a Nanyang Assistant Professor (university named assistant professor) in Nov. 2014. He was named to \"AI 10 to Watch\" by the IEEE Intelligent Systems magazine in 2018. His research interests include transfer learning and its applications to wireless-sensor-based data mining, text mining, sentiment analysis, and software engineering.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/1998\/02\/asia-slt-tim-pan-1910.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Tim Pan<\/strong>\r\n\r\nMicrosoft Research\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nDr. 
Tim Pan is the senior director of Outreach at Microsoft Research Asia, responsible for the lab\u2019s academic collaboration in the Asia-Pacific region. He establishes strategies and directions, identifies business opportunities, and designs various programs and projects that strengthen the partnership between Microsoft Research and academia.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Xueming-Qian.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Xueming Qian<\/strong>\r\n\r\nXi'an Jiaotong University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nXueming Qian is a professor. He received the B.S. and M.S. degrees from Xi'an University of Technology, Xi'an, China, in 1999 and 2004, respectively, and the Ph.D. degree from the School of Electronics and Information Engineering, Xi'an Jiaotong University, Xi'an, China, in 2008. He was awarded a Microsoft Fellowship in 2006, and the outstanding doctoral dissertation awards of Xi'an Jiaotong University and Shaanxi Province in 2010 and 2011, respectively. He is the director of the SMILES LAB. He was a visiting scholar at Microsoft Research Asia from August 2010 to March 2011. His research interests include social and mobile multimedia mining, learning, and search.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Huamin-Qu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Huamin Qu<\/strong>\r\n\r\nHong Kong University of Science and Technology\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nHuamin Qu is a full professor in the Department of Computer Science and Engineering (CSE) at the Hong Kong University of Science and Technology (HKUST). 
His main research interests are in data visualization and human-computer interaction, with a focus on explainable AI, urban informatics, social media analysis, E-learning, and text visualization. He has served as a papers co-chair for IEEE VIS\u201914, VIS\u201915, and VIS\u201918, and as an associate editor of IEEE Transactions on Visualization and Computer Graphics (TVCG). He received a BS in Mathematics from Xi\u2019an Jiaotong University and a PhD in Computer Science from Stony Brook University.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Junichi-Rekimoto.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Junichi Rekimoto<\/strong>\r\n\r\nThe University of Tokyo\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nJun Rekimoto received his B.A.Sc., M.Sc., and Ph.D. in Information Science from Tokyo Institute of Technology in 1984, 1986, and 1996, respectively. From 1986 to 1994, he worked for the Software Laboratory of NEC. During 1992-1993, he worked in the Computer Graphics Laboratory at the University of Alberta, Canada, as a visiting scientist. Since 1994 he has worked for Sony Computer Science Laboratories (Sony CSL). In 1999 he formed, and has since directed, the Interaction Laboratory within Sony CSL.\r\n\r\nRekimoto's research interests include computer augmented environments, mobile\/wearable computing, virtual reality, and information visualization. He has authored dozens of refereed publications in the area of human-computer interaction, including at ACM CHI and UIST. One of his publications was recognized with the 30th commemorative papers award from the Information Processing Society Japan (IPSJ) in 1992. 
He also received the Multi-Media Grand Prix Technology Award from the Multi-Media Contents Association Japan in 1998, the Yamashita Memorial Research Award from IPSJ in 1999, and the Japan Inter-Design Award in 2003. In 2007, he was elected to the ACM SIGCHI Academy.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Insik-Shin.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Insik Shin<\/strong>\r\n\r\nKAIST\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nInsik Shin is a professor in the School of Computing and the Chief Professor of the Graduate School of Information Security at KAIST, Korea. He received a Ph.D. degree from the University of Pennsylvania. His research interests include real-time embedded systems, systems security, mobile computing, and cyber-physical systems. He serves on the program committees of top international conferences, including RTSS, RTAS, and ECRTS. He is a recipient of several best (student) paper awards, including at MobiCom \u201919, RTSS \u201912, RTAS \u201912, and RTSS \u201903, as well as the KAIST Excellence Award and the Naver Young Faculty Award.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Jun-Takamatsu.png\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Jun Takamatsu<\/strong>\r\n\r\nNara Institute of Science and Technology\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nJun Takamatsu received a Ph.D. degree in Computer Science from the University of Tokyo, Japan, in 2004. From 2004 to 2008, he was with the Institute of Industrial Science, the University of Tokyo. In 2007, he was with Microsoft Research Asia as a visiting researcher. 
In 2008, he joined Nara Institute of Science and Technology, Japan, where he is an associate professor. He was also with Carnegie Mellon University as a visitor in 2012 and 2013, and with Microsoft as a visiting scientist in 2018. His research interests are in robotics, including learning-from-observation, task\/motion planning, and feasible motion analysis, as well as 3D shape modeling and analysis, and physics-based vision.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Mingkui-Tan.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Mingkui Tan<\/strong>\r\n\r\nSouth China University of Technology\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nDr. Mingkui Tan is currently a professor with the School of Software Engineering at South China University of Technology, China. He received his Bachelor's degree in Environmental Science and Engineering in 2006 and his Master's degree in Control Science and Engineering in 2009, both from Hunan University in Changsha, China. He received the PhD degree in Computer Science from Nanyang Technological University, Singapore, in 2014. From 2014 to 2016, he worked as a Senior Research Associate on machine learning and computer vision in the School of Computer Science, University of Adelaide, Australia. His research interests include machine learning, sparse analysis, deep learning and large-scale optimization. 
He has published about 70 research papers in top-tier conferences such as NeurIPS, ICML and KDD and in international peer-reviewed journals such as TNNLS, JMLR and TIP.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2016\/03\/avatar_user__1459357947-177x180.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Xin Tong<\/strong>\r\n\r\nMicrosoft Research\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nI am now a principal researcher in the Internet Graphics Group of Microsoft Research Asia. I obtained my Ph.D. degree in Computer Graphics from Tsinghua University in 1999. My Ph.D. thesis was on hardware-assisted volume rendering. I received my B.S. and Master's degrees in Computer Science from Zhejiang University in 1993 and 1996, respectively.\r\n\r\nMy research interests include appearance modeling and rendering, texture synthesis, and image-based modeling and rendering. Specifically, my research concentrates on studying the underlying principles of material-light interaction and light transport, and on developing efficient methods for appearance modeling and rendering. I am also interested in performance capture and facial animation.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Hongzhi-Wang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Hongzhi Wang<\/strong>\r\n\r\nHarbin Institute of Technology\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nHongzhi Wang is a Professor, Ph.D. supervisor, and Vice Dean of the Honors School of Harbin Institute of Technology, the secretary general of ACM SIGMOD China, a CCF outstanding member, and a member of the CCF databases and big data committee. 
His research fields include big data management and analysis, databases, and data quality. He was a \u201cstarring track\u201d visiting professor at MSRA. He has been PI for more than 10 projects, including an NSFC key project and other NSFC projects. He also serves as a member of the ACM Data Science Task Force. His publications include over 200 papers, including VLDB, SIGMOD, and SIGIR papers, and 4 books. His papers have been cited more than 1000 times. His personal website is http:\/\/homepage.hit.edu.cn\/wang.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Liwei-Wang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Liwei Wang<\/strong>\r\n\r\nPeking University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nProfessor in the School of Electronics Engineering and Computer Science, Peking University; researcher in the Beijing Institute of Big Data Research; adjunct professor in the Institute for Interdisciplinary Information Science, Tsinghua University. He was recognized by IEEE Intelligent Systems as one of AI\u2019s 10 to Watch in 2010, the first Asian scholar since the establishment of the award. He received the NSFC excellent young researcher grant in 2012. He was also supported by the program for New Century Excellent Talents in University by the Ministry of Education.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Hiroki-Watanabe.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Hiroki Watanabe<\/strong>\r\n\r\nHokkaido University\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nHiroki Watanabe is an assistant professor at the Graduate School of Information Science and Technology, Hokkaido University, Japan. He received B.Eng., M.Eng., and Ph.D. 
degrees from Kobe University in 2012, 2014, and 2017, respectively. He works on wearable computing and ubiquitous computing.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Yonggang-Wen.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Yonggang Wen<\/strong>\r\n\r\nNanyang Technological University\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nDr. Yonggang Wen is a Professor in the School of Computer Science and Engineering (SCSE) at Nanyang Technological University (NTU), Singapore. He also serves as the Associate Dean (Research) at the College of Engineering, and the Director of the Nanyang Technopreneurship Centre at NTU. He received his PhD degree in Electrical Engineering and Computer Science (minor in Western Literature) from the Massachusetts Institute of Technology (MIT), Cambridge, USA, in 2007.\r\n\r\nDr. Wen has worked extensively in learning-based system prototyping and performance optimization for large-scale networked computer systems. In particular, his work in Multi-Screen Cloud Social TV has been featured by global media (more than 1600 news articles from over 29 countries) and received the 2013 ASEAN ICT Awards (Gold Medal). His work on Cloud3DView, as the only academic entry, won the 2016 ASEAN ICT Awards (Gold Medal) and the 2015 Datacentre Dynamics Awards \u2013 APAC (the \u2018Oscar\u2019 of the data centre industry). He is a co-recipient of the 2015 IEEE Multimedia Best Paper Award, and a co-recipient of Best Paper Awards at 2016 IEEE Globecom, the 2016 IEEE Infocom MuSIC Workshop, 2015 EAI\/ICST Chinacom, 2014 IEEE WCSP, 2013 IEEE Globecom and 2012 IEEE EUC. He was the sole winner of the 2016 Nanyang Awards in Entrepreneurship and Innovation at NTU, and received the 2016 IEEE ComSoc MMTC Distinguished Leadership Award. 
He serves on editorial boards for ACM Transactions on Multimedia Computing, Communications and Applications, IEEE Transactions on Circuits and Systems for Video Technology, IEEE Wireless Communication Magazine, IEEE Communications Survey &amp; Tutorials, IEEE Transactions on Multimedia, IEEE Transactions on Signal and Information Processing over Networks, IEEE Access Journal and Elsevier Ad Hoc Networks, and was elected as the Chair for the IEEE ComSoc Multimedia Communication Technical Committee (2014-2016). His research interests include cloud computing, blockchain, green data centres, distributed machine learning, big data analytics, multimedia networking and mobile computing.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Wenfei-Wu-New.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Wenfei Wu<\/strong>\r\n\r\nTsinghua University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nWenfei Wu is an assistant professor in the Institute for Interdisciplinary Information Sciences (IIIS) at Tsinghua University. He obtained his Ph.D. from the CS department at the University of Wisconsin-Madison in 2015. Dr. Wu's research interests are in networked systems, including architecture design, data plane optimization, and network management optimization. He received the best student paper award at SoCC'13. Currently, Dr. 
Wu is working on model-centric DevOps for network functions, in-network computation for distributed systems (including distributed neural networks and big data systems), and secure network protocol design.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Yingcai-Wu.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Yingcai Wu<\/strong>\r\n\r\nZhejiang University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nYingcai Wu is a National Youth-1000 scholar and a ZJU100 Young Professor at the State Key Lab of CAD &amp; CG, College of Computer Science and Technology, Zhejiang University. He obtained his Ph.D. degree in Computer Science from the Hong Kong University of Science and Technology (HKUST). Prior to his current position, Yingcai Wu was a researcher at Microsoft Research Asia, Beijing, China, from 2012 to 2015, and a postdoctoral researcher at the University of California, Davis from 2010 to 2012. He was a paper co-chair of IEEE Pacific Visualization 2017 and ChinaVis 2016-2017. His main research interests are in visual analytics and human-computer interaction, with focuses on sports analytics, urban computing, and social media analysis. He has published more than 50 refereed papers, including 25 IEEE Transactions on Visualization and Computer Graphics (TVCG) papers. Three of his papers received Honorable Mention awards at IEEE VIS (SciVis) 2009, IEEE VIS (VAST) 2014, and IEEE PacificVis 2016. 
For more information, visit www.ycwu.org.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Hiroaki-Yamane.png\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Hiroaki Yamane<\/strong>\r\n\r\nRIKEN AIP &amp; The University of Tokyo\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nHiroaki Yamane is a post-doctoral researcher at RIKEN AIP and a visiting researcher at the University of Tokyo. He completed his PhD at Keio University, where he proposed slogan-generating systems. After completing his PhD, he worked on brain decoding, and he is currently building machine intelligence for medical engineering at RIKEN AIP. Because he has a strong interest in human intelligence, sensitivity, and health, his research interests include word embeddings for commonsense knowledge, sentiment analysis, sentence generation, and domain adaptation. He is more broadly interested in multidisciplinary areas including natural language processing, computer vision, cognitive science &amp; neuroscience, and AI applications to medicine.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Rui-Yan.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Rui Yan<\/strong>\r\n\r\nPeking University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nDr. Rui Yan is an assistant professor at Peking University and an adjunct professor at Central China Normal University and Central University of Finance and Economics, and he was previously a Senior Researcher at Baidu Inc. He has investigated several open-domain conversational systems and dialogue systems in vertical domains. To date, he has published more than 100 highly competitive peer-reviewed papers. 
He serves as a (senior) program committee member of several top-tier venues, such as KDD, SIGIR, ACL, WWW, IJCAI, AAAI, CIKM, and EMNLP.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Chuck-Yoo.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Chuck Yoo<\/strong>\r\n\r\nKorea University\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nChuck Yoo received a B.S. degree from Seoul National University in 1982, and M.S. and Ph.D. degrees from the University of Michigan, Ann Arbor, Michigan, in 1986 and 1990, respectively. From 1990 to 1995, he was with Sun Microsystems, Mountain View, California, working on Sun\u2019s operating systems. In 1995, he joined the computer science department of Korea University and served as dean of the College of Informatics for 5 years, until January 2018.\r\n\r\nHe has been working on virtualization, starting with a hypervisor for mobile phones, followed by a virtualized automotive platform, integrated SLAs (service level agreements) for clouds, and network virtualization including virtual routers and SDN. He hosted Xen Summit in Seoul in 2011 and has served on the program committees of various conferences. 
In addition to publishing quite a number of papers, his research has influenced global industry leaders such as Samsung and LG, inspiring and enhancing their products.\r\n\r\nRecently, he has been working with the College of Medicine on precision medicine, and also with the College of Law on new and revised legislative bills for the fourth industrial revolution.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Sung-eui-Yoon.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Sung-eui Yoon<\/strong>\r\n\r\nKAIST\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nSung-Eui Yoon is a professor at Korea Advanced Institute of Science and Technology (KAIST). He received the B.S. and M.S. degrees in computer science from Seoul National University in 1999 and 2001, respectively. He received his Ph.D. degree in computer science from the University of North Carolina at Chapel Hill in 2005. He was a postdoctoral scholar at Lawrence Livermore National Laboratory, USA. His research interests include graphics, vision, and robotics. He has published about 100 technical papers and has given numerous tutorials on ray tracing, collision detection, and image search at premier conferences such as ACM SIGGRAPH, IEEE Visualization, CVPR, and ICRA. He served as conference co-chair and paper co-chair for ACM I3D 2012 and 2013, respectively. In 2008, he published a monograph on real-time massive model rendering with three other co-authors, and in 2018 he published an online book on rendering. Some of his papers received a test-of-time award, a distinguished paper award, and a few invitations to IEEE Transactions on Visualization and Computer Graphics. 
He is currently a senior member of IEEE and ACM.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Masatoshi-Yoshikawa.png\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Masatoshi Yoshikawa<\/strong>\r\n\r\nKyoto University\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nMasatoshi Yoshikawa received the B.E., M.E. and Ph.D. degrees from the Department of Information Science, Kyoto University in 1980, 1982 and 1985, respectively. In 1985, he joined The Institute for Computer Sciences, Kyoto Sangyo University as an Assistant Professor. From April 1989 to March 1990, he was a Visiting Scientist at the Computer Science Department of the University of Southern California (USC). In 1993, he joined Nara Institute of Science and Technology as an Associate Professor in the Graduate School of Information Science. From April 1996 to January 1997, he was a Visiting Associate Professor at the Department of Computer Science, University of Waterloo. From June 2002 to March 2006, he served as a professor at Nagoya University. Since April 2006, he has been a professor at the Graduate School of Informatics, Kyoto University.\r\n\r\nOne of his current research topics is the theory and practice of privacy protection. As basic research, he investigated the potential privacy loss of a traditional Differential Privacy (DP) mechanism under temporal correlations. He is also interested in the personal data market. In particular, he is studying a mechanism for pricing and selling personal data perturbed by DP.\r\n\r\nHe was a General Co-Chair of the 6th IEEE International Conference on Big Data and Smart Computing (BigComp 2019). He is a Steering Committee member of the International Conference on Big Data and Smart Computing (BigComp), and he is serving as a PC member of VLDB 2020 and ICDE 2020. 
He is a member of the IEEE ICDE Steering Committee, the Science Council of Japan (SCJ), ACM, IPSJ and IEICE.\r\n\r\n[\/panel] [\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Huanjing-Yue.png\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Huanjing Yue<\/strong>\r\n\r\nTianjin University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nHuanjing Yue received the B.S. and Ph.D. degrees from Tianjin University, Tianjin, China, in 2010 and 2015, respectively. She was an Intern with Microsoft Research Asia from 2011 to 2012, and from 2013 to 2015. She visited the Video Processing Laboratory, University of California at San Diego, from 2016 to 2017. She is currently an Associate Professor with the School of Electrical and Information Engineering, Tianjin University. Her current research interests include image processing and computer vision. She received the Microsoft Research Asia Fellowship Honor in 2013 and was selected into the Elite Scholar Program of Tianjin University in 2017.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Lijun-Zhang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Lijun Zhang<\/strong>\r\n\r\nNanjing University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nLijun Zhang received the B.S. and Ph.D. degrees in Software Engineering and Computer Science from Zhejiang University, China, in 2007 and 2012, respectively. He is currently an associate professor in the Department of Computer Science and Technology, Nanjing University, China. Prior to joining Nanjing University, he was a postdoctoral researcher at the Department of Computer Science and Engineering, Michigan State University, USA. 
His research interests include machine learning and optimization. He has published 80 academic papers, most of which appeared in prestigious conferences and journals such as ICML, NeurIPS, COLT and JMLR. He received the Alibaba DAMO Academy Young Fellow award and the AAAI-12 Outstanding Paper Award.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Min-Zhang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Min Zhang<\/strong>\r\n\r\nTsinghua University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nDr. Min Zhang is a tenured associate professor in the Dept. of Computer Science &amp; Technology, Tsinghua University, specializing in Web search, recommendation, and user modeling. She is the vice director of the State Key Lab of Intelligent Technology &amp; Systems and the executive director of the Tsinghua-MSRA Lab on Media and Search. She also serves as an ACM SIGIR Executive Committee member, associate editor for the ACM Transactions on Information Systems (TOIS), Short Paper co-Chair of SIGIR 2018, Program co-Chair of WSDM 2017, etc. She has published more than 100 papers at top-tier conferences, with 4100+ citations. She was awarded the Beijing Science and Technology Award (First Prize), among others. She also holds 12 patents. 
She has also collaborated extensively with international and domestic enterprises, such as Microsoft, Toshiba, Samsung, Sogou, WeChat, Zhihu, and JD.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Tianzhu-Zhang.png\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Tianzhu Zhang<\/strong>\r\n\r\nUniversity of Science and Technology of China\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nTianzhu Zhang is currently a Professor at the Department of Automation, School of Information Science and Technology, University of Science and Technology of China. His current research interests include pattern recognition, computer vision, multimedia computing, and machine learning. He has authored or co-authored over 80 journal and conference papers in these areas, including over 60 IEEE\/ACM Transactions papers (TPAMI\/IJCV\/TIP) and top-tier conference papers (ICCV\/CVPR\/ACM MM). According to Google Scholar, his papers have been cited more than 4900 times. His work has been recognized with the 2017 China Multimedia Conference Best Paper Award and the 2016 ACM Multimedia Conference Best Paper Award (CCF-A). He received the Chinese Academy of Sciences President Award of Excellence in 2011, the Excellent Doctoral Dissertation award of the Chinese Academy of Sciences in 2012, membership in the Youth Innovation Promotion Association CAS in 2018, and the Natural Science Award (First Prize) of the Chinese Institute of Electronics in 2018. He has served or serves as Area Chair for CVPR 2020, ICCV 2019, ACM MM 2019, WACV 2018, ICPR 2018, and MVA 2017, and as Associate Editor for IEEE T-CSVT and Neurocomputing. 
He received outstanding reviewer awards from MMSJ, ECCV 2016, and CVPR 2018.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Yu-Zhang.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Yu Zhang<\/strong>\r\n\r\nUniversity of Science &amp; Technology of China\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nYu Zhang is an associate professor in the School of Computer Science &amp; Technology, University of Science and Technology of China (USTC). She received her Ph.D. from USTC in January 2005. Her current research interests include programming languages and systems for emerging AI applications, and quantum software.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Zhou-Zhao.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Zhou Zhao<\/strong>\r\n\r\nZhejiang University\r\n\r\n[accordion] [panel header=\"Bio\"]\r\n\r\nZhou Zhao received his Ph.D. from the Hong Kong University of Science and Technology in 2015. He subsequently joined Zhejiang University, where he is an associate professor and doctoral supervisor. Zhao\u2019s main research interests are in natural language processing and multimedia key technology research and development. Zhao is a member of the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers (IEEE), and the China Computer Federation (CCF). In addition, he has published more than sixty papers at top international conferences such as NIPS, ICLR, and ICML. 
Zhao was awarded the Innovation Award of the Information Department of Zhejiang University and the title of Outstanding Youth in Zhejiang.\r\n\r\n[\/panel][\/accordion]\r\n\r\n<img class=\"avatar avatar-180 photo msr-profile-image alignleft\" style=\"margin-bottom: 10px\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/Wei-Shi-Zheng.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<strong>Wei-Shi Zheng<\/strong>\r\n\r\nSun Yat-sen University\r\n\r\n[accordion][panel header=\"Bio\"]\r\n\r\nDr. Wei-Shi Zheng is a full Professor at Sun Yat-sen University. He received the PhD degree in Applied Mathematics from Sun Yat-sen University in 2008. He has published more than 100 papers, including more than 80 publications in major journals (TPAMI, TNN\/TNNLS, TIP, TSMC-B, PR) and top conferences (ICCV, CVPR, IJCAI, AAAI). He has helped organise four tutorial presentations at ACCV 2012, ICPR 2012, ICCV 2013 and CVPR 2015. His research interests include person\/object association and activity understanding in visual surveillance, and related large-scale machine learning algorithms. In particular, Dr. Zheng has been actively researching person re-identification for the last five years. He serves extensively for many journals and conferences, and was recognised as an outstanding reviewer at recent top conferences (ECCV 2016 &amp; CVPR 2017). He has participated in the Microsoft Research Asia Young Faculty Visiting Programme. He has served as a senior PC member\/area chair\/associate editor for AVSS 2012, ICPR 2018, IJCAI 2019\/2020, AAAI 2020 and BMVC 2018\/2019. He is an IEEE MSA TC member. He is an associate editor of Pattern Recognition. 
He is a recipient of the Excellent Young Scientists Fund of the National Natural Science Foundation of China, and a recipient of a Royal Society-Newton Advanced Fellowship of the United Kingdom.\r\n\r\n[\/panel] [\/accordion]"},{"id":4,"name":"Technology Showcase","content":"<h2>Technology Showcase by Microsoft Research Asia<\/h2>\r\n[accordion]\r\n\r\n[panel header=\"AutoSys: Learning based approach for system optimization\"]\r\n<strong>Presenter: <\/strong>Mao Yang, Microsoft Research\r\n\r\nAs computer systems and networking get increasingly complicated, optimizing them manually with explicit rules and heuristics becomes harder than ever before, sometimes impossible. At Microsoft Research Asia, our AutoSys project applies learning to large-scale system performance tuning. Our AutoSys framework (1) defines interfaces to expose system features for learning, (2) introduces monitors to detect learning-induced failures, and (3) runs resource management to support heterogeneous requirements of learning-related tasks. Based on AutoSys, we have built a tool to support many crucial system scenarios within Microsoft. These scenarios include multimedia search for Bing (e.g., tail latency reduced by up to ~40%, and capacity increased by up to ~30%), job scheduling for Bing Ads (e.g., tail latency reduced by up to ~13%), and so on.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Dual Learning and Its Applications to Machine Translation and Speech Synthesis\"]\r\n<strong>Presenter: <\/strong>Yingce Xia and Xu Tan, Microsoft Research\r\n\r\nMany AI tasks emerge in dual forms, e.g., English-to-French translation vs. French-to-English translation, speech recognition vs. speech synthesis, question answering vs. question generation, and image classification vs. image generation. Dual learning is a new learning framework that leverages the primal-dual structure of AI tasks to obtain effective feedback or regularization signals to enhance the learning\/inference process. 
In this demo, we will show two applications of dual learning: machine translation and speech synthesis.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Fluency Boost Learning and Inference for Neural Grammar Checker\"]\r\n<strong>Presenter: <\/strong>Tao Ge, Microsoft Research\r\n\r\nNeural sequence-to-sequence (seq2seq) approaches have proven to be successful in grammatical error correction (GEC). Based on the seq2seq framework, we propose a novel fluency boost learning and inference mechanism. Fluency boost learning generates diverse error-corrected sentence pairs during training, enabling the error correction model to learn how to improve a sentence's fluency from more instances, while fluency boost inference allows the model to correct a sentence incrementally with multiple inference steps. Combining fluency boost learning and inference with conventional seq2seq models, our approach achieves state-of-the-art performance on GEC benchmarks.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"OneOCR For Digital Transformation\"]\r\n<strong>Presenter: <\/strong>Qiang Huo, Microsoft Research\r\n\r\nAt Microsoft, we have been developing a new-generation OCR engine (aka OneOCR), which can detect both printed and handwritten text in an image captured by a camera or mobile phone, and recognize the detected text for follow-up actions. Our unified OneOCR engine can recognize mixed printed and handwritten English text lines with arbitrary orientations (even flipped), significantly outperforming other leading industrial OCR engines on a wide range of application scenarios. 
Empowered by the OneOCR engine, the <a href=\"https:\/\/docs.microsoft.com\/en-us\/azure\/cognitive-services\/computer-vision\/concept-recognizing-text#read-api\">Computer Vision Read<\/a> capability and the <a href=\"https:\/\/azure.microsoft.com\/en-us\/services\/search\/\">Cognitive Search capability of Azure Search<\/a> are generally available, and a <a href=\"https:\/\/azure.microsoft.com\/en-us\/services\/cognitive-services\/form-recognizer\/\">Form Recognizer<\/a> with <a href=\"https:\/\/docs.microsoft.com\/en-us\/azure\/cognitive-services\/form-recognizer\/quickstarts\/python-receipts\">Receipt Understanding<\/a> capability is available for preview, all in Azure Cognitive Services, which can power enterprise workflows and Robotic Process Automation (RPA) to spur digital transformation. In this presentation, I will demonstrate the capabilities of Microsoft\u2019s latest OneOCR engine, highlight its core component technologies, and explain the roadmap ahead.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Spreadsheet Intelligence for Ideas in Excel\"]\r\n<strong>Presenter:<\/strong> Shi Han, Microsoft Research\r\n\r\nIdeas in Excel aims at one-click intelligence: when a user clicks the Ideas button on the Home tab of Excel, the intelligent service empowers the user to understand their data via automatic recommendation of visual summaries and interesting patterns. The user can then insert the recommendations into the spreadsheet to support further analysis, or use them directly as analysis results. Enabling such one-click intelligence poses underlying technical challenges. At the Data, Knowledge and Intelligence group of Microsoft Research Asia, we have conducted long-term research on spreadsheet intelligence and automated insights. Through close collaboration with Excel product teams, we transferred a suite of technologies and shipped Ideas in Excel together. 
In this demo presentation, we will show this intelligent feature and introduce the corresponding technologies.\r\n[\/panel]\r\n\r\n[\/accordion]\r\n<h2>Technology Showcase by Academic Collaborators<\/h2>\r\n[accordion]\r\n\r\n[panel header=\"3D Caricature Generation from Real Face Images\"]\r\n<strong>Presenter: <\/strong>Yucheol Jung, Wonjong Jang, and Seungyong Lee, POSTECH\r\n\r\nA 3D caricature can be defined as a 3D mesh with cartoon-style shape exaggeration of a face. We present a novel deep-learning-based framework that generates a 3D caricature for a given real face image. Our approach exploits 3D geometry information in the caricature generation process and produces more convincing 3D shape exaggerations than 2D caricature-based approaches.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"A Co-Training Method towards Machine Reading Comprehension\"]\r\n<strong>Presenter: <\/strong> Minlie Huang, Tsinghua University\r\n\r\nA Co-Training Method towards Machine Reading Comprehension\r\n\r\n[\/panel]\r\n\r\n[panel header=\"A Method for Controlling Human Hearing by Editing the Frequency of the Sound in Real Time\"]\r\n<strong>Presenter: <\/strong> Hiroki Watanabe, Hokkaido University\r\n\r\nA Method for Controlling Human Hearing by Editing the Frequency of the Sound in Real Time\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Abstractive Summarization of Reddit Posts with Multi-level Memory Networks\"]\r\n<strong>Presenter: <\/strong>Gunhee Kim, Seoul National University\r\n\r\nWe address the problem of abstractive summarization in two directions: proposing a novel dataset and a new model. First, we collect the Reddit TIFU dataset, consisting of 120K posts from the online discussion forum Reddit. We use such informal crowd-generated posts as the text source, in contrast with existing datasets that mostly use formal documents, such as news articles, as the source. 
Thus, our dataset suffers less from biases in which key sentences are usually located at the beginning of the text and favorable summary candidates already appear in the text in similar forms. Second, we propose a novel abstractive summarization model named multi-level memory networks (MMN), equipped with multi-level memory to store the information of text from different levels of abstraction. With quantitative evaluation and user studies via Amazon Mechanical Turk, we show that the Reddit TIFU dataset is highly abstractive and that the MMN outperforms state-of-the-art summarization models.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Adaptive Graph Structure Learning for Image Sentence Matching\"]\r\n<strong>Presenter: <\/strong> Tianzhu Zhang, University of Science and Technology of China\r\n\r\nWe adapt the attention mechanism to represent visual and semantic elements.\r\n\r\nWe adaptively construct graphs and update the features for objects and words, making good use of both the intra-modality and inter-modality relationships.\r\n\r\nWe consider the structural information across different graphs by proposing a constraint on the semantic elements, forcing each semantic element to align with its corresponding visual element.\r\n\r\nThe proposed model obtains promising results on the Flickr30K and MS-COCO datasets.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Adversarial Attacks and Defenses in Deep Learning\"]\r\n<strong>Presenter: <\/strong> Yinpeng Dong, Tsinghua University\r\n\r\nAdversarial Attacks and Defenses in Deep Learning\r\n\r\n[\/panel]\r\n\r\n[panel header=\"AI+VIS: Automated Visualization Production\"]\r\n<strong>Presenter: <\/strong>Huamin Qu, The Hong Kong University of Science and Technology\r\n\r\nExisting visualization designs are often created manually and require substantial human effort. How can we apply deep learning techniques to automatically generate visualization products? 
We report two recent advances in this direction:\r\n\r\nAutomated Graph Drawing: We propose a graph-LSTM-based model to directly generate graph drawings with desirable visual properties similar to the training drawings, without requiring users to tune algorithm-specific parameters.\r\n\r\nAutomated Design of Timeline Infographics: We contribute an end-to-end approach to automatically extract an extensible template from a bitmap timeline image. The output can be used to generate new timelines with updated data.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Blockchain-Enabled Incentive and Trading Mechanism Design for AIoT Service Platform\"]\r\n<strong>Presenter: <\/strong> Ai-Chun Pang, National Taiwan University\r\n\r\nEnsure data effectiveness via blockchain technology so as to preserve data properties such as immutability and credibility throughout the transaction process.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Bypassing Defense Methods for Neural Network Backdoor\"]\r\n<strong>Presenter: <\/strong> Sangwoo Ji and Jong Kim, POSTECH\r\n\r\nBin Zhu, Microsoft Research\r\n\r\nBypass two backdoor detection methods: suspicious data instance detection and backdoor trigger detection.\r\n[\/panel]\r\n\r\n[panel header=\"Can Kernel Networking Become Fast Enough?\"]\r\n<strong>Presenter: <\/strong>Chuck Yoo, Korea University\r\n<ul>\r\n \t<li>Existing network optimizations suffer from poor stability, low resource efficiency, and a need for API changes<\/li>\r\n \t<li>Solution: Kernel-based optimization for high-performance networking<\/li>\r\n \t<li>L3 forwarding achieves performance similar to DPDK<\/li>\r\n \t<li>A virtual switch achieves 67.5% of the performance of DPDK-OVS and three times greater resource efficiency<\/li>\r\n<\/ul>\r\n[\/panel]\r\n\r\n[panel header=\"CDPN: Coordinates-Based Disentangled Pose Network for Real-Time RGB-Based 6-DoF Object Pose Estimation\"]\r\n<strong>Presenter: <\/strong> Xiangyang Ji, Tsinghua University\r\n\r\nCDPN: Coordinates-Based 
Disentangled Pose Network for Real-Time RGB-Based 6-DoF Object Pose Estimation\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Commonsense Reasoning with Structured Knowledge\"]\r\n<strong>Presenter: <\/strong> Hongming Zhang, The Hong Kong University of Science and Technology\r\n\r\nUnderstanding human language requires complex commonsense knowledge. However, existing large-scale knowledge graphs mainly focus on knowledge about entities while ignoring commonsense knowledge about activities, states, or events, which describe how entities or things act in the real world. To fill this gap, we develop ASER (activities, states, events, and their relations), a large-scale eventuality knowledge graph extracted from more than 11 billion tokens of unstructured textual data. ASER contains 15 relation types belonging to five categories, 194 million unique eventualities, and 64 million unique edges among them. Both human and extrinsic evaluations demonstrate the quality and effectiveness of ASER.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Complex Correlation Modeling and Analysis Framework for Incomplete, Multimodal and Dynamic Data\"]\r\n<strong>Presenter: <\/strong> Zizhao Zhang, Tsinghua University\r\n\r\nA well-constructed hypergraph structure can represent data correlations accurately, leading to better performance.\r\nHow can we construct a good hypergraph to fit complex data?\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Concordia: Distributed Shared Memory with In-Network Cache Coherence\"]\r\n<strong>Presenter: <\/strong> Youyou Lu, Tsinghua University\r\n\r\nConcordia divides coherence responsibility between the switch and servers. The switch serializes conflicting requests and forwards them to the correct destinations via a lock-check-forward pipeline. 
Servers execute requester-driven coherence control to reach coherence and transition states.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Continual Learning with Dynamic Network Expansion\"]\r\n<strong>Presenter: <\/strong> Sung Ju Hwang, KAIST\r\n<ul>\r\n \t<li>Perform effective knowledge transfer from earlier tasks to later tasks.<\/li>\r\n \t<li>Prevent catastrophic forgetting, where the earlier task performance gets negatively affected by semantic drift of the representations as the model adapts to later tasks.<\/li>\r\n \t<li>Obtain maximal performance with minimal increase in the network capacity.<\/li>\r\n<\/ul>\r\n[\/panel]\r\n\r\n[panel header=\"Counting Hypergraph Colorings in the Local Lemma Regime\"]\r\n<strong>Presenter: <\/strong> Chao Liao, Shanghai Jiao Tong University\r\n\r\nCounting Hypergraph Colorings in the Local Lemma Regime\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Cross-Lingual Visual Grounding and Multimodal Machine Translation\"]\r\n<strong>Presenter: <\/strong> Chenhui Chu, Osaka University\r\n\r\nCross-Lingual Visual Grounding and Multimodal Machine Translation\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Curiosity-Bottleneck: Exploration by Distilling Task-Specific Novelty\"]\r\n<strong>Presenter: <\/strong>Gunhee Kim, Seoul National University\r\n\r\nExploration based on state novelty has brought great success in challenging reinforcement learning problems with sparse rewards. However, existing novelty-based strategies become inefficient in real-world problems where the observation contains not only the task-dependent state novelty of interest but also task-irrelevant information that should be ignored. We introduce an information-theoretic exploration strategy named Curiosity-Bottleneck that distills task-relevant information from the observation. Based on the information bottleneck principle, our exploration bonus is quantified as the compressiveness of the observation with respect to the learned representation of a compressive value network. 
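As an illustration of the bonus described above, the compressiveness of an observation can be scored by how far its latent encoding must deviate from a fixed prior. The following is a minimal NumPy sketch, assuming a diagonal Gaussian encoder and a standard-normal prior; the function and variable names are illustrative, not the authors' implementation:

```python
import numpy as np

def kl_bonus(mu, log_var):
    """Exploration bonus as the KL divergence KL(N(mu, sigma^2) || N(0, 1)),
    summed over latent dimensions. Intuitively, novel task-relevant
    observations compress poorly (encode far from the prior) and earn a
    high bonus, while familiar ones encode near the prior."""
    var = np.exp(log_var)
    # Closed-form KL between a diagonal Gaussian and the standard normal.
    return 0.5 * np.sum(var + mu ** 2 - 1.0 - log_var, axis=-1)

# A familiar observation whose encoding matches the prior -> zero bonus.
familiar = kl_bonus(np.zeros(8), np.zeros(8))
# A novel observation whose encoding sits far from the prior -> large bonus.
novel = kl_bonus(np.full(8, 2.0), np.zeros(8))
```

In practice the encoder's `mu` and `log_var` would come from the compressive value network, trained with the same KL term as a regularizer so that task-irrelevant detail is squeezed out of the representation.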
With extensive experiments on static image classification, grid-world, and three hard-exploration Atari games, we show that Curiosity-Bottleneck learns an effective exploration strategy by robustly measuring state novelty in distractive environments where state-of-the-art exploration methods often degenerate.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Deep Reinforcement Learning for the Transfer from Simulation to the Real World with Uncertainties for AI Curling Robot System\"]\r\n<strong>Presenter: <\/strong>Dong-Ok Won and Seong-Whan Lee, Korea University\r\n\r\nRecently, deep reinforcement learning (DRL) has enabled real-world applications such as robotics. Here we teach a robot to succeed in curling (an Olympic discipline), a highly complex real-world application in which a robot needs to carefully learn to play the game on the slippery ice sheet in order to compete well against human opponents. This scenario encompasses fundamental challenges: uncertainty, non-stationarity, infinite state spaces, and, most importantly, scarce data. One fundamental objective of this study is thus to better understand and model the transfer from simulation to real-world scenarios with uncertainty. We demonstrate our proposed framework and show videos, experiments, and statistics about Curly, our AI curling robot, being tested on a real curling ice sheet. 
Curly performed well both in classical game situations and when interacting with human opponents.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Deep Text Generation: Conversation and Application\"]\r\n<strong>Presenter: <\/strong> Rui Yan, Peking University\r\n\r\nDeep Text Generation: Conversation and Application\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Development of 3D capsule endoscopic system\"]\r\n<strong>Presenter: <\/strong> Ryo Furukawa, Hiroshima City University\r\n\r\nDevelopment of 3D capsule endoscopic system\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Development of automatic Labanotation estimation system from video using Deep Learning\"]\r\n<strong>Presenter: <\/strong> Hiroshi Kawasaki, Kyushu University\r\n\r\nOur project aims to research human representation and the understanding of human motion using a vision-based approach, and to develop new applications.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Dissecting and Accelerating Neural Network via Graph Instrumentation\"]\r\n<strong>Presenter: <\/strong> Jingwen Leng, Shanghai Jiao Tong University\r\n\r\nThe proposed graph instrumentation framework can observe and modify neural networks using user-defined analysis code without changes to the source code.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Distant Supervised Domain-Specific Knowledge Base Construction and Population\"]\r\n<strong>Presenter: <\/strong>Lei Chen, The Hong Kong University of Science and Technology\r\n\r\nOur goal in domain-specific KB construction:\r\n<ul>\r\n \t<li>Entity extraction, entity typing, and relation extraction related to the target domain.<\/li>\r\n \t<li>Training data generation based on distant supervision without human annotation.<\/li>\r\n<\/ul>\r\n[\/panel]\r\n\r\n[panel header=\"Efficient and Effective Sparse DNNs with Bank-Balanced Sparsity\"]\r\n<strong>Presenter: <\/strong> Shijie Cao, Harbin Institute of Technology\r\n\r\nEfficient and Effective Sparse DNNs with Bank-Balanced Sparsity\r\n\r\n[\/panel]\r\n\r\n[panel 
header=\"Efficient Deep Neural Networks for Realistic Noise Removal\"]\r\n<strong>Presenter: <\/strong> Huanjing Yue, Tianjin University\r\n\r\nWe propose an end-to-end noise estimation and removal network, in which the estimated noise map is concatenated, with weighting, with the noisy input to improve the denoising performance.\r\n\r\nThe proposed noise estimation network takes advantage of the Bayer-pattern prior of the noise maps, which not only improves the estimation accuracy but also reduces the memory cost.\r\n\r\nWe propose an RSD block to take full advantage of the spatial and channel correlations of realistic noise. The ablation study demonstrates the effectiveness of the proposed module.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Emoji-Powered Representation Learning for Cross-Lingual Sentiment Analysis\"]\r\n<strong>Presenter: <\/strong> Zhenpeng Chen, Peking University\r\n\r\nEmoji-Powered Representation Learning for Cross-Lingual Sentiment Analysis\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Erebus: A Stealthier Partitioning Attack against Bitcoin Peer-to-Peer Network\"]\r\n<strong>Presenter: <\/strong> Muoi Tran, National University of Singapore\r\n\r\nWe present the Erebus attack, which allows large malicious Internet Service Providers (ISPs) to isolate any targeted public Bitcoin node from the Bitcoin peer-to-peer network. The Erebus attack does not require routing manipulation (e.g., BGP hijacks), and hence it is virtually undetectable to any control-plane and even typical data-plane detectors.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Explaining Word Embeddings via Disentangled Representations\"]\r\n<strong>Presenter: <\/strong> Shou-de Lin, National Taiwan University\r\n\r\nWe propose transforming word embeddings into interpretable representations that disentangle explainable factors.\r\n\r\nExamples of factors: a) Topical factors: food, location, animal, etc. 
b) Part-of-Speech factors: noun, adj, verb, etc.\r\n\r\nWe define and propose four desirable properties of our disentangled word vectors: a) Modularity, b) Compactness, c) Explicitness, d) Feature preservation.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Free-form Video Inpainting with 3D Gated Conv, TPD, and LGTSM\"]\r\n<strong>Presenter: <\/strong> Winston Hsu, National Taiwan University\r\n\r\nFree-form Video Inpainting with 3D Gated Conv, TPD, and LGTSM\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Fluid: A Blockchain based Framework for Crowdsourcing\"]\r\n<strong>Presenter: <\/strong>Lei Chen, The Hong Kong University of Science and Technology\r\n\r\nFluid: A Blockchain based Framework for Crowdsourcing\r\n\r\n[\/panel]\r\n\r\n[panel header=\"FLUID: Flexible User Interface Distribution for Ubiquitous Multi-device Interaction\"]\r\n<strong>Presenter: <\/strong> Insik Shin, KAIST\r\n\r\nKey idea: separation between app logic &amp; UI parts\r\n1) Distributing target UI objects to remote devices and rendering them\r\n2) Giving the illusion that the app logic and UI objects are in the same process\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Fuzzing with Interleaving Coverage for Multi-threading Program\"]\r\n<strong>Presenter: <\/strong> Youngjoo Ko and Jong Kim, POSTECH\r\n\r\nBin Zhu, Microsoft Research\r\n\r\nIncrease the performance of fuzzing to discover more bugs in multi-threaded programs using interleaving coverage.\r\n[\/panel]\r\n\r\n[panel header=\"Generative Model-based Speech Enhancement for Speech Recognition\"]\r\n<strong>Presenter: <\/strong>Jinyoung Lee and Hong-Goo Kang, Yonsei University\r\n<ul>\r\n \t<li>Remove ambient noise to improve automatic speech recognition performance<\/li>\r\n \t<li>Overcome the problems of conventional masking-based speech enhancement algorithms, e.g. 
speech signal distortion<\/li>\r\n \t<li>Propose a generative and adversarial model-based approach that effectively utilizes the spectro-temporal characteristics of speech and noise components<\/li>\r\n<\/ul>\r\n[\/panel]\r\n\r\n[panel header=\"Global-Local Temporal Representations For Video Person Re-Identification\"]\r\n<strong>Presenter: <\/strong> Shiliang Zhang, Peking University\r\n<ul>\r\n \t<li>Propose Dilated Temporal Convolution (DTC) to learn short-term temporal cues<\/li>\r\n \t<li>Propose Temporal Self Attention (TSA) to learn long-term temporal cues<\/li>\r\n \t<li>DTC and TSA learn complementary temporal features<\/li>\r\n<\/ul>\r\n[\/panel]\r\n\r\n[panel header=\"Gradient Descent Finds Global Minima of DNNs\"]\r\n<strong>Presenter: <\/strong> Liwei Wang, Peking University\r\n\r\nGradient Descent Finds Global Minima of DNNs\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Graph Neural Networks for 3D Face Anti-spoofing\"]\r\n<strong>Presenter: <\/strong> Wei Hu and Gusi Te, Peking University\r\n\r\nThis project aims to explore emerging graph neural networks (GNNs) based on texture plus depth features to address the problem of 3D face anti-spoofing. Various spoofing attacks, which present fake or copied facial evidence to obtain valid authentication, are growing. While anti-spoofing\r\ntechniques using 2D facial data have matured, 3D face anti-spoofing has not been studied much, leaving advanced spoofing techniques such as 3D masking at large. 
Hence, we propose to address this problem based on texture plus depth cues acquired from RGBD cameras, within the framework of GNNs.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Graph-structured Knowledge Base Management and Applications\"]\r\n<strong>Presenter: <\/strong> Hongzhi Wang, Harbin Institute of Technology\r\n\r\nGraph-structured Knowledge Base Management and Applications\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Home Location Selection with Reachability\"]\r\n<strong>Presenter: <\/strong> Yingcai Wu, Zhejiang University\r\n\r\nThis study characterizes the problem of reachability-centric multi-criteria decision-making for choosing ideal homes. The system can also be adopted in\r\nother location selection scenarios in which the reachability of locations is considered (e.g., selecting a location for a convenience store).\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Identifying Structures in Spreadsheets\"]\r\n<strong>Presenter: <\/strong> Wensheng Dou, Chinese Academy of Sciences\r\n\r\nIdentifying Structures in Spreadsheets\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Image-to-Image Translation via Group-wise Deep Whitening-and-Coloring Transformation\"]\r\n<strong>Presenter: <\/strong>Jaegul Choo, Korea University\r\n\r\nRecently, unsupervised exemplar-based image-to-image translation has accomplished substantial advancements. In order to transfer the information from an exemplar to an input image, existing methods often use a normalization technique, e.g., adaptive instance normalization, that controls the channel-wise statistics of an input activation map at a particular layer, such as the mean and the variance. Meanwhile, style transfer, which by nature addresses a task similar to image translation, has demonstrated superior performance by using higher-order statistics, such as the covariance among channels, to represent a style. 
However, applying this approach to image translation is computationally intensive and error-prone due to its expensive time complexity and non-trivial backpropagation. In response, this paper proposes an end-to-end approach tailored for image translation that efficiently approximates this transformation with our novel regularization methods. We further extend our approach to a group-wise form for memory and time efficiency as well as image quality. Extensive qualitative and quantitative experiments demonstrate that our proposed method is fast, both in training and inference, and highly effective in reflecting the style of an exemplar.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Immersive Biology - An Interactive Microscope for Informal Biology Education\"]\r\n<strong>Presenter: <\/strong>Jaewoo Jung, Kyungwon Lee and Seung Ah Lee, Yonsei University\r\n\r\nWe developed a new hybrid digital-biological system that provides interactive and immersive experiences between humans and biological objects for applications in life science education and research. 
The scope of this work includes:\r\n<ul>\r\n \t<li>Construction of an automated optical stimulation microscope, which uses light to both image and interface with light-sensitive cells.<\/li>\r\n \t<li>Use of human interaction modalities to convert humans\u2019 natural input into stimuli for the microscopic biological objects.<\/li>\r\n \t<li>A comparative user study as a public installation that evaluated user behaviors, user engagement, and learning outcomes.<\/li>\r\n<\/ul>\r\nWe expect that this platform will transform microscopes from a passive observation tool into an active interaction medium, assisting scientific research, life science education, and clinical interventions.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Improving Join Reorderability with Compensation Operators\"]\r\n<strong>Presenter: <\/strong> TaiNing Wang and Chee-Yong Chan, National University of Singapore\r\n\r\nImproving Join Reorderability with Compensation Operators\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Improving the Performance of Video Analytics Using WIFI Signal\"]\r\n<strong>Presenter: <\/strong>Hai Truong and Rajesh Krishna Balan, Singapore Management University\r\n\r\nAutomatic analysis of the behaviour of large groups of people is an important requirement for a large class of applications such as crowd management, traffic control, and surveillance. For example, attributes such as the number of people, how they are distributed, which groups they belong to, and what trajectories they are taking can be used to optimize the layout of a mall to increase overall revenue. A common way to obtain these attributes is to use video camera feeds coupled with advanced video analytics solutions. However, solely utilizing video feeds is challenging in high people-density areas, such as a typical mall in Asia, as the high people density significantly reduces the effectiveness of video analytics due to factors such as occlusion. 
In this work, we propose to combine video feeds with WiFi data to achieve better classification results for the number of people in an area and the trajectories of those people. In particular, we believe that our approach will combine the strengths of the two different sensors, WiFi and video, while reducing the weaknesses of each sensor. This work started fairly recently, and we will present our thoughts and results to date.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Intelligent Action Analytics\"]\r\n<strong>Presenter: <\/strong> Jiaying Liu, Peking University\r\n\r\nIntelligent Action Analytics\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Interactive Methods to Improve Data Quality\"]\r\n<strong>Presenter: <\/strong> Changjian Chen, Tsinghua University\r\n\r\nInteractive Methods to Improve Data Quality\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Inter-learner shadowing framework for comprehensibility-based assessment of learners' speech\"]\r\n<strong>Presenter: <\/strong> Nobuaki Minematsu, University of Tokyo\r\n\r\nInter-learner shadowing framework for comprehensibility-based assessment of learners' speech\r\n\r\n[\/panel]\r\n\r\n[panel header=\"IoTcube: An Open Platform for Feedback based Protocol Fuzzing\"]\r\n<strong>Presenter: <\/strong>Heejo Lee, Korea University\r\n\r\nAn open platform for feedback-based fuzzing improves its testing performance using two factors: binary feedback and user feedback.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Learning Multi-label Feature for Fine-Grained Food Recognition\"]\r\n<strong>Presenter: <\/strong> Xueming Qian, Xi'an Jiaotong University\r\n\r\n1. We proposed the Attention Fusion Network (AFN). It attends to discriminative food regions and generates feature embeddings that are jointly aware of ingredients and food.\r\n\r\n2. We proposed the balance focal loss (BFL) to enhance the joint learning of ingredients and food and to optimize the feature expression ability for multi-label ingredients.\r\n\r\n3. 
The effectiveness is demonstrated through comparative experiments. In particular, the balance focal loss improves the Micro-F1, Macro-F1, and Accuracy of ingredients by 5.76%, 12.62%, and 5.78%, respectively.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"MAP Inference for Customized Determinantal Point Processes via Maximum Inner Product Search\"]\r\n<strong>Presenter: <\/strong> Insu Han, KAIST\r\n\r\nMAP Inference for Customized Determinantal Point Processes via Maximum Inner Product Search\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Minimizing Network Footprint in Distributed Deep Learning\"]\r\n<strong>Presenter: <\/strong> Hong Xu, City University of Hong Kong\r\n\r\nMinimizing Network Footprint in Distributed Deep Learning\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Multilingual End-to-End Speech Translation\"]\r\n<strong>Presenter: <\/strong> Hirofumi Inaguma, Kyoto University\r\n\r\nDirectly translate source speech into target languages with a single sequence-to-sequence (S2S) model\r\n<ul>\r\n \t<li>Many-to-many (M2M)<\/li>\r\n \t<li>One-to-many (O2M)<\/li>\r\n<\/ul>\r\nOutperformed the bilingual end-to-end speech translation (E2E-ST) models\r\n\r\nShared representations obtained from multilingual E2E-ST were more effective than those from the bilingual models for transfer learning to a very low-resource ST task: Mboshi-&gt;French (4.4h)\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Multi-marginal Wasserstein GAN\"]\r\n<strong>Presenter: <\/strong>Mingkui Tan, South China University of Technology\r\n<ul>\r\n \t<li>We propose a novel MWGAN to optimize the multi-marginal distance among different domains.<\/li>\r\n \t<li>We define and analyze the generalization performance of MWGAN for the multiple-domain translation task.<\/li>\r\n \t<li>Extensive experiments demonstrate the effectiveness of MWGAN on balanced and imbalanced translation tasks.<\/li>\r\n<\/ul>\r\n[\/panel]\r\n\r\n[panel header=\"NAT: Neural Architecture Transformer for Accurate and Compact 
Architectures\"]\r\n<strong>Presenter: <\/strong>Mingkui Tan, South China University of Technology\r\n<ul>\r\n \t<li>Propose a novel Neural Architecture Transformer (NAT) to optimize any given architecture.<\/li>\r\n \t<li>Cast the problem as a Markov Decision Process.<\/li>\r\n \t<li>Employ a Graph Convolution Network to learn the policy.<\/li>\r\n<\/ul>\r\n[\/panel]\r\n\r\n[panel header=\"NFD: Using Behavior Models to Develop Cross-Platform NFs\"]\r\n<strong>Presenter: <\/strong> Wenfei Wu, Tsinghua University\r\n\r\nWe propose a new NF development framework named NFD, which consists of an NF abstraction layer for developing NF behavior models and a compiler that adapts NF models to specific runtime environments.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Non-factoid Question Answering for Text and Video\"]\r\n<strong>Presenter: <\/strong>Seung-won Hwang, Yonsei University\r\n\r\nQuestion Answering (QA) has mostly been studied in the factoid context, providing concise facts. In contrast, we study non-factoid QA, extending coverage to more realistic questions, such as how- or why-questions with long answers, drawn from long texts or videos. This demo and poster address the following:\r\n<ul>\r\n \t<li>Non-factoid QA for text, combining the complementary strengths of representation- and interaction-focused approaches [EMNLP19]. Extending this task to video brings both opportunity and challenge, arising from multimodality and the absence of pre-divided answer candidates (e.g. 
paragraphs); this is our ongoing MSRA collaboration.<\/li>\r\n \t<li>Human-in-the-loop debugging for QA Demo [SIGIR19]<\/li>\r\n<\/ul>\r\n[\/panel]\r\n\r\n[panel header=\"NPA: Neural News Recommendation with Personalized Attention\"]\r\n<strong>Presenter: <\/strong> Chuhan Wu, Tsinghua University\r\n<ul>\r\n \t<li>Different users usually have different interests in news.<\/li>\r\n \t<li>Different users may click the same news article due to different interests.<\/li>\r\n \t<li>We need personalized news and user representations!<\/li>\r\n<\/ul>\r\n[\/panel]\r\n\r\n[panel header=\"Numerical\/quantitative system for common sense natural language processing\"]\r\n<strong>Presenter: <\/strong> Hiroaki Yamane, The University of Tokyo\r\n\r\nWe construct methods for converting contextual language into numerical variables for quantitative\/numerical common sense in natural language processing.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Online Convex Optimization in Non-stationary Environments\"]\r\n<strong>Presenter: <\/strong> Shiyin Lu, Nanjing University\r\n\r\nOnline Convex Optimization in Non-stationary Environments\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Optimizing Quality of Experience (QoE) for Adaptive Bitrate Streaming via Deep Video Analytics\"]\r\n<strong>Presenter: <\/strong>Yonggang Wen, Nanyang Technological University\r\n\r\nQoE depends on multiple families of Influential Factors (IFs), which must be optimized jointly for the best user experience.\r\n\r\nHow can we develop a unified and scalable framework to optimize QoE for multimedia communications in the presence of system dynamics?\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Paraphrasing and Simplification with Lean Vocabulary\"]\r\n<strong>Presenter: <\/strong> Tadashi Nomoto, National Institute of Japanese Literature\r\n\r\nThis work explores the impact of subword representations on paraphrasing and text simplification. 
Experiments found that, when combined with REINFORCE, the subword scheme boosted performance beyond the current state of the art in both paraphrasing and text simplification.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Pick-Carry-Place Household Tasks Using Labanotation for Learning-from-Observation Robots\"]\r\n<strong>Presenter: <\/strong> Jun Takamatsu, Nara Institute of Science and Technology\r\n\r\nPick-Carry-Place Household Tasks Using Labanotation for Learning-from-Observation Robots\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Predicting Future Instance Segmentation with Contextual Pyramid ConvLSTMs\"]\r\n<strong>Presenter: <\/strong>Wei-Shi Zheng, Sun Yat-sen University\r\n\r\nPredicting Future Instance Segmentation\r\n<ul>\r\n \t<li>Given several frames of a video, the task is to predict future instance segmentation before the corresponding frames are observed.<\/li>\r\n \t<li>It is challenging due to the uncertainty in appearance variation caused by object motion, occlusion between objects, and viewpoint changes in videos.<\/li>\r\n<\/ul>\r\n[\/panel]\r\n\r\n[panel header=\"Project Title: Secure and compact elliptic curve cryptosystems\"]\r\n<strong>Presenter: <\/strong>Yaoan Jin and Atsuko Miyaji, Graduate School of Engineering, Osaka University\r\n\r\nWe consider side-channel attacks: any attack based on information, such as timing information or power consumption, gained from the implementation of a cryptosystem.\r\n<ul>\r\n \t<li>Simple Power Analysis (SPA)<\/li>\r\n \t<li>Safe Error Attack<\/li>\r\n<\/ul>\r\n[\/panel]\r\n\r\n[panel header=\"Pruning from Scratch\"]\r\n<strong>Presenter: <\/strong> Hang Su, Tsinghua University\r\n\r\nIn this work, we find that pre-training an over-parameterized model is not necessary for obtaining an efficient pruned structure. 
We propose a novel network pruning pipeline that allows pruning from scratch.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Recent Progress of Handwritten Mathematical Expression Recognition\"]\r\n<strong>Presenter: <\/strong> Jun Du, University of Science and Technology of China\r\n\r\nRecent Progress of Handwritten Mathematical Expression Recognition\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Recurrent Temporal Aggregation Framework for Deep Video Inpainting\"]\r\n<strong>Presenter: <\/strong> Dahun Kim, KAIST\r\n<ul>\r\n \t<li>To remove unwanted objects from a video<\/li>\r\n \t<li>Frame-by-frame image inpainting<\/li>\r\n<\/ul>\r\n[\/panel]\r\n\r\n[panel header=\"Relational Knowledge Distillation\"]\r\n<strong>Presenter: <\/strong>Wonpyo Park, Dongju Kim, and Minsu Cho, POSTECH\r\n\r\nYan Lu, Microsoft Research\r\n\r\nKnowledge distillation aims at transferring knowledge acquired in one model (a teacher) to another model (a student) that is typically smaller. Previous approaches can be expressed as a form of training the student to mimic the output activations of individual data examples represented by the teacher. We introduce a novel approach, dubbed relational knowledge distillation (RKD), that transfers mutual relations of data examples instead. For concrete realizations of RKD, we propose distance-wise and angle-wise distillation losses that penalize structural differences in relations. Experiments conducted on different tasks show that the proposed method improves the educated student models by a significant margin. 
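The distance-wise loss above can be sketched as follows: each model's pairwise-distance matrix is normalized by its mean distance, and the student is penalized (with a Huber loss) for deviating from the teacher's relational structure. This is a minimal NumPy illustration of that idea, not the authors' implementation; the function names and the `delta` parameter are illustrative.

```python
import numpy as np

def pairwise_distances(x):
    """All pairwise Euclidean distances between row embeddings (n x d -> n x n)."""
    diff = x[:, None, :] - x[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def distance_wise_rkd_loss(teacher_emb, student_emb, delta=1.0):
    """Distance-wise relational distillation: match the student's pairwise-distance
    structure to the teacher's, after normalizing each distance matrix by its
    mean off-diagonal distance, with a Huber penalty on the differences."""
    def normalize(d):
        n = d.shape[0]
        mu = d.sum() / (n * (n - 1))  # mean over off-diagonal pairs
        return d / mu

    t = normalize(pairwise_distances(teacher_emb))
    s = normalize(pairwise_distances(student_emb))
    diff = np.abs(t - s)
    huber = np.where(diff <= delta, 0.5 * diff ** 2, delta * (diff - 0.5 * delta))
    return huber.mean()

teacher = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
# Rescaling every embedding preserves the normalized relational structure,
# so a student that keeps relative distances intact incurs (near-)zero loss.
loss_same_structure = distance_wise_rkd_loss(teacher, 3.0 * teacher)
loss_distorted = distance_wise_rkd_loss(
    teacher, np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 1.0]]))
```

Note that, unlike pointwise activation matching, this objective is invariant to a global rescaling of the student's embedding space; only the relations among examples are constrained.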
In particular, for metric learning, it allows students to outperform their teachers, achieving state-of-the-art results on standard benchmark datasets.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Research on Deep Learning Framework for Julia\"]\r\n<strong>Presenter: <\/strong> Yu Zhang, Yuxiang Zhang, Yitong Huang, and Xing Guo, University of Science and Technology of China\r\n\r\nResearch on Deep Learning Framework for Julia\r\n\r\n[\/panel]\r\n\r\n[panel header=\"SARA: Self-Replay Augmented Record and Replay for Android in Industrial Cases\"]\r\n<strong>Presenter: <\/strong> Ting Liu, Xi'an Jiaotong University\r\n\r\nSARA: Self-Replay Augmented Record and Replay for Android in Industrial Cases\r\n\r\n[\/panel]\r\n\r\n[panel header=\"secGAN: A Cycle-Consistent GAN for Securely-Recoverable Video Transformation\"]\r\n<strong>Presenter: <\/strong> Fengyuan Xu, Nanjing University\r\n\r\nVideo transformation needs to meet new requirements in actual use, such as privacy protection in surveillance scenarios:\r\n<ul>\r\n \t<li>The transformed video can be restored to the original.<\/li>\r\n \t<li>The transformed video can only be restored by the authorized party.<\/li>\r\n<\/ul>\r\nWe need a unified translation style and a unique steganography scheme.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"StyleMe: An AI Fashion Consultant for Personal Shopping and Style Advice\"]\r\n<strong>Presenter: <\/strong> Shintami Chusnul Hidayati, Institut Teknologi Sepuluh Nopember; Wen-Huang Cheng, National Chiao Tung University; Jianlong Fu, Microsoft Research\r\n\r\nStyleMe: An AI Fashion Consultant for Personal Shopping and Style Advice\r\n\r\n[\/panel]\r\n\r\n[panel header=\"System support for designing efficient gradient compression algorithms for distributed DNN training\"]\r\n<strong>Presenter: <\/strong> Cheng Li, University of Science and Technology of China\r\n\r\nSystem support for designing efficient gradient compression algorithms for distributed DNN 
training\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Temporal Cause and Effect Localization on Car Crash Videos Via Multi-Task Neural Architecture Search\"]\r\n<strong>Presenter: <\/strong>Tackgeun You, POSTECH, and Bohyung Han, Seoul National University\r\n<ul>\r\n \t<li>Introduce a benchmark for temporal cause and effect localization on car crash videos.<\/li>\r\n \t<li>Propose a multi-task baseline for simultaneously conducting temporal cause and effect localization.<\/li>\r\n \t<li>Propose a multi-task neural architecture search that decides whether to share or separate building blocks.<\/li>\r\n<\/ul>\r\n[\/panel]\r\n\r\n[panel header=\"Towards a Deep and Unified Understanding of Deep Neural Models in NLP\"]\r\n<strong>Presenter: <\/strong> Chaoyu Guan, Shanghai Jiao Tong University\r\n\r\nA unified information-based measure: quantifies the information of each input word that is encoded in an intermediate layer of a deep NLP model.\r\n\r\nThe information-based measure serves as a tool for\r\n<ul>\r\n \t<li>Evaluating different explanation methods.<\/li>\r\n \t<li>Explaining different deep NLP models.<\/li>\r\n<\/ul>\r\nThis measure enriches the capability of explaining DNNs.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Towards Complex Text-to-SQL in Cross-Domain Database with Intermediate Representation\"]\r\n<strong>Presenter: <\/strong> Ting Liu, Xi'an Jiaotong University\r\n\r\nTowards Complex Text-to-SQL in Cross-Domain Database with Intermediate Representation\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Vibration-Mediated Sensing Techniques for Tangible Interaction\"]\r\n<strong>Presenter:<\/strong> Seungmoon Choi and Seungjae Oh, POSTECH\r\n<ul>\r\n \t<li>Recognize contact finger(s) on any rigid surface by decoding transmitted frequencies<\/li>\r\n \t<li>Identify a grasped object by visualizing the propagation dynamics of vibration<\/li>\r\n<\/ul>\r\n[\/panel]\r\n\r\n[panel header=\"Video Generation from Natural Language by Decomposing the Components of Video: Background, Object, and 
Action\"]\r\n<strong>Presenter: <\/strong> Kibeom Hong and Hyeran Byun, Yonsei University\r\n<ul>\r\n \t<li style=\"list-style-type: none\">\r\n<ul>\r\n \t<li>Video can be created by separating Background and Foreground, and Foreground can be divided into Object and Action.<\/li>\r\n \t<li>We can get background and foreground information for video generation from text.<\/li>\r\n \t<li>In the Image domain, previous works[1,2,3] have studied image generation with text extensively, [4,5,6] expanded this idea to video domain.<\/li>\r\n \t<li>In this work, we want to create a video with three components in order to control more realistic and fine-grained parts.<\/li>\r\n<\/ul>\r\n<\/li>\r\n<\/ul>\r\n[\/panel]\r\n\r\n[panel header=\"Video Dialog via Progressive Inference and Cross-Transformer\"]\r\n<strong>Presenter: <\/strong> Zhou Zhao, Zhejiang University\r\n\r\nVideo dialog is a new and challenging task, which requires the agent to answer questions combining video information with dialog history. And different from single-turn video question answering, the additional dialog history is important for video dialog, which often includes contextual information for the question. Existing visual dialog methods mainly use RNN to encode the dialog history as a single vector representation, which might be rough and straightforward. Some more advanced methods utilize hierarchical structure, attention and memory mechanisms, which still lack an explicit reasoning process. In this paper, we introduce a novel progressive inference mechanism for video dialog, which progressively updates query information based on dialog history and video content until the agent think the information is sufficient and unambiguous. In order to tackle the multi- modal fusion problem, we propose a cross-transformer module, which could learn more fine-grained and comprehensive interactions both inside and between the modalities. 
Besides answer generation, we also consider question generation, which is more challenging but significant for a complete video dialog system. We evaluate our method on two large-scale datasets, and extensive experiments show the effectiveness of our method.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Widar 3.0: Zero-Effort Cross-Domain Gesture Recognition with Wi-Fi\"]\r\n<strong>Presenter: <\/strong> Zheng Yang, Tsinghua University\r\n\r\nWidar 3.0: Zero-Effort Cross-Domain Gesture Recognition with Wi-Fi\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Your Tweets Reveal What You Like: Introducing Cross-media Content Information into Multi-domain Recommendation\"]\r\n<strong>Presenter: <\/strong> Min Zhang, Tsinghua University\r\n\r\nThe key to solving this problem is better user profiling.\r\n\r\nWhat about off-topic features from other platforms, such as tweets?\r\n<ul>\r\n \t<li>On-topic features are helpful in understanding users\u2019 interests and preferences.<\/li>\r\n \t<li>Off-topic features can describe users as well.<\/li>\r\n<\/ul>\r\nWe will try to introduce these off-topic features (tweets) into different rating prediction algorithms.\r\n\r\n[\/panel]\r\n\r\n[\/accordion]"},{"id":5,"name":"Information","content":"<h3><img class=\"alignnone wp-image-293876\" style=\"vertical-align: top\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/icon-address.png\" alt=\"21ccc-icon-5\" width=\"30\" height=\"30\" \/>\u00a0<strong>Microsoft Address<\/strong><\/h3>\r\nVenue: Tower 1-1F, No. 
5 Danling Street, Haidian District, Beijing, China\r\n\r\n\u5730\u5740\uff1a\u4e2d\u56fd\u5317\u4eac\u6d77\u6dc0\u533a\u4e39\u68f1\u88575\u53f7\u5fae\u8f6f\u5927\u53a61\u53f7\u697c"},{"id":6,"name":"Image Gallery","content":"[gallery size=\"medium\" columns=\"2\" ids=\"624462,624492,624489,624486,624483,624480,624477,624474,624471,624468,624465,622980,622983,622986,622989,622992,622995,622998,623001,623004,623007,623010,623013,623016,623019,623031,623034,623037,623040,623046,623049,623052,623055,623058,623061,623067,623070,623073,623076,623079,623082,623085,623088,623091,623094,623097,623100,623103,623106,623109,623112,623115,623118,623121,623124,623127,623130,623133,623136,623139,623142,623145,623151,623154,623160,623163,623166,623169,623172,623157,623175,623178,623181,623184,623187,623190,623193,623196,623199,623202,623205,623208,623211,623214,623217,623220,623223,623226,623229,623232,623235,623238,623241,623244,623247,623250,623253,623256,623259,623262,623265,623268,623271,623274,623277,623280,623283,623286,623289,623292,623298,623301,623304,623307,623310,623313,623316,623322,623325,623331,623334,623337,623343,623346,623352,623355,623361,623364,623367,623376,623379,623382,623385,623388,623394,623400,623409,623412,623421,623424,623427,623628,623631,624024,624027,624030,624033,624036,624039,624042,624045,624048,624051,624054,624057,624060,624063,624066,624069,624072,624075,624078,624081,624084,624087,624090,624177,624180,624192,624198,624201,624204,624213,624216,624219,624222,624225,624228,624231,624234,624237,624240,624243,624246,624249,623745,623748,623751,623763\"]"}],"msr_startdate":"2019-11-07","msr_enddate":"2019-11-08","msr_event_time":"","msr_location":"Beijing, China","msr_event_link":"","msr_event_recording_link":"","msr_startdate_formatted":"November 7, 2019","msr_register_text":"Watch now","msr_cta_link":"","msr_cta_text":"","msr_cta_bi_name":"","featured_image_thumbnail":"<img width=\"960\" height=\"390\" 
src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/10\/msra-academic-day-2019-banner-4-960x390.jpg\" class=\"img-object-cover\" alt=\"\" decoding=\"async\" loading=\"lazy\" \/>","event_excerpt":"The Academic Day 2019 event brings together the intellectual power of researchers from across Microsoft Research Asia and the academic community to attain a shared understanding of the contemporary ideas and issues facing the field of tech. Together, we will advance the frontier of technology towards an ideal world of computing. Through our Microsoft Research Outreach Programs, Microsoft Research Asia has been actively collaborating with academic institutions to promote and progress further development in computer&hellip;","msr_research_lab":[199560],"related-researchers":[],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-opportunities":[],"related-publications":[],"related-videos":[],"related-posts":[],"_links":{"self":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/613563","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":13,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/613563\/revisions"}],"predecessor-version":[{"id":1147017,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/613563\/revisions\/1147017"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/media\/615285"}],"wp:attachment":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/media?parent=613563"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/w
p-json\/wp\/v2\/research-area?post=613563"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=613563"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=613563"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=613563"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=613563"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=613563"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=613563"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=613563"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}