{"id":455874,"date":"2017-05-10T00:00:41","date_gmt":"2017-05-10T07:00:41","guid":{"rendered":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/?p=455874"},"modified":"2018-01-23T17:47:39","modified_gmt":"2018-01-24T01:47:39","slug":"advancing-machine-comprehension-question-generation","status":"publish","type":"post","link":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/blog\/advancing-machine-comprehension-question-generation\/","title":{"rendered":"Advancing machine comprehension with question generation"},"content":{"rendered":"<h3>Microsoft Research Montreal lab&#8217;s vision is to create machines that can comprehend, reason and communicate with humans.<\/h3>\n<p>We see a future where humans interact with machines just as they would with another human. We could ask a question in natural language and have the machine respond with an appropriate answer.<\/p>\n<p>Yet answering questions is only one part of an interaction. In addition to our work in training machines to\u00a0<a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/blog\/creating-curious-machines-building-information-seeking-agents\/\">seek information<\/a>\u00a0and then\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/www.maluuba.com\/blog\/2016\/11\/23\/epireader-a-2-stage-approach-to-machine-comprehension\" target=\"_blank\" rel=\"noopener noreferrer\">read and reason<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>\u00a0upon text and answer questions, we are now training machines to ask questions.<\/p>\n<h2>The importance of questions<\/h2>\n<p>While asking a question may seem straightforward, it is the process of asking the right question that drives a deeper understanding of concepts and information. 
While many QA datasets are geared to training for answering questions \u2013 an\u00a0<em>extractive\u00a0<\/em>task \u2013 the process of asking questions is comparatively\u00a0<em>abstractive<\/em>: it requires the generation of text that may not appear in the context document. Asking \u2018good\u2019 questions involves skills beyond those needed to answer them.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-455883\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2017\/05\/download-3.jpg\" alt=\"\" width=\"1000\" height=\"291\" srcset=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2017\/05\/download-3.jpg 1000w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2017\/05\/download-3-300x87.jpg 300w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2017\/05\/download-3-768x223.jpg 768w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><\/p>\n<p style=\"text-align: center;\"><em>Examples of conditional question generation given a context and an answer<\/em><\/p>\n<p>We propose a recurrent neural model that generates natural-language questions from documents, conditioned on answers.\u00a0We show how to train the model using a combination of supervised and reinforcement learning to improve its performance. After teacher forcing for standard maximum likelihood training, we fine-tune the model using policy gradient techniques to maximize several rewards that measure question quality. 
Most notably, one of these rewards is the performance of a question-answering system.<\/p>\n<blockquote><p><em>To our knowledge, this is the first end-to-end, text-to-text model for question generation.<\/em><\/p><\/blockquote>\n<p>Our research paper outlines the development of the model, the training approach, the results, and the implications and next steps.<\/p>\n\t<iframe\n\t\tsrc=\"https:\/\/www.youtube.com\/embed\/UIzcIC5RQN8\"\n\t\twidth=\"900\"\n\t\theight=\"506\"\n\t\taria-label=\"\"\n\t\tallowfullscreen=\"true\">\n\t<\/iframe>\n\t\n<div style=\"height: 20px;\"><\/div>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1705.02012\" target=\"_blank\" rel=\"noopener noreferrer\">Read the paper ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Microsoft Research Montreal lab&#8217;s vision is to create machines that can comprehend, reason and communicate with humans. We see a future where humans interact with machines just as they would with another human. We could ask a question in natural language and have the machine respond with an appropriate answer. 
Yet answering questions is only [&hellip;]<\/p>\n","protected":false},"author":39507,"featured_media":455913,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":[],"msr_hide_image_in_river":0,"footnotes":""},"categories":[194467,194456],"tags":[241839,241836,241842,241845,187159,241830,187166,241848],"research-area":[13556,13545],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-455874","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artifical-intelligence","category-natural-language-processing","tag-conditional-question-generation","tag-machine-comprehension","tag-natural-language-questions","tag-policy-gradient-techniques","tag-question-answering","tag-question-asking","tag-question-generation","tag-question-answering-system","msr-research-area-artificial-intelligence","msr-research-area-human-language-technologies","msr-locale-en_us"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[437514],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-events":[],"related-researchers":[],"msr_type":"Post","featured_image_thumbnail":"<img width=\"889\" height=\"496\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2017\/05\/question-generation-feature.png\" class=\"img-object-cover\" alt=\"Maluuba Question Generation video\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2017\/05\/question-generation-feature.png 889w, 
https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2017\/05\/question-generation-feature-300x167.png 300w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2017\/05\/question-generation-feature-768x428.png 768w\" sizes=\"auto, (max-width: 889px) 100vw, 889px\" \/>","byline":"","formattedDate":"May 10, 2017","formattedExcerpt":"Microsoft Research Montreal lab&#039;s vision is to create machines that can comprehend, reason and communicate with humans. We see a future where humans interact with machines just as they would with another human. We could ask a question in natural language and have the machine&hellip;","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts\/455874","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/users\/39507"}],"replies":[{"embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/comments?post=455874"}],"version-history":[{"count":11,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts\/455874\/revisions"}],"predecessor-version":[{"id":456207,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts\/455874\/revisions\/456207"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/media\/455913"}],"wp:attachment":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/media?parent=455874"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/categories?post=455874"},{"tax
onomy":"post_tag","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/tags?post=455874"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=455874"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=455874"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=455874"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=455874"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=455874"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=455874"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=455874"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=455874"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}