{"id":1068642,"date":"2024-08-28T09:00:00","date_gmt":"2024-08-28T16:00:00","guid":{"rendered":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/?p=1068642"},"modified":"2024-08-27T13:49:22","modified_gmt":"2024-08-27T20:49:22","slug":"research-focus-week-of-august-26-2024","status":"publish","type":"post","link":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/blog\/research-focus-week-of-august-26-2024\/","title":{"rendered":"Research Focus: Week of August 26, 2024"},"content":{"rendered":"\n<figure class=\"wp-block-pullquote\"><blockquote><p>Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code\/datasets, new hires and other milestones from across the research community at Microsoft.<\/p><\/blockquote><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1400\" height=\"788\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1.jpg\" alt=\"Decorative graphic with wavy shapes in the background in blues and purples. 
Text overlay in center left reads: \u201cResearch Focus: August 26, 2024\u201d\" class=\"wp-image-1068687\" srcset=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1.jpg 1400w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1-300x169.jpg 300w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1-1024x576.jpg 1024w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1-768x432.jpg 768w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1-1066x600.jpg 1066w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1-655x368.jpg 655w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1-240x135.jpg 240w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1-640x360.jpg 640w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1-960x540.jpg 960w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1-1280x720.jpg 1280w\" sizes=\"auto, (max-width: 1400px) 100vw, 1400px\" \/><\/figure>\n\n\n\n<div class=\"wp-block-group is-layout-constrained wp-block-group-is-layout-constrained\">\n<h2 class=\"wp-block-heading h6 has-blue-color has-text-color has-link-color wp-elements-b60fbfa117bcce78e07885aa24d19fc7\" id=\"new-research\">EVENT<\/h2>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"heading\">Register now for Research Forum on September 3<\/h2>\n\n\n\n<p>Discover what\u2019s next in the world of AI at <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" 
href=\"https:\/\/researchforum.microsoft.com\/?OCID=msr_researchforum_ep4_RF48_rfhome_2024\" target=\"_blank\" rel=\"noopener noreferrer\">Microsoft Research Forum<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, an event series that explores recent research advances, bold new ideas, and important discussions with the global research community.<\/p>\n\n\n\n<p>In Episode 4, learn about Microsoft\u2019s research initiatives at the frontiers of multimodal AI. Discover novel models, benchmarks, and infrastructure for self-improvement, agents, weather prediction, and more.<\/p>\n\n\n\n<p>Your one-time registration includes access to our live chat with researchers on the event day.<\/p>\n\n\n\n<p>Episode 4 will air Tuesday, September 3 at 9:00 AM Pacific Time.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-outline is-style-outline--1\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/register.researchforum.microsoft.com\/?OCID=msr_researchforum_ep4_RF48_register_2024\" target=\"_blank\" rel=\"noreferrer noopener\">Register now<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n\t<div class=\"border-bottom border-top border-gray-300 mt-5 mb-5 msr-promo text-center text-md-left alignwide\" data-bi-aN=\"promo\" data-bi-id=\"670821\">\n\t\t\n\n\t\t<p class=\"msr-promo__label text-gray-800 text-center text-uppercase\">\n\t\t<span class=\"px-4 bg-white display-inline-block font-weight-semibold small\">Spotlight: Microsoft research newsletter<\/span>\n\t<\/p>\n\t\n\t<div class=\"row pt-3 pb-4 align-items-center\">\n\t\t\t\t\t\t<div class=\"msr-promo__media col-12 col-md-5\">\n\t\t\t\t<a class=\"bg-gray-300 display-block\" href=\"https:\/\/info.microsoft.com\/ww-landing-microsoft-research-newsletter.html\" aria-label=\"Microsoft Research Newsletter\" 
data-bi-cN=\"Microsoft Research Newsletter\" target=\"_blank\">\n\t\t\t\t\t<img decoding=\"async\" class=\"w-100 display-block\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/09\/Newsletter_Banner_08_2019_v1_1920x1080.png\" alt=\"\" \/>\n\t\t\t\t<\/a>\n\t\t\t<\/div>\n\t\t\t\n\t\t\t<div class=\"msr-promo__content p-3 px-5 col-12 col-md\">\n\n\t\t\t\t\t\t\t\t\t<h2 class=\"h4\">Microsoft Research Newsletter<\/h2>\n\t\t\t\t\n\t\t\t\t\t\t\t\t<p id=\"microsoft-research-newsletter\" class=\"large\">Stay connected to the research community at Microsoft.<\/p>\n\t\t\t\t\n\t\t\t\t\t\t\t\t<div class=\"wp-block-buttons justify-content-center justify-content-md-start\">\n\t\t\t\t\t<div class=\"wp-block-button is-style-fill-chevron\">\n\t\t\t\t\t\t<a href=\"https:\/\/info.microsoft.com\/ww-landing-microsoft-research-newsletter.html\" aria-describedby=\"microsoft-research-newsletter\" class=\"btn btn-brand glyph-append glyph-append-chevron-right\" data-bi-cN=\"Microsoft Research Newsletter\" target=\"_blank\">\n\t\t\t\t\t\t\tSubscribe today\t\t\t\t\t\t<\/a>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<\/div><!--\/.msr-promo__content-->\n\t<\/div><!--\/.msr-promo__inner-wrap-->\n\t<\/div><!--\/.msr-promo-->\n\t\n\n\n<h2 class=\"wp-block-heading h6 has-blue-color has-text-color has-link-color wp-elements-e734c6e9609233ab051742bb3beeed63\" id=\"new-research\">NEW RESEARCH<\/h2>\n\n\n\n<div class=\"wp-block-group is-layout-constrained wp-block-group-is-layout-constrained\">\n<h2 class=\"wp-block-heading\" id=\"can-llms-learn-by-teaching-a-preliminary-study\">Can LLMs Learn by Teaching? A Preliminary Study<\/h2>\n\n\n\n<p>Teaching to improve student models (e.g., knowledge distillation) is an extensively studied methodology in large language models (LLMs). However, for humans, teaching not only improves students but also improves teachers. 
In a recent paper: <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/publication\/can-llms-learn-by-teaching-a-preliminary-study\/\" target=\"_blank\" rel=\"noreferrer noopener\">Can LLMs Learn by Teaching? A Preliminary Study<\/a>, researchers from Microsoft and external colleagues explore whether that rule also applies to LLMs. If so, this could potentially enable the models to advance and improve continuously without solely relying on human-produced data or stronger models.<\/p>\n\n\n\n<p>In this paper, the researchers show that learning by teaching (LbT) practices can be incorporated into existing LLM training\/prompting pipelines and provide noticeable improvements. They design three methods, each mimicking one of the three levels of LbT in humans: observing students&#8217; feedback; learning from the feedback; and learning iteratively, with the goals of improving answer accuracy without training and improving the models&#8217; inherent capability with fine-tuning. The results show that LbT is a promising paradigm to improve LLMs&#8217; reasoning ability and outcomes on several complex tasks (e.g., mathematical reasoning, competition-level code synthesis). The key findings are: (1) LbT can induce weak-to-strong generalization\u2014strong models can improve themselves by teaching other weak models; (2) Diversity in student models might help\u2014teaching multiple student models could be better than teaching one student model or the teacher itself. 
This study also offers a roadmap for integrating more educational strategies into the learning processes of LLMs in the future.&nbsp;<\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-fill-github\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/github.com\/imagination-research\/lbt\" target=\"_blank\" rel=\"noreferrer noopener\">Download code<\/a><\/div>\n<\/div>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-dots\"\/>\n\n\n\n<h2 class=\"wp-block-heading h6 has-blue-color has-text-color has-link-color wp-elements-e734c6e9609233ab051742bb3beeed63\" id=\"new-research\">NEW RESEARCH<\/h2>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"heading\">Arena Learning: Building a data flywheel for LLMs post-training via simulated chatbot arena<\/h2>\n\n\n\n<p>Conducting human-annotated competitions between chatbots is a highly effective approach to assessing the capabilities of large language models (LLMs). However, this process comes with high costs and time demands, complicating the enhancement of LLMs via post-training. In a recent preprint: <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/publication\/arena-learning-build-data-flywheel-for-llms-post-training-via-simulated-chatbot-arena\/\" target=\"_blank\" rel=\"noreferrer noopener\">Arena Learning: Build Data Flywheel for LLMs Post-training via Simulated Chatbot Arena<\/a>, researchers from Microsoft and external colleagues introduce an innovative offline strategy designed to simulate these arena battles. This includes a comprehensive set of instructions for simulated battles employing AI-driven annotations to assess battle outcomes, facilitating continuous improvement of the target model through both supervised fine-tuning and reinforcement learning. 
A crucial aspect of this approach is ensuring precise evaluations and achieving consistency between offline simulations and online competitions.<\/p>\n\n\n\n<p>To this end, the researchers present\u202f<strong>WizardArena<\/strong>, a pipeline crafted to accurately predict the Elo rankings of various models using a meticulously designed offline test set. Their findings indicate that WizardArena\u2019s predictions are closely aligned with those from the online arena. They apply this novel framework to train a model,\u202f<strong>WizardLM-\u03b2<\/strong>, which demonstrates significant performance enhancements across various metrics. This fully automated training and evaluation pipeline paves the way for ongoing incremental advancements in various LLMs via post-training.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-outline is-style-outline--2\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/publication\/arena-learning-build-data-flywheel-for-llms-post-training-via-simulated-chatbot-arena\/\">Read the paper<\/a><\/div>\n<\/div>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-dots\"\/>\n<\/div>\n\n\n\n<div class=\"wp-block-group is-layout-constrained wp-block-group-is-layout-constrained\">\n<h2 class=\"wp-block-heading h6 has-blue-color has-text-color has-link-color wp-elements-e734c6e9609233ab051742bb3beeed63\" id=\"new-research\">NEW RESEARCH<\/h2>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"heading\">MInference 1.0: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention<\/h2>\n\n\n\n<p>Computational challenges of large language model (LLM) inference restrict their widespread deployment, especially as prompt lengths continue to increase. 
Due to the quadratic complexity of the attention computation, it takes 30 minutes for an 8-billion-parameter LLM to process a prompt of 1 million tokens (i.e., the pre-filling stage) on a single NVIDIA A100 graphics processing unit (GPU). Existing methods for speeding up pre-filling often fail to maintain acceptable accuracy or efficiency.<\/p>\n\n\n\n<p>In a recent preprint: <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/publication\/minference-1-0-accelerating-pre-filling-for-long-context-llms-via-dynamic-sparse-attention\/\">MInference 1.0: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention<\/a>, researchers from Microsoft introduce a sparse calculation method designed to accelerate the pre-filling stage of long-sequence processing. They identify three unique patterns in long-context attention matrices \u2013 the A-shape, Vertical-Slash, and Block-Sparse \u2013 that can be leveraged for efficient sparse computation on GPUs. They determine the optimal pattern for each attention head offline and dynamically build sparse indices based on the assigned pattern during inference. They then perform efficient sparse attention calculations via optimized GPU kernels to reduce latency in the pre-filling stage of long-context LLMs. 
The research demonstrates that MInference (million tokens inference) reduces inference latency by up to 10x for pre-filling on an A100, while maintaining accuracy.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-outline is-style-outline--3\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/publication\/minference-1-0-accelerating-pre-filling-for-long-context-llms-via-dynamic-sparse-attention\/\">Read the paper<\/a><\/div>\n\n\n\n<div class=\"wp-block-button is-style-outline is-style-outline--4\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/aka.ms\/MInference\" target=\"_blank\" rel=\"noreferrer noopener\">View GitHub<\/a><\/div>\n<\/div>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-dots\"\/>\n<\/div>\n\n\n\n<div class=\"wp-block-group is-layout-constrained wp-block-group-is-layout-constrained\">\n<h2 class=\"wp-block-heading h6 has-blue-color has-text-color has-link-color wp-elements-e734c6e9609233ab051742bb3beeed63\" id=\"new-research\">NEW RESEARCH<\/h2>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"heading\">Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs<\/h2>\n\n\n\n<p>Regular expressions (regex) are used to represent and match patterns in text documents in a variety of applications: content moderation, input validation, firewalls, clinical trials, and more. Existing use cases assume that the regex and the document are both readily available to the querier, so they can match the regex on their own with standard algorithms. 
But what about situations where the document is actually held by someone else who does not wish to disclose to the querier anything about the document besides the fact that it matches or does not match a particular regex? The ability to prove such facts enables interesting new applications.\u00a0<\/p>\n\n\n\n<p>In a recent paper: <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/publication\/reef-fast-succinct-non-interactive-zero-knowledge-regex-proofs\/\" target=\"_blank\" rel=\"noreferrer noopener\">Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs<\/a>, researchers from Microsoft and the University of Pennsylvania present a system for generating publicly verifiable, succinct, non-interactive, zero-knowledge proofs that a committed document matches or does not match a regular expression. They describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. 
Experimental evaluation confirms that Reef can generate proofs for documents with 32 million characters; the proofs are small and cheap to verify, taking less than one second.<\/p>\n\n\n\n<p>Reef is built on an open-source project from Microsoft Research, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/github.com\/microsoft\/Nova\" target=\"_blank\" rel=\"noopener noreferrer\">Nova: High-speed recursive arguments from folding schemes,<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> which implements earlier research work described in a paper titled <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/eprint.iacr.org\/2021\/370.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">Nova: Recursive Zero-Knowledge Arguments from Folding Schemes<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> by researchers from Microsoft, Carnegie Mellon University, and New York University.&nbsp;&nbsp;<\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-outline is-style-outline--5\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/publication\/reef-fast-succinct-non-interactive-zero-knowledge-regex-proofs\/\">Read the paper<\/a><\/div>\n\n\n\n<div class=\"wp-block-button is-style-outline is-style-outline--6\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/github.com\/microsoft\/Nova\" target=\"_blank\" rel=\"noreferrer noopener\">View GitHub<\/a><\/div>\n<\/div>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-dots\"\/>\n<\/div>\n\n\n\n<div class=\"wp-block-group is-layout-constrained 
wp-block-group-is-layout-constrained\">\n<h2 class=\"wp-block-heading h6 has-blue-color has-text-color has-link-color wp-elements-e734c6e9609233ab051742bb3beeed63\" id=\"new-research\">NEW RESEARCH<\/h2>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"heading\">HyperNova: Recursive arguments for customizable constraint systems<\/h2>\n\n\n\n<p>Incrementally verifiable computation (IVC) is a powerful cryptographic tool that allows its user to produce a proof of the correct execution of a \u201clong-running\u201d computation in an incremental fashion. IVC enables a wide variety of applications in decentralized settings, including verifiable delay functions, succinct blockchains, rollups, verifiable state machines, and proofs of machine executions.<\/p>\n\n\n\n<p>In a recent paper: <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/publication\/hypernova-recursive-arguments-for-customizable-constraint-systems\/\" target=\"_blank\" rel=\"noreferrer noopener\">HyperNova: Recursive arguments for customizable constraint systems<\/a>, researchers from Microsoft and Carnegie Mellon University introduce a new recursive argument for proving incremental computations whose steps are expressed with CCS, a customizable constraint system that simultaneously generalizes Plonkish, R1CS, and AIR without overheads. HyperNova resolves four major problems in the area of recursive arguments.<\/p>\n\n\n\n<p>First, it provides a folding scheme for CCS where the prover\u2019s cryptographic cost is a single multiscalar multiplication (MSM) of size equal to the number of variables in the constraint system, which is optimal when using an MSM-based commitment scheme. This makes it easier to build generalizations of IVC, such as proof-carrying data (PCD). Second, the cost of proving program executions on stateful machines (e.g., EVM, RISC-V) is proportional only to the size of the circuit representing the instruction invoked by the program step. 
Third, the researchers use a folding scheme to \u201crandomize\u201d IVC proofs, achieving zero-knowledge for \u201cfree\u201d and without the need to employ zero-knowledge SNARKs. Fourth, the researchers show how to efficiently instantiate HyperNova over a cycle of elliptic curves.\u00a0<\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-outline is-style-outline--7\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/publication\/hypernova-recursive-arguments-for-customizable-constraint-systems\/\">Read the paper<\/a><\/div>\n<\/div>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-dots\"\/>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Learn what\u2019s next for AI at Research Forum on Sept. 
3; \u202fWizardArena simulates human-annotated chatbot games; MInference speeds pre-filling for long-context LLMs via dynamic sparse attention; Reef: Fast succinct non-interactive zero-knowledge regex proofs.<\/p>\n","protected":false},"author":42735,"featured_media":1068687,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":[{"type":"user_nicename","value":"Zinan Lin","user_id":"42327"},{"type":"user_nicename","value":"Qingfeng Sun","user_id":"40915"},{"type":"user_nicename","value":"Can Xu","user_id":"40108"},{"type":"user_nicename","value":"Pu Zhao","user_id":"38886"},{"type":"user_nicename","value":"Qingwei Lin \u6797\u5e86\u7ef4","user_id":"33318"},{"type":"user_nicename","value":"Weizhu Chen","user_id":"34863"},{"type":"user_nicename","value":"Huiqiang Jiang","user_id":"40807"},{"type":"user_nicename","value":"Chengruidong Zhang","user_id":"42018"},{"type":"user_nicename","value":"Qianhui Wu","user_id":"40741"},{"type":"user_nicename","value":"Xufang Luo","user_id":"40324"},{"type":"user_nicename","value":"Dongsheng Li","user_id":"39402"},{"type":"user_nicename","value":"Chin-Yew Lin","user_id":"31493"},{"type":"user_nicename","value":"Yuqing Yang","user_id":"40654"},{"type":"user_nicename","value":"Lili Qiu","user_id":"41320"},{"type":"user_nicename","value":"Srinath 
Setty","user_id":"33709"}],"msr_hide_image_in_river":0,"footnotes":""},"categories":[1],"tags":[],"research-area":[13561,13556,13545,13558,13547],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[243984],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-1068642","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-research-blog","msr-research-area-algorithms","msr-research-area-artificial-intelligence","msr-research-area-human-language-technologies","msr-research-area-security-privacy-cryptography","msr-research-area-systems-and-networking","msr-locale-en_us","msr-post-option-blog-homepage-featured"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[199560,199565],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[437022,815140,881388],"related-projects":[1053873],"related-events":[],"related-researchers":[{"type":"user_nicename","value":"Zinan Lin","user_id":42327,"display_name":"Zinan Lin","author_link":"<a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/zinanlin\/\" aria-label=\"Visit the profile page for Zinan Lin\">Zinan Lin<\/a>","is_active":false,"last_first":"Lin, Zinan","people_section":0,"alias":"zinanlin"},{"type":"user_nicename","value":"Pu Zhao","user_id":38886,"display_name":"Pu Zhao","author_link":"<a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/puzhao\/\" aria-label=\"Visit the profile page for Pu Zhao\">Pu Zhao<\/a>","is_active":false,"last_first":"Zhao, Pu","people_section":0,"alias":"puzhao"},{"type":"user_nicename","value":"Qingwei Lin \u6797\u5e86\u7ef4","user_id":33318,"display_name":"Qingwei Lin \u6797\u5e86\u7ef4","author_link":"<a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/qlin\/\" aria-label=\"Visit the profile page for Qingwei Lin 
\u6797\u5e86\u7ef4\">Qingwei Lin \u6797\u5e86\u7ef4<\/a>","is_active":false,"last_first":"\u6797\u5e86\u7ef4, Qingwei Lin","people_section":0,"alias":"qlin"},{"type":"user_nicename","value":"Weizhu Chen","user_id":34863,"display_name":"Weizhu Chen","author_link":"<a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/wzchen\/\" aria-label=\"Visit the profile page for Weizhu Chen\">Weizhu Chen<\/a>","is_active":false,"last_first":"Chen, Weizhu","people_section":0,"alias":"wzchen"},{"type":"user_nicename","value":"Qianhui Wu","user_id":40741,"display_name":"Qianhui Wu","author_link":"<a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/qianhuiwu\/\" aria-label=\"Visit the profile page for Qianhui Wu\">Qianhui Wu<\/a>","is_active":false,"last_first":"Wu, Qianhui","people_section":0,"alias":"qianhuiwu"},{"type":"user_nicename","value":"Dongsheng Li","user_id":39402,"display_name":"Dongsheng Li","author_link":"<a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/dongsli\/\" aria-label=\"Visit the profile page for Dongsheng Li\">Dongsheng Li<\/a>","is_active":false,"last_first":"Li, Dongsheng","people_section":0,"alias":"dongsli"},{"type":"user_nicename","value":"Chin-Yew Lin","user_id":31493,"display_name":"Chin-Yew Lin","author_link":"<a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/cyl\/\" aria-label=\"Visit the profile page for Chin-Yew Lin\">Chin-Yew Lin<\/a>","is_active":false,"last_first":"Lin, Chin-Yew","people_section":0,"alias":"cyl"},{"type":"user_nicename","value":"Yuqing Yang","user_id":40654,"display_name":"Yuqing Yang","author_link":"<a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/yuqyang\/\" aria-label=\"Visit the profile page for Yuqing Yang\">Yuqing Yang<\/a>","is_active":false,"last_first":"Yang, Yuqing","people_section":0,"alias":"yuqyang"},{"type":"user_nicename","value":"Srinath Setty","user_id":33709,"display_name":"Srinath Setty","author_link":"<a 
href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/srinath\/\" aria-label=\"Visit the profile page for Srinath Setty\">Srinath Setty<\/a>","is_active":false,"last_first":"Setty, Srinath","people_section":0,"alias":"srinath"}],"msr_type":"Post","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1-960x540.jpg\" class=\"img-object-cover\" alt=\"Decorative graphic with wavy shapes in the background in blues and purples. Text overlay in center left reads: \u201cResearch Focus: August 26, 2024\u201d\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1-960x540.jpg 960w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1-300x169.jpg 300w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1-1024x576.jpg 1024w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1-768x432.jpg 768w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1-1066x600.jpg 1066w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1-655x368.jpg 655w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1-240x135.jpg 240w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1-640x360.jpg 640w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1-1280x720.jpg 1280w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/08\/RF48-BlogHeroFeature-1400x788-1.jpg 1400w\" sizes=\"auto, 
(max-width: 960px) 100vw, 960px\" \/>","byline":"","formattedDate":"August 28, 2024","formattedExcerpt":"Learn what\u2019s next for AI at Research Forum on Sept. 3; \u202fWizardArena simulates human-annotated chatbot games; MInference speeds pre-filling for long-context LLMs via dynamic sparse attention; Reef: Fast succinct non-interactive zero-knowledge regex proofs.","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts\/1068642","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/users\/42735"}],"replies":[{"embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/comments?post=1068642"}],"version-history":[{"count":25,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts\/1068642\/revisions"}],"predecessor-version":[{"id":1080258,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts\/1068642\/revisions\/1080258"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/media\/1068687"}],"wp:attachment":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/media?parent=1068642"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/categories?post=1068642"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/tags?post=1068642"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=10686
42"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=1068642"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=1068642"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=1068642"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=1068642"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=1068642"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=1068642"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=1068642"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}