{"id":799780,"date":"2021-11-29T09:18:08","date_gmt":"2021-11-29T17:18:08","guid":{"rendered":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/?p=799780"},"modified":"2021-11-29T09:18:11","modified_gmt":"2021-11-29T17:18:11","slug":"unlocking-new-dimensions-in-image-generation-research-with-manifold-matching-via-metric-learning","status":"publish","type":"post","link":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/blog\/unlocking-new-dimensions-in-image-generation-research-with-manifold-matching-via-metric-learning\/","title":{"rendered":"Unlocking new dimensions in image-generation research with Manifold Matching via Metric Learning"},"content":{"rendered":"\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/1400x788_Manifold_Matching_no_logo_animation_new_title-1.gif\" alt=\"An image generation model training using MVM. Images show a panda progressively being generated to become more realistic, various paintings generated by the model, an anime character generated to become more realistic, and various cat images generated to become more realistic. \"\/><\/figure>\n\n\n\n<p>Generative image models offer a unique value by creating new images. Such images can be sharp super-resolution versions of existing images or even realistic-looking synthetic photographs. Generative Adversarial Networks (GANs) and their variants have demonstrated pioneering success with the framework of training two networks against each other: a generator network learns to generate realistic fake data that can trick a discriminator network, and the discriminator network learns to correctly tell apart the generated fake data from the real data.<\/p>\n\n\n\n<p>In order to apply the latest innovations in computer vision to GANs, the research community needs to address two challenges. 
First, GANs model data distributions with statistical measures, such as the mean and higher-order moments, as opposed to geometric measures. Second, traditional GANs represent the loss of the discriminator network only as a single scalar value corresponding to the Euclidean distance between the real and the fake data distributions. Because of these two challenges, the research community has been unable to directly apply breakthrough metric learning methods or experiment with novel loss functions and training techniques to continue to improve generative models.<\/p>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"margin-callout\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 annotations__list--right\">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/publication\/manifold-matching-via-deep-metric-learning-for-generative-modeling\/\" data-bi-cN=\"Manifold Matching via Deep Metric Learning for Generative Modeling\" data-external-link=\"false\" data-bi-aN=\"margin-callout\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Manifold Matching via Deep Metric Learning for Generative Modeling<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p>In this work, &#8220;<a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/publication\/manifold-matching-via-deep-metric-learning-for-generative-modeling\/\">Manifold Matching via Deep Metric Learning for Generative Modeling<\/a>,\u201d we propose a new framework for generative models, which we call Manifold Matching via Metric Learning (MvM). In the MvM framework, two networks are trained against each other. 
The metric generator network learns to define a better metric for the distribution generator network\u2019s manifold matching objective, and the distribution generator network learns to produce more hard negative samples for the metric learning objective of the metric generator network. Through this adversarial training, MvM produces a distribution generator network that can generate a fake data distribution that is very close to the real data distribution, and a metric generator network that can provide an effective metric for capturing the internal geometric structure of the data distribution. This paper was accepted at the International Conference on Computer Vision (ICCV 2021) in October.<\/p>\n\n\n\n<h2 id=\"comparing-manifold-matching-via-metric-learning-to-generative-adversarial-networks\">Comparing Manifold Matching via Metric Learning to Generative Adversarial Networks<\/h2>\n\n\n\n<figure class=\"wp-block-table aligncenter\"><table><thead><tr><th class=\"has-text-align-center\" data-align=\"center\">Differences<\/th><th class=\"has-text-align-center\" data-align=\"center\">GANs<\/th><th class=\"has-text-align-center\" data-align=\"center\">MvM<\/th><\/tr><\/thead><tbody><tr><td class=\"has-text-align-center\" data-align=\"center\">Main point of view<\/td><td class=\"has-text-align-center\" data-align=\"center\">statistics<\/td><td class=\"has-text-align-center\" data-align=\"center\">geometry<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Matching terms<\/td><td class=\"has-text-align-center\" data-align=\"center\">means, moments, etc.<\/td><td class=\"has-text-align-center\" data-align=\"center\">centroids, <em>p<\/em>-diameters<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Matching criteria<\/td><td class=\"has-text-align-center\" data-align=\"center\">statistical discrepancy<\/td><td class=\"has-text-align-center\" data-align=\"center\">learned distances<\/td><\/tr><tr><td class=\"has-text-align-center\" 
data-align=\"center\">Underlying metric<\/td><td class=\"has-text-align-center\" data-align=\"center\">default Euclidean<\/td><td class=\"has-text-align-center\" data-align=\"center\">learned intrinsic<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Objective functions<\/td><td class=\"has-text-align-center\" data-align=\"center\">one min-max value function<\/td><td class=\"has-text-align-center\" data-align=\"center\">two distinct objectives<\/td><\/tr><\/tbody><\/table><figcaption>Table 1: Comparison between Generative Adversarial Networks (GANs) and Manifold Matching via Metric Learning (MvM)<\/figcaption><\/figure>\n\n\n\n<p>Table 1 summarizes five important differences between GANs and MvM. One useful characteristic of MvM is that it uses a learned intrinsic metric, as opposed to the default Euclidean distance, to represent the performance of its networks. This unlocks the opportunity for the research community to bring the latest breakthroughs from metric learning directly into the training of generative models.<\/p>\n\n\n\n<p>MvM is also more interpretable than GANs are. GANs use a single min-max value function as the objective function. The training loss calculated using this objective function goes down when the discriminator network gets better and goes up when the generator network gets better. As a consequence, the training loss fluctuates up and down as the training progresses, leaving a human interpreter with no reliable signal about the behavior or the performance of the networks.<\/p>\n\n\n\n<p>Today, a human interpreter is forced to inspect generated images and make qualitative judgments to deduce this information. In contrast, MvM uses two distinct objective functions. The first objective function calculates the metric learning loss, which fluctuates up and down, demonstrating that the two networks are learning adversarially against each other as desired. The other objective function calculates the manifold matching loss. 
This objective function monotonically decreases over training epochs as the generated fake data distribution becomes more similar to the real data distribution. This means a human interpreter is able to draw quantitative conclusions based on the value of the manifold matching loss.<\/p>\n\n\n\n<p>Finally, unlike GANs, MvM outputs multi-dimensional representations of images. This enables the research community to experiment with new or existing training frameworks and techniques, such as unsupervised representation learning or various metric learning methods. This is expected to accelerate generative model research and unlock new directions that were previously considered impossible.<\/p>\n\n\n\n<h2 id=\"image-generation-and-image-super-resolution-via-mvm\">Image Generation and Image Super-Resolution via MvM<\/h2>\n\n\n\n<p>To test the effectiveness and the versatility of the framework, we applied MvM to two popular image generation tasks: unsupervised image generation and image super-resolution.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full\"><a data-bi-bhvr=\"14\"  data-bi-cn=\"Images generated by StyleGAN2 architecture trained in\u202fMvM\u202fframework on\u202fthe\u202fFlickr-Faces-HQ Dataset\" href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/Figure1.jpg\"><img loading=\"lazy\" decoding=\"async\" width=\"1432\" height=\"564\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/Figure1.jpg\" alt=\"Images generated by StyleGAN2 architecture trained in\u202fMvM\u202fframework on\u202fthe\u202fFlickr-Faces-HQ Dataset\" class=\"wp-image-799801\" srcset=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/Figure1.jpg 1432w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/Figure1-300x118.jpg 300w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/Figure1-1024x403.jpg 
1024w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/Figure1-768x302.jpg 768w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/Figure1-240x95.jpg 240w\" sizes=\"auto, (max-width: 1432px) 100vw, 1432px\" \/><\/a><figcaption>Figure 1: Images generated by StyleGAN2 architecture trained in MvM framework on the Flickr-Faces-HQ Dataset<\/figcaption><\/figure><\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"margin-callout\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 annotations__list--left\">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Tool<\/span>\n\t\t\t<a href=\"https:\/\/github.com\/NVlabs\/ffhq-dataset\" data-bi-cN=\"Flickr-Faces-HQ (FFHQ) dataset\" data-external-link=\"false\" data-bi-aN=\"margin-callout\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Flickr-Faces-HQ (FFHQ) dataset<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p>For the unsupervised image generation task, we trained a StyleGAN2 architecture and generated large images (512 pixels by 512 pixels) based on the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/github.com\/NVlabs\/ffhq-dataset\">Flickr-Faces-HQ (FFHQ) dataset<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>\u2014a popular benchmark for demonstrating the effectiveness of image generation models. 
Results are shown in Figure 1, and the examples demonstrate that MvM is indeed effective in generating images that closely resemble real data in the dataset.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full\"><a data-bi-bhvr=\"14\"  data-bi-cn=\"A comparison of\u202fimages created with Bicubic, GAN, and\u202fMVM\u202fagainst the original High Resolution (HR). \u202fUnlike the results from GANs that have a grid-like result,\u202fMvM\u202fleads to a line-like result that is closer to ground truth.\" href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/Figure2.jpg\"><img loading=\"lazy\" decoding=\"async\" width=\"1439\" height=\"656\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/Figure2.jpg\" alt=\"A comparison of\u202fimages created with Bicubic, GAN, and\u202fMVM\u202fagainst the original High Resolution (HR). \u202fUnlike the results from GANs that have a grid-like result,\u202fMvM\u202fleads to a line-like result that is closer to ground truth.\" class=\"wp-image-799798\" srcset=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/Figure2.jpg 1439w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/Figure2-300x137.jpg 300w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/Figure2-1024x467.jpg 1024w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/Figure2-768x350.jpg 768w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/Figure2-240x109.jpg 240w\" sizes=\"auto, (max-width: 1439px) 100vw, 1439px\" \/><\/a><figcaption>Figure 2: Comparison of bicubic, GAN, and MvM against the original high resolution (HR).<\/figcaption><\/figure><\/div>\n\n\n\n<p>For the image super-resolution task, we trained three different generator backbones, ResNet, RDN, and NSRNet, using GAN and MvM. 
Figure 2 shows the original high-resolution (HR) image as the ground truth, a bicubic super-resolution algorithm as the baseline, and two NSRNet generator networks trained with GAN and MvM, respectively, for comparison. Unlike the GAN output, which exhibits grid-like artifacts, the MvM output reconstructs line-like structures that are closer to the ground truth. We qualitatively observe that the generator network trained with MvM surpasses the one trained with GAN in reconstructing fine details, such as outlines, without introducing inaccurate grid-like artifacts. For more examples, details, and benchmark results, please read our <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/publication\/manifold-matching-via-deep-metric-learning-for-generative-modeling\/\">full paper<\/a>.<\/p>\n\n\n\n<h2 id=\"dive-deeper-into-manifold-matching-via-metric-learning\">Dive deeper into Manifold Matching via Metric Learning<\/h2>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"margin-callout\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 annotations__list--right\">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Tool<\/span>\n\t\t\t<a href=\"https:\/\/github.com\/dzld00\/pytorch-manifold-matching\" data-bi-cN=\"Pytorch Manifold Matching\" data-external-link=\"false\" data-bi-aN=\"margin-callout\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Pytorch Manifold Matching<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p>You can download a PyTorch implementation of Manifold Matching via Metric Learning from our <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" 
rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/github.com\/dzld00\/pytorch-manifold-matching\">GitHub repository<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>. If the opportunity to build pioneering computer vision models like this excites you, please visit our <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/careers.microsoft.com\/us\/en\/search-results?keywords=bing%20multimedia\">career page<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> to learn about our openings. We thank Haibin Hang, Cynthia Yu, Kun Wu, Meenaz Merchant, Arun Sacheti, and Jordi Ribas for enabling this work.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Generative image models offer a unique value by creating new images. Such images can be sharp super-resolution versions of existing images or even realistic-looking synthetic photographs. Generative Adversarial Networks (GANs) and their variants have demonstrated pioneering success with the framework of training two networks against each other: a generator network learns to generate realistic fake [&hellip;]<\/p>\n","protected":false},"author":39507,"featured_media":800440,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":[{"type":"user_nicename","value":"Mengyu Dai","user_id":"41092"},{"type":"user_nicename","value":"Junwon 
Park","user_id":"41110"}],"msr_hide_image_in_river":0,"footnotes":""},"categories":[1],"tags":[],"research-area":[13562,13551],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[243984],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-799780","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-research-blog","msr-research-area-computer-vision","msr-research-area-graphics-and-multimedia","msr-locale-en_us","msr-post-option-blog-homepage-featured"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-events":[778099],"related-researchers":[],"msr_type":"Post","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/1400x788_MvM_still_no_logo-960x540.jpg\" class=\"img-object-cover\" alt=\"An image generation model training using MVM. 
Images show a panda progressively being generated to become more realistic, various paintings generated by the model, an anime character generated to become more realistic, and various cat images generated to become more realistic.\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/1400x788_MvM_still_no_logo-960x540.jpg 960w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/1400x788_MvM_still_no_logo-300x169.jpg 300w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/1400x788_MvM_still_no_logo-1024x576.jpg 1024w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/1400x788_MvM_still_no_logo-768x432.jpg 768w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/1400x788_MvM_still_no_logo-1536x865.jpg 1536w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/1400x788_MvM_still_no_logo-2048x1153.jpg 2048w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/1400x788_MvM_still_no_logo-1066x600.jpg 1066w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/1400x788_MvM_still_no_logo-655x368.jpg 655w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/1400x788_MvM_still_no_logo-343x193.jpg 343w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/1400x788_MvM_still_no_logo-240x135.jpg 240w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/1400x788_MvM_still_no_logo-640x360.jpg 640w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/1400x788_MvM_still_no_logo-1280x720.jpg 1280w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2021\/11\/1400x788_MvM_still_no_logo-1920x1080.jpg 1920w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","byline":"Mengyu Dai and 
Junwon Park","formattedDate":"November 29, 2021","formattedExcerpt":"Generative image models offer a unique value by creating new images. Such images can be sharp super-resolution versions of existing images or even realistic-looking synthetic photographs. Generative Adversarial Networks (GANs) and their variants have demonstrated pioneering success with the framework of training two networks against&hellip;","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts\/799780","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/users\/39507"}],"replies":[{"embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/comments?post=799780"}],"version-history":[{"count":15,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts\/799780\/revisions"}],"predecessor-version":[{"id":800452,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts\/799780\/revisions\/800452"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/media\/800440"}],"wp:attachment":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/media?parent=799780"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/categories?post=799780"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/tags?post=799780"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/w
p\/v2\/research-area?post=799780"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=799780"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=799780"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=799780"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=799780"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=799780"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=799780"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=799780"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}