{"id":563847,"date":"2019-01-29T09:00:48","date_gmt":"2019-01-29T17:00:48","guid":{"rendered":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/?p=563847"},"modified":"2019-07-08T13:25:58","modified_gmt":"2019-07-08T20:25:58","slug":"microsoft-research-to-present-latest-findings-on-fairness-in-socio-technical-systems-at-fat-2019","status":"publish","type":"post","link":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/blog\/microsoft-research-to-present-latest-findings-on-fairness-in-socio-technical-systems-at-fat-2019\/","title":{"rendered":"Microsoft Research to present latest findings on fairness in socio-technical systems at FAT* 2019"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-564036\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/01\/FAT_AI_Site_01_2019_1400x788-1024x576.png\" alt=\"\" width=\"1024\" height=\"576\" srcset=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/01\/FAT_AI_Site_01_2019_1400x788-1024x576.png 1024w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/01\/FAT_AI_Site_01_2019_1400x788-300x169.png 300w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/01\/FAT_AI_Site_01_2019_1400x788-768x432.png 768w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/01\/FAT_AI_Site_01_2019_1400x788-1066x600.png 1066w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/01\/FAT_AI_Site_01_2019_1400x788-655x368.png 655w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/01\/FAT_AI_Site_01_2019_1400x788-343x193.png 343w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/01\/FAT_AI_Site_01_2019_1400x788.png 1400w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p>Researchers from Microsoft Research will present a series of studies and insights relating to fairness in 
machine learning systems and allocations at the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/fatconference.org\/2019\/\">FAT* Conference<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>\u2014the new flagship conference for fairness, accountability, and transparency in socio-technical systems\u2014to be held from January 29\u201331 in Atlanta, Georgia.<\/p>\n<p>Presented across four papers and covering a broad spectrum of domains, the research is a reflection of the resolute commitment Microsoft Research has made to fairness in automated systems that shape human experience as they become more rapidly adopted in a growing number of contexts in society.<\/p>\n<h3>Bias in bios<\/h3>\n<p>In &#8220;<a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/publication\/bias-in-bios-a-case-study-of-semantic-representation-bias-in-a-high-stakes-setting\/\">Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting,&#8221;<\/a><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/mariadearteaga.com\/\"> Maria De-Arteaga<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"http:\/\/www.cs.uml.edu\/~aromanov\/\">Alexey Romanov<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/wallach\/\">Hanna Wallach<\/a>, <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/jchayes\/\">Jennifer Chayes<\/a>, <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/borgs\/\">Christian Borgs<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener 
noreferrer\" target=\"_blank\" href=\"http:\/\/www.andrew.cmu.edu\/user\/achoulde\/\">Alexandra Chouldechova<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.linkedin.com\/in\/sahin-geyik-3099314a\/\"> Sahin Geyik<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"http:\/\/theory.stanford.edu\/~kngk\/\">Krishnaram Kenthapadi<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, and <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/adum\/\">Adam Kalai<\/a> look closely at presumptions and realities regarding gender bias in occupation classification, shedding light on risks inherent in using machine learning in high-stakes settings, as well as on the difficulties that arise when trying to promote fairness by scrubbing explicit gender indicators, such as first names and pronouns, from online bios.<\/p>\n<p>Online recruiting and automated hiring are an enormously impactful societal domain in which the use of machine learning is increasingly popular\u2014and in which unfair practices can lead to unexpected and undesirable consequences. Maintaining an online professional presence has become indispensable for people\u2019s careers, and the data making up that presence often ends up in automated decision-making systems that advertise open positions and recruit candidates for jobs and other professional opportunities. 
To execute these tasks, a system must be able to accurately assess people\u2019s current occupations, skills, interests, and, more subjective but no less real, their potential.<\/p>\n<p>\u201cAutomated decision-making systems are playing an increasingly active role in shaping our lives\u2014and their predictions today even go as far as to affect the world we will live in tomorrow,\u201d said <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/wallach\/\">Hanna Wallach, Principal Researcher at Microsoft Research New York City<\/a>. \u201cFor example, machine learning is becoming increasingly popular in online recruiting and automated hiring contexts. Many of us have had jobs or other professional opportunities automatically suggested to us based on our online professional presences, and we were curious how much these recommendations might be affected by something like our genders.\u201d<\/p>\n<p>The researchers created a new dataset of hundreds of thousands of online biographies and were able to show that occupation classifiers exhibit significant true positive rate (TPR) gender gaps when using three different semantic representations\u2014bag-of-words, word embeddings, and deep recurrent neural networks. They were also able to show that the correlation between these TPR gender gaps and existing gender imbalances in occupations may compound the imbalances. They performed simulations demonstrating that imbalances are especially problematic if people repeatedly encounter occupation classifiers, because the classifiers cause underrepresented genders to become even further underrepresented.<\/p>\n<p>The researchers observed that because biographies are typically written in the third person by their subjects (or people familiar with their subjects) and because pronouns are often gendered in English, they were able to extract subjects\u2019 (likely) self-identified binary genders from the biographies. 
But they took pains to point out that a binary model of gender is a simplification that fails to capture important aspects of gender and erases people who do not fit within its assumptions.<\/p>\n<p>\u201cWe found that when explicit gender indicators\u2014such as first names and pronouns\u2014are present, machine learning classifiers trained to predict people\u2019s occupations do much worse at correctly predicting the occupations of women in stereotypically male professions and men in stereotypically female professions,\u201d said Wallach.<\/p>\n<p>Even when such gender indicators are scrubbed, these performance differences, though less pronounced, remain. In addition to the realization that scrubbing explicit gender indicators isn\u2019t enough to remove gender bias from occupation classifiers, the researchers discovered that even in the absence of such indicators, TPR gender gaps are correlated with existing gender imbalances in occupations. That is, occupation classifiers may in fact exacerbate existing gender imbalances.<\/p>\n<p>\u201cThese findings have very real implications in that they suggest that machine learning classifiers trained to predict people\u2019s occupations may compound or worsen existing gender imbalances in some occupations,\u201d said Wallach.<\/p>\n<p>The findings also suggested that there are differences between men\u2019s and women\u2019s online biographies other than explicit gender indicators, perhaps because of the varying ways that men and women present themselves or their having different specializations within various occupations.<\/p>\n<p>\u201cOur paper highlights both the risks of using machine learning in a high-stakes setting and the difficulties inherent in trying to promote fairness by \u2018scrubbing\u2019 sensitive attributes, such as gender,\u201d Wallach said.<\/p>\n<p>Although the researchers focused on gender bias, they noted that other biases, such as those involving race or socioeconomic status, may also be 
present in occupation classification or in other tasks related to online recruiting and automated hiring.<\/p>\n<h3>Sending signals<\/h3>\n<p>In a world in which personal data drives more and more decision-making, both consequential and routine, there is a growing interest in the ways in which such data-driven decision-making has the potential to reinforce or amplify injustices. In &#8220;<a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/publication\/access-to-population-level-signaling-as-a-source-of-inequality\/\">Access to Population-Level Signaling as a Source of Inequality,<\/a>&#8221; <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/nicimm\/\">Nicole Immorlica<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"http:\/\/www.cs.huji.ac.il\/~katrina\/\">Katrina Ligett<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, and <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"http:\/\/www.its.caltech.edu\/~jziani\/\">Juba Ziani<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> examine the idea of fairness through an economic lens, finding that disparity in the data available to unbiased decision-makers\u2014optimizing determinations to fit their specific needs\u2014results in one population gaining a significant advantage over another.<\/p>\n<p>The researchers studied access to population-level signaling as a source of bias in outcomes. Population-level strategic signalers can serve as advocates for owners of personal data by filtering or noising data in hopes of improving individuals&#8217; prospects by making it more challenging for decision-makers to distinguish between high- and low-quality candidates. 
An example is high schools that, to increase the chances their students will be admitted to prestigious universities, inflate grades, refrain from releasing data on class rankings, and provide glowing recommendation letters for more than just the top students.<\/p>\n<p>The sophistication of the signaling that a school might engage in\u2014how strategic the school is in its data reporting versus how revealing it is (simply reporting the information it collects on its students directly to a university)\u2014makes an enormous difference in outcomes. As expected, strategic schools with accurate information about their students have a significant advantage over revealing schools\u2014and strategic schools get more of their students, including unqualified ones, admitted by a university.<\/p>\n<p>\u201cOne of the many sources of unfairness is that disadvantaged groups often lack the ability to signal their collective quality to decision-makers, meaning each individual must prove their worth on their own merits,\u201d said <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/nicimm\/\">Principal Researcher Nicole Immorlica of Microsoft Research New England and New York City<\/a>. \u201cIn comparison, members of advantaged groups are often lumped together, causing individuals to acquire the average quality of their peers in the mind of the decision-maker.\u201d<\/p>\n<p>The researchers go on to derive an optimal signaling scheme for a high school and demonstrate that disparities in ability to signal strategically can constitute a significant source of inequality. 
The researchers also examine the potential for standardized tests to ameliorate the problem, concluding it is limited in its ability to address strategic signaling inequities and may even exacerbate these inequities in some settings.<\/p>\n<p>\u201cBy looking at fairness through an economic lens, we can uncover purely structural sources of unfairness that persist even when unbiased decision-makers act only to maximize their own benefit,\u201d said Immorlica.<\/p>\n<h3>Strategic manipulation<\/h3>\n<p>In &#8220;<a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/publication\/the-disparate-effects-of-strategic-manipulation\/\">The Disparate Effects of Strategic Manipulation,<\/a>&#8221; <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/scholar.harvard.edu\/lilyhu\/home\">Lily Hu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/nicimm\/\">Nicole Immorlica<\/a>, and <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/jenn\/\">Jenn Wortman Vaughan<\/a> show how the expanding realm of algorithmic decision-making can change the way that individuals present themselves to obtain an algorithm\u2019s approval and how this can lead to increased social stratification.<\/p>\n<p>\u201cWe study an aspect of algorithmic fairness that has received relatively little attention: the disparities that can arise from different populations\u2019 differing abilities to strategically manipulate the way that they appear in order to be classified a certain way,\u201d said <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/jenn\/\">Jenn Wortman Vaughan, Senior Researcher at Microsoft Research New York City<\/a>. \u201cTake the example of college admissions. Suppose that admissions decisions incorporate SAT scores as a feature. 
Knowing that SAT scores impact decisions will prompt students who have the means to do so to boost their scores, say by taking SAT prep courses.\u201d<\/p>\n<p>As <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/scholar.harvard.edu\/lilyhu\/home\">Lily Hu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, a research intern from Harvard and lead author on the paper, put it, \u201cClassifiers don\u2019t just evaluate their subjects, but can animate them, as well.\u201d That is, the very existence of a classifier causes people to react. This becomes a problem when not every person has equal access to resources like test prep classes in the example of college admissions or interview coaching in the domain of automated hiring. Even when an algorithm draws on features that seem to reflect individual merit, these metrics can be skewed to favor those who are more readily able to alter their features.<\/p>\n<p>The researchers believe their work highlights a likely consequence of the expansion of algorithmic decision-making in a world that is marked by deep social inequalities. They demonstrate that the design of classification systems can grant undue rewards to those who appear more meritorious under a particular conception of merit while justifying exclusions of those who have failed to meet those standards. These consequences serve to exacerbate existing inequalities.<\/p>\n<p>\u201cOur game theoretic analysis shows how the relative advantage of privileged groups can be perpetuated in settings like this and that this problem is not so easy to fix,\u201d explained Wortman Vaughan. 
\u201cFor example, coming back to the college admissions example, we show that providing subsidies on SAT test prep courses to disadvantaged groups can have the counterintuitive effect of making those students worse off since it allows the bar for admissions to be set higher.\u201d<\/p>\n<p>\u201cIt is important to study the impacts of interventions in stylized models in order to illuminate the potential pitfalls,\u201d added fellow researcher Nicole Immorlica.<\/p>\n<h3>Allocating the indivisible<\/h3>\n<p>&#8220;<a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/publication\/fair-allocation-through-competitive-equilibrium-from-generic-incomes\/\">Fair Allocation through Competitive Equilibrium from Generic Incomes<\/a>,&#8221; by <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/moshe\/\">Moshe Babaioff<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"http:\/\/www.cs.huji.ac.il\/~noam\/\">Noam Nisan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, and <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"http:\/\/www.cs.huji.ac.il\/~italgam\/\">Inbal Talgam-Cohen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, examines an underexplored area of theory\u2014that of notions of fairness as applied to the allocation of indivisible items among players possessing different entitlements in settings without money. Imagine a scenario in which there are two food banks catering to populations of different sizes with different needs and that the two food banks must divide between each other a donation of food items. 
What will constitute a \u201cfair\u201d allocation of the available items?<\/p>\n<p>These scenarios arise frequently in the context of real-life allocation decisions, such as allocating donations to food banks, allocating courses to students, distributing shifts among workers, and even sharing computational resources across a university or company. The researchers sought to develop notions of fairness that apply to these types of settings and opted for an approach that would study fairness through the prism of competitive market equilibrium, even for cases in which entitlements differ.<\/p>\n<p>Focusing on market equilibrium theory for the Fisher market model, the researchers developed new fairness notions through a classic connection to competitive equilibrium. The first notion generalizes the well-known procedure for dividing a cake fairly between two kids (the first kid cuts the cake, and the second picks a piece) to the case of unequal entitlements and indivisible goods. The second notion ensures that when we cannot give both parties what they deserve, we at least give as much as possible to the one who received less than their entitlement.<\/p>\n<p>\u201cOur paper shows that for allocation of goods, market equilibrium ensures some attractive fairness properties even when people have different entitlements for the goods,\u201d said <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/moshe\/\">Moshe Babaioff, Senior Researcher at Microsoft Research<\/a>. \u201cAnd although such market equilibria might fail to exist for some entitlements, we show that this is a knife\u2019s-edge phenomenon that disappears once entitlements are slightly perturbed.\u201d<\/p>\n<h3>Don\u2019t miss the tutorial<\/h3>\n<p>In addition to the papers previewed here, there are many other exciting happenings at FAT* 2019. 
On the first day of the conference, Microsoft Research attendees, along with researchers from Spotify and Carnegie Mellon University, will be giving a tutorial titled &#8220;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/algorithmicbiasinpractice.wordpress.com\/\">Challenges of Incorporating Algorithmic Fairness into Industry Practice<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.&#8221; The tutorial draws on semi-structured interviews, a survey of machine learning practitioners, and the presenters\u2019 own practical experiences to provide an overview of the organizational and technical challenges that occur when translating research on fairness into practice.<\/p>\n<p>These efforts reflect Microsoft Research\u2019s commitment to fairness, accountability, transparency, and ethics in AI and machine learning systems. The <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/group\/fate\/\">FATE research group<\/a> at Microsoft studies the complex social implications of AI, machine learning, data science, large-scale experimentation, and increasing automation. A relatively new group, FATE is working on collaborative research projects that address these larger issues, including interpretability. FATE publishes across a variety of disciplines, including machine learning, information retrieval, sociology, algorithmic economics, political science, science and technology studies, and human-computer interaction.<\/p>\n<p>We look forward to sharing our work at <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/fatconference.org\/2019\/\">FAT* 2019<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>. 
Hope to see you there!<\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Researchers from Microsoft Research will present a series of studies and insights relating to fairness in machine learning systems and allocations at the FAT* Conference\u2014the new flagship conference for fairness, accountability, and transparency in socio-technical systems\u2014to be held from January 29\u201331 in Atlanta, Georgia. Presented across four papers and covering a broad spectrum of domains, [&hellip;]<\/p>\n","protected":false},"author":37074,"featured_media":564036,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":[],"msr_hide_image_in_river":0,"footnotes":""},"categories":[194455],"tags":[],"research-area":[13556],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-563847","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-machine-learning","msr-research-area-artificial-intelligence","msr-locale-en_us"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[437514],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[372368],"related-projects":[],"related-events":[],"related-researchers":[],"msr_type":"Post","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/01\/FAT_AI_Site_01_2019_1400x788.png\" class=\"img-object-cover\" alt=\"\" decoding=\"async\" loading=\"lazy\" 
srcset=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/01\/FAT_AI_Site_01_2019_1400x788.png 1400w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/01\/FAT_AI_Site_01_2019_1400x788-300x169.png 300w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/01\/FAT_AI_Site_01_2019_1400x788-768x432.png 768w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/01\/FAT_AI_Site_01_2019_1400x788-1024x576.png 1024w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/01\/FAT_AI_Site_01_2019_1400x788-1066x600.png 1066w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/01\/FAT_AI_Site_01_2019_1400x788-655x368.png 655w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2019\/01\/FAT_AI_Site_01_2019_1400x788-343x193.png 343w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","byline":"","formattedDate":"January 29, 2019","formattedExcerpt":"Researchers from Microsoft Research will present a series of studies and insights relating to fairness in machine learning systems and allocations at the FAT* Conference\u2014the new flagship conference for fairness, accountability, and transparency in socio-technical systems\u2014to be held from January 29\u201331 in Atlanta, Georgia. 
Presented&hellip;","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts\/563847","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/users\/37074"}],"replies":[{"embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/comments?post=563847"}],"version-history":[{"count":7,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts\/563847\/revisions"}],"predecessor-version":[{"id":564261,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts\/563847\/revisions\/564261"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/media\/564036"}],"wp:attachment":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/media?parent=563847"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/categories?post=563847"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/tags?post=563847"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=563847"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=563847"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=563847"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/cm-edge
tun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=563847"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=563847"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=563847"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=563847"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=563847"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}