{"id":305426,"date":"2011-10-17T09:00:14","date_gmt":"2011-10-17T16:00:14","guid":{"rendered":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/?p=305426"},"modified":"2016-10-13T12:20:43","modified_gmt":"2016-10-13T19:20:43","slug":"two-extremes-touch-interaction","status":"publish","type":"post","link":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/blog\/two-extremes-touch-interaction\/","title":{"rendered":"Two Extremes of Touch Interaction"},"content":{"rendered":"<p><em>By Janie Chang, Writer, Microsoft Research<\/em><\/p>\n<p><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/lab\/microsoft-research-redmond\/\" target=\"_blank\">Microsoft Research Redmond<\/a> researchers <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/benko\/\" target=\"_blank\">Hrvoje Benko<\/a> and <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/ssaponas\/\" target=\"_blank\">Scott Saponas<\/a> have been investigating the use of touch interaction in computing devices since the mid-\u201900s. Now, two sharply different yet related projects demonstrate novel approaches to the world of touch and gestures.<\/p>\n<p>Wearable Multitouch Interaction gives users the ability to make an entire wall a touch surface, while PocketTouch enables users to interact with smartphones inside a pocket or purse, a small surface area for touch. Both projects will be unveiled during <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"https:\/\/uist.acm.org\/uist2011\/\" target=\"_blank\">UIST 2012<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, the Association for Computing Machinery\u2019s 24th Symposium on User Interface Software and Technology, being held Oct. 16-19 in Santa Barbara, Calif.<\/p>\n<h2>Make Every Surface a Touch Screen<\/h2>\n<p>Wearable Multitouch Interaction turns any surface in the user\u2019s environment into a touch interface. A paper co-authored by <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/www.chrisharrison.net\/\" target=\"_blank\">Chris Harrison<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, a Ph.D. 
student at Carnegie Mellon University and a former Microsoft Research intern; Benko; and <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/people\/awilson\/\" target=\"_blank\">Andy Wilson<\/a> describes a wearable system that enables graphical, interactive, multitouch input on arbitrary, everyday surfaces.<\/p>\n<div id=\"attachment_305450\" style=\"width: 320px\" class=\"wp-caption alignleft\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-305450\" class=\"size-full wp-image-305450\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2011\/10\/Hrvoje-Benko.png\" alt=\"Hrvoje Benko\" width=\"310\" height=\"315\" srcset=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2011\/10\/Hrvoje-Benko.png 310w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2011\/10\/Hrvoje-Benko-295x300.png 295w\" sizes=\"auto, (max-width: 310px) 100vw, 310px\" \/><p id=\"caption-attachment-305450\" class=\"wp-caption-text\">Hrvoje Benko<\/p><\/div>\n<p>\u201cWe wanted to capitalize on the tremendous surface area the real world provides,\u201d explains Benko, of the <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/group\/natural-interaction-research\/\" target=\"_blank\">Natural Interaction Research<\/a> group. \u201cThe surface area of one hand alone exceeds that of typical smartphones. Tables are an order of magnitude larger than a tablet computer. If we could appropriate these ad hoc surfaces in an on-demand way, we could deliver all of the benefits of mobility while expanding the user\u2019s interactive capability.\u201d<\/p>\n<p>The Wearable Multitouch Interaction prototype is a novel, wearable combination of a laser-based pico projector and a depth-sensing camera. The camera is an advanced, custom prototype provided by PrimeSense. Once the camera and projector are calibrated to each other, the user can don the system and begin using it.<\/p>\n<p>\u201cThis custom camera works on a similar principle to <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/www.xbox.com\/en-US\/xbox-one\/accessories\/kinect\" target=\"_blank\">Kinect<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u201d Benko says, \u201cbut it is modified to work at short range. This camera and projector combination simplified our work because the camera reports depth in world coordinates, which are used when modeling a particular graphical world; the laser-based projector delivers an image that is always in focus, so we didn\u2019t need to calibrate for focus.\u201d<\/p>
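<p>To make the geometry concrete: once the pair is calibrated, the projector can be treated as an inverse camera, so a 3-D point reported by the depth camera maps directly to the projector pixel that lights it up. The sketch below is a minimal illustration under placeholder calibration values; the matrices, names, and numbers are assumptions, not the project\u2019s actual calibration:<\/p>\n<pre>import numpy as np\n\n# Illustrative placeholder calibration, not the prototype's values.\n# K holds the projector's focal lengths and principal point; R and t\n# carry world coordinates into the projector's frame.\nK = np.array([[1400.0, 0.0, 640.0],\n              [0.0, 1400.0, 360.0],\n              [0.0, 0.0, 1.0]])\nR = np.eye(3)     # assume projector axes aligned with the world axes\nt = np.zeros(3)   # and a projector sitting at the world origin\n\ndef world_to_projector(p_world):\n    # Map a 3-D point from the depth camera (meters, world coordinates)\n    # to the projector pixel that should be lit to draw on that point.\n    p = R @ p_world + t   # into the projector's coordinate frame\n    u = K @ p             # perspective projection\n    return u[:2] \/ u[2]   # homogeneous divide -&gt; pixel (x, y)\n\n# Example: a point 40 cm in front of the unit lands near mid-screen.\nprint(world_to_projector(np.array([0.05, 0.02, 0.40])))   # ~[815. 430.]<\/pre>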
<p>The early phases of this work raised some metaphysical questions. If any surface can act as an interactive surface, then what does the user interact with, and what is the user interacting on? The team also debated the notion of turning everything in the environment into a touch surface. Sensing touch on an arbitrary, deformable surface is a difficult problem that no one had tackled before: touch surfaces are usually highly engineered devices, yet the team wanted to turn walls, notepads, and hands into interactive surfaces\u2014while enabling the user to move about. The researchers agree that the first three weeks of the project were the most challenging.<\/p>\n<p>Harrison recalls their early brainstorming sessions.<\/p>\n<div id=\"attachment_305453\" style=\"width: 320px\" class=\"wp-caption alignright\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-305453\" class=\"size-full wp-image-305453\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2011\/10\/Chris-Harrison.png\" alt=\"Chris Harrison\" width=\"310\" height=\"315\" srcset=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2011\/10\/Chris-Harrison.png 310w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2011\/10\/Chris-Harrison-295x300.png 295w\" sizes=\"auto, (max-width: 310px) 100vw, 310px\" \/><p id=\"caption-attachment-305453\" class=\"wp-caption-text\">Chris Harrison<\/p><\/div>\n<p>\u201cWe had to assume it was possible,\u201d he says, \u201cthen go about defining the system and its interactions, then conduct initial tests with different technologies to see how we could implement the concept. It was during those initial weeks that we achieved the biggest breakthroughs in our thinking. That was a really exciting stage of research.\u201d<\/p>\n<p>One of the key decisions for Wearable Multitouch Interaction was that the system would interact with fingers. This raised the challenge of finger segmentation: defining for the system what fingers look like so that it could identify fingers, or finger-like shapes, in the scene. From this decision followed the notion that any surface underneath those fingers is potentially a projection surface for touch interaction.<\/p>\n<div id=\"attachment_305456\" style=\"width: 410px\" class=\"wp-caption alignleft\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-305456\" class=\"size-full wp-image-305456\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2011\/10\/Wearable-Multitouch-Interaction-system.png\" alt=\"A prototype of a shoulder-worn Wearable Multitouch Interaction system\" width=\"400\" height=\"204\" srcset=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2011\/10\/Wearable-Multitouch-Interaction-system.png 400w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2011\/10\/Wearable-Multitouch-Interaction-system-300x153.png 300w\" sizes=\"auto, (max-width: 400px) 100vw, 400px\" \/><p id=\"caption-attachment-305456\" class=\"wp-caption-text\">A prototype of a shoulder-worn Wearable Multitouch Interaction system.<\/p><\/div>\n<p>Then came the next problem: click detection. How can the system detect a touch when the surface being touched contains no sensors?<\/p>\n<p>\u201cIn this case, we\u2019re detecting proximity at a very fine level,\u201d Benko explains. \u201cThe system decides the finger is touching the surface if it\u2019s close enough to constitute making contact. This was fairly tricky, and we used a depth map to determine proximity. In practice, a finger is seen as \u2018clicked\u2019 when its hover distance drops to one centimeter or less above a surface, and we even manage to maintain the clicked state for dragging operations.\u201d<\/p>
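<p>One plausible reading of that mechanism, sketched below, is a hover-distance threshold with a little hysteresis so that depth noise does not break a drag. The class, the exact thresholds, and the per-frame inputs are assumptions for illustration, not the project\u2019s implementation:<\/p>\n<pre>TOUCH_DOWN_M = 0.010   # clicked once hover drops to ~1 cm or less\nTOUCH_UP_M   = 0.015   # release only above a larger threshold, so the\n                       # clicked state survives depth noise mid-drag\n\nclass ClickTracker:\n    def __init__(self):\n        self.clicked = False\n\n    def update(self, finger_depth_m, surface_depth_m):\n        # Hover distance from the depth map: how far the fingertip\n        # floats above the surface directly beneath it.\n        hover = surface_depth_m - finger_depth_m\n        if not self.clicked and hover &lt;= TOUCH_DOWN_M:\n            self.clicked = True    # touch-down event\n        elif self.clicked and hover &gt; TOUCH_UP_M:\n            self.clicked = False   # touch-up event\n        return self.clicked\n\n# Per frame: fingertip depth and the depth of the surface beneath it.\ntracker = ClickTracker()\nfor finger_d, surface_d in [(0.52, 0.53), (0.525, 0.53), (0.51, 0.53)]:\n    print(tracker.update(finger_d, surface_d))   # True, True, False<\/pre>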
<div id=\"attachment_305462\" style=\"width: 321px\" class=\"wp-caption alignright\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-305462\" class=\"size-full wp-image-305462\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2011\/10\/Wearable-touch-screen.png\" alt=\"wearable touch screen\" width=\"311\" height=\"489\" srcset=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2011\/10\/Wearable-touch-screen.png 311w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2011\/10\/Wearable-touch-screen-191x300.png 191w\" sizes=\"auto, (max-width: 311px) 100vw, 311px\" \/><p id=\"caption-attachment-305462\" class=\"wp-caption-text\">The Wearable Multitouch Interaction system enables any surface to be used as a touch screen.<\/p><\/div>\n<p>One of the more interesting discussions during this project was how to determine where to place the interface surface. The team explored two approaches. The first was a classification-driven model in which the system classified specific objects that could be used as a surface: a hand, an arm, a notepad, or a wall. This required training a machine-learning classifier to recognize these objects.<\/p>\n<p>The second approach took a completely user-driven model, enabling the user to finger-draw a working area on any surface in front of the camera\/projector system.<\/p>\n<p>\u201cWe wanted the ability to use any surface,\u201d Benko says. \u201cLet the user define the area where they want the interface to be, and have the system do its best to track it frame to frame. This creates a highly flexible, on-demand user interface. You can tap on your hand or drag your interface out to specify the top-left and bottom-right borders. All this stems from the main idea that if everything around you is a potential interface, then the first action has to be defining an interface area.\u201d<\/p>
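<p>A drag-to-define rectangle of that kind takes only a few lines. The sketch below pairs with the click tracker above; the event format, the function name, and the corner convention are hypothetical choices for illustration, not the system\u2019s actual code:<\/p>\n<pre>def define_interface_area(touch_events):\n    # touch_events: per-frame (clicked, x, y) samples, e.g. from the\n    # click tracker above combined with fingertip tracking.\n    start = end = None\n    for clicked, x, y in touch_events:\n        if clicked and start is None:\n            start = (x, y)   # touch-down: first corner\n        if clicked:\n            end = (x, y)     # keep updating while the drag continues\n        elif start is not None:\n            break            # touch-up: the rectangle is finished\n    if start is None or end is None:\n        return None\n    left, right = sorted((start[0], end[0]))\n    top, bottom = sorted((start[1], end[1]))\n    return (left, top, right, bottom)\n\n# A short drag from (10, 20) out to (200, 150):\nevents = [(False, 0, 0), (True, 10, 20), (True, 120, 90),\n          (True, 200, 150), (False, 200, 150)]\nprint(define_interface_area(events))   # (10, 20, 200, 150)<\/pre>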
<p>The team stresses that, although the prototype is not as small as they would like, there are no significant barriers to miniaturization: a future version of Wearable Multitouch Interaction could plausibly be the size of a matchbox and as easy to wear as a pendant or a watch.<\/p>\n<h2>PocketTouch: Through-Fabric Input Sensing<\/h2>\n<p><em>PocketTouch: Through-Fabric Capacitive Touch Input\u2014<\/em>written by Saponas, Harrison, and Benko\u2014describes a prototype that consists of a custom multitouch capacitive sensor mounted on the back of a smartphone. The sensor enables eyes-free multitouch input on the device through fabric, giving users a rich set of gesture interactions, ranging from simple touch strokes to full alphanumeric text entry, without having to remove the device from a pocket or bag.<\/p>\n<div id=\"attachment_305465\" style=\"width: 320px\" class=\"wp-caption alignleft\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-305465\" class=\"size-full wp-image-305465\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2011\/10\/Scott-Saponas.png\" alt=\"Scott Saponas\" width=\"310\" height=\"315\" srcset=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2011\/10\/Scott-Saponas.png 310w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2011\/10\/Scott-Saponas-295x300.png 295w\" sizes=\"auto, (max-width: 310px) 100vw, 310px\" \/><p id=\"caption-attachment-305465\" class=\"wp-caption-text\">Scott Saponas<\/p><\/div>\n<p>\u201cPeople already try to interact with a computing device through fabric,\u201d says Saponas, of the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/research.microsoft.com\/en-us\/um\/redmond\/groups\/medicaldevices\/\" target=\"_blank\">Computational User Experiences<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> group. \u201cThink of when you try to reach through your pocket to the slider that silences your phone. We wanted to take a different spin by asking: Can we use a higher-bandwidth touch surface to provide a wider range of actual input?\u201d<\/p>\n<p>The challenge was reliably detecting multitouch strokes through fabric, and one of the key problems to solve was orientation. Harrison recalls the brainstorming around PocketTouch.<\/p>\n<p>\u201cIf you think about a device that is randomly positioned in your pocket or purse,\u201d Harrison explains, \u201cyou really have no idea of user orientation. Sure, a gyroscope can tell you whether the device is facing up or down, but you still wouldn\u2019t know from which side the user is going to approach the device.\u201d<\/p>\n<p>The team resolved this with an orientation-defining unlock gesture that determines the coordinate plane and initializes the device for interaction. Once initialized, the user can approach from any direction, as long as the orientation stays consistent. PocketTouch then separates purposeful finger strokes from background noise and uses them as input.<\/p>
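<p>The core idea of an orientation-defining gesture can be illustrated simply: take the unlock swipe\u2019s overall direction as the user\u2019s x-axis and rotate every later touch position into that frame. The stroke format and function names below are assumptions for illustration, not PocketTouch\u2019s actual processing:<\/p>\n<pre>import math\n\ndef frame_from_unlock_stroke(stroke):\n    # Treat the unlock swipe's overall direction as the user's x-axis.\n    (x0, y0), (x1, y1) = stroke[0], stroke[-1]\n    angle = math.atan2(y1 - y0, x1 - x0)\n    c, s = math.cos(-angle), math.sin(-angle)\n    def to_user_frame(x, y):\n        # Rotate raw sensor coordinates into the user's frame, so later\n        # strokes read the same however the phone sits in the pocket.\n        return (c * x - s * y, s * x + c * y)\n    return to_user_frame\n\n# Unlock swipe drawn diagonally because the phone is tilted 45 degrees:\nunlock = [(0.0, 0.0), (1.0, 1.0)]\nto_user = frame_from_unlock_stroke(unlock)\nprint(to_user(1.0, 1.0))   # ~(1.414, 0.0): the swipe becomes the x-axis<\/pre>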
<p>The next challenge was to process strokes so the system could recognize characters written one atop another in the same small physical area. Happily, the problem of recognizing multistroke input turned out to be a matter of adapting existing solutions.<\/p>\n<p>\u201cMicrosoft Windows already contains a very rich and adaptive stroke-recognition engine,\u201d Benko says. \u201cSo if the user is sloppy with strokes\u2014and believe me, when you\u2019re doing it through the pocket of a jacket, the results are sloppy\u2014these systems have the language model to handle it. That made PocketTouch a lot more robust than one would expect, as you can see from the <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/video\/pockettouch-through-fabric-capacitive-touch-input\/\" target=\"_blank\">video<\/a>.\u201d<\/p>\n<p>Finally, the researchers tested the feasibility of using finger strokes through enclosing material to control a device equipped with capacitive sensing. They tested across fabric thicknesses, fiber types, garment types, and pocket locations. The results exceeded expectations.<\/p>\n<div id=\"attachment_305468\" style=\"width: 410px\" class=\"wp-caption alignright\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-305468\" class=\"size-full wp-image-305468\" src=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2011\/10\/PocketTouch-prototype.png\" alt=\"PocketTouch prototype\" width=\"400\" height=\"400\" srcset=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2011\/10\/PocketTouch-prototype.png 400w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2011\/10\/PocketTouch-prototype-150x150.png 150w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2011\/10\/PocketTouch-prototype-300x300.png 300w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2011\/10\/PocketTouch-prototype-180x180.png 180w, https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2011\/10\/PocketTouch-prototype-360x360.png 360w\" sizes=\"auto, (max-width: 400px) 100vw, 400px\" \/><p id=\"caption-attachment-305468\" class=\"wp-caption-text\">The prototype of PocketTouch can be worn and operated in a variety of pockets.<\/p><\/div>\n<p>\u201cWe didn\u2019t think that a heavy fleece or a jacket pocket would provide enough of a signal to the sensor,\u201d Saponas recalls. \u201cWe only included them during testing to demonstrate a full range of options. To our astonishment, they worked anyway. So we knew we had solved the toughest challenge, which was to figure out a reliable way to detect and segment strokes from the capacitive touch sensor through fabric.\u201d<\/p>\n<p>The defining difference of this work is adaptability. Touch devices such as touch screens are carefully engineered, manufactured, and calibrated to give users an optimal experience. PocketTouch is fundamentally different: instead of calibrating once for a particular surface, it calibrates continuously, adaptively optimizing the touch experience to account for different surfaces.<\/p>
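<p>Continuous calibration of a capacitive array is commonly done by tracking a slowly adapting per-electrode baseline and treating only readings well above it as touch, so thresholds follow whatever fabric happens to cover the sensor. The sketch below shows that general technique; the constants, class, and frame format are illustrative assumptions, not the prototype\u2019s firmware:<\/p>\n<pre>ALPHA = 0.05     # baseline adaptation rate\nMARGIN = 30.0    # counts above baseline that register as a touch\n\nclass AdaptiveSensor:\n    def __init__(self):\n        self.baseline = None\n\n    def update(self, raw):\n        if self.baseline is None:\n            self.baseline = [float(r) for r in raw]   # seed on first frame\n        touched = []\n        for i, r in enumerate(raw):\n            signal = r - self.baseline[i]\n            if signal &gt; MARGIN:\n                touched.append(i)   # finger over electrode i\n            else:\n                # No touch here: fold the reading into the baseline so\n                # thresholds keep tracking whatever covers the pad.\n                self.baseline[i] += ALPHA * signal\n        return touched\n\nsensor = AdaptiveSensor()\nfor frame in [[102, 100, 98, 101], [100, 99, 97, 100], [101, 170, 99, 100]]:\n    print(sensor.update(frame))   # [], then [], then [1]<\/pre>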
<p>\u201cSometimes the best technologies are surprises to the people who build them,\u201d Harrison says with a grin. \u201cThis was an elegant idea that worked much better than we\u2019d hoped. It\u2019s a blessing that researchers don\u2019t often get.\u201d<\/p>\n<h2>Continual Evolution of Touch<\/h2>\n<p>Benko also stresses that both Wearable Multitouch Interaction and PocketTouch are evolutionary steps in a larger Microsoft Research effort to investigate unconventional uses of touch in devices, extending Microsoft\u2019s vision of ubiquitous computing. He notes that PocketTouch has a lineage dating to the <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/publication\/mouse-2-0-multi-touch-meets-the-mouse\/\" target=\"_blank\">Mouse 2.0<\/a> project and work on the <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/publication\/grips-and-gestures-on-a-multi-touch-pen\/\" target=\"_blank\">multitouch pen<\/a>, while Wearable Multitouch Interaction shares concepts with <a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/publication\/combining-multiple-depth-cameras-and-projectors-for-interactions-on-above-and-between-surfaces\/\" target=\"_blank\">LightSpace<\/a>.<\/p>\n<p>\u201cIt\u2019s interesting to isolate these projects,\u201d Benko remarks, \u201cbut sometimes, it\u2019s much more interesting to look at them as evolving toward a broader vision. We are trying to push the boundaries of this rich space of touch and gestures, making gestural interactions available on any surface and with any device.\u201d<\/p>\n<h2>Microsoft Research Papers at UIST 2011<\/h2>\n<p>Besides the Wearable Multitouch Interaction paper, PocketTouch and five other papers from Microsoft Research are being presented during UIST 2011:<\/p>\n<p><strong><em>PocketTouch: Through-Fabric Capacitive Touch Input<\/em><\/strong><br \/>\nT. Scott Saponas, Microsoft Research Redmond; Chris Harrison, Carnegie Mellon University; and Hrvoje Benko, Microsoft Research Redmond.<\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/www.juew.org\/publication\/uist11.pdf\" target=\"_blank\"><strong><em>Pause-and-Play: Automatically Linking Screencast Video Tutorials with Applications<\/em><\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nSuporn Pongnumkul, University of Washington; Mira Dontcheva, Adobe Systems; Wilmot Li, Adobe Systems; Jue Wang, Adobe Systems; Lubomir Bourdev, Adobe Systems; Shai Avidan, Adobe Systems; and Michael Cohen, Microsoft Research Redmond.<\/p>\n<p><a href=\"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-content\/uploads\/2016\/10\/access_overlays.pdf\" target=\"_blank\"><strong><em>Access Overlays: Improving Non-Visual Access to Large Touch Screens for Blind Users<\/em><\/strong><\/a><br \/>\nShaun K. Kane, University of Washington and University of Maryland, Baltimore County; Meredith Ringel Morris, Microsoft Research Redmond; Annuska Perkins, Microsoft; Daniel Wigdor, University of Toronto and Microsoft Research Redmond; Richard E. Ladner, University of Washington; and Jacob O. Wobbrock, University of Washington.<\/p>\n<p><strong><em>Portico: Tangible Interaction on and Around a Tablet<\/em><\/strong><br \/>\nDaniel Avrahami, Intel and University of Washington; Jacob O.
Wobbrock, University of Washington; and Shahram Izadi, Microsoft Research Cambridge.<\/p>\n<p><strong><em>KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera<\/em><\/strong><br \/>\nShahram Izadi, Microsoft Research Cambridge; David Kim, Microsoft Research Cambridge; Otmar Hilliges, Microsoft Research Cambridge; David Molyneaux, Microsoft Research Cambridge; Richard Newcombe, Imperial College London; Pushmeet Kohli, Microsoft Research Cambridge; Jamie Shotton, Microsoft Research Cambridge; Steve Hodges, Microsoft Research Cambridge; Dustin Freeman, University of Toronto; Andrew Davison, Imperial College London; and Andrew Fitzgibbon, Microsoft Research Cambridge.<\/p>\n<p><strong><em>Vermeer: Direct Interaction with a 360-degree Viewable 3D Display<\/em><\/strong><br \/>\nAlex Butler, Microsoft Research Cambridge; Otmar Hilliges, Microsoft Research Cambridge; Shahram Izadi, Microsoft Research Cambridge; Steve Hodges, Microsoft Research Cambridge; David Molyneaux, Microsoft Research Cambridge; David Kim, Microsoft Research Cambridge; and Danny Kong, Microsoft Research Cambridge.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>By Janie Chang, Writer, Microsoft Research Microsoft Research Redmond researchers Hrvoje Benko and Scott Saponas have been investigating the use of touch interaction in computing devices since the mid-\u201900s. Now, two sharply different yet related projects demonstrate novel approaches to the world of touch and gestures. Wearable Multitouch Interaction gives users the ability to make [&hellip;]<\/p>\n","protected":false},"author":39507,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":[],"msr_hide_image_in_river":0,"footnotes":""},"categories":[194476],"tags":[214178,187348,214169,214175,214163,214172,213998,214001,214166],"research-area":[13552],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-305426","post","type-post","status-publish","format-standard","hentry","category-devices-and-hardware","tag-depth-sensing-camera","tag-kinect","tag-pockettouch","tag-primesense","tag-touch-interaction","tag-touch-interface","tag-uist-2012","tag-user-interface-software-and-technology","tag-wearable-multitouch-interaction","msr-research-area-hardware-devices","msr-locale-en_us"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[199565],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-events":[],"related-researchers":[],"msr_type":"Post","byline":"","formattedDate":"October 17, 2011","formattedExcerpt":"By Janie Chang, Writer, Microsoft Research Microsoft Research Redmond researchers Hrvoje Benko and Scott Saponas have been investigating the use of touch interaction in computing devices since the mid-\u201900s. 
Now, two sharply different yet related projects demonstrate novel approaches to the world of touch and&hellip;","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts\/305426","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/users\/39507"}],"replies":[{"embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/comments?post=305426"}],"version-history":[{"count":3,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts\/305426\/revisions"}],"predecessor-version":[{"id":305489,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts\/305426\/revisions\/305489"}],"wp:attachment":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/media?parent=305426"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/categories?post=305426"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/tags?post=305426"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=305426"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=305426"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=305426"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=305426"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=305426"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=305426"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=305426"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=305426"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}