{"id":4,"date":"2018-12-02T17:25:37","date_gmt":"2018-12-02T17:25:37","guid":{"rendered":"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/?p=4"},"modified":"2019-09-16T02:02:35","modified_gmt":"2019-09-16T01:02:35","slug":"will-technology-ever-fully-capture-context-and-does-it-matter","status":"publish","type":"post","link":"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/2018\/12\/02\/will-technology-ever-fully-capture-context-and-does-it-matter\/","title":{"rendered":"Will technology ever fully capture context and does it matter?"},"content":{"rendered":"<h2><strong>Abstract<\/strong><\/h2>\n<p><em>To equip the reader to answer this question, this article aims to clarify some of the important aspects of context in the field of technology. <\/em><\/p>\n<ul>\n<li><em>What is context and how is it captured?\u00a0<\/em><\/li>\n<li><em>Will technology be able to fully capture the context? <\/em><\/li>\n<li><em>Does it matter if technologies fail to fully capture context?<\/em><\/li>\n<\/ul>\n<p><em>To give an objective overview, this article will illustrate some already implemented technologies that capture context and briefly review their functionality.<\/em><\/p>\n<h2><strong>What is context in technology?<\/strong><\/h2>\n<p>A general notion of context can be obtained from Albrecht Schmidt and Michael Beigl, who define context as \u2013 \u201cThat which surrounds and gives meaning to something else\u201d[1]. Another interpretation, from Anind K. Dey and Gregory D. Abowd, describes context as \u2013 &#8220;Any information that can be used to characterise the situation of an entity&#8221;[2].<\/p>\n<p>These definitions are fairly ambiguous and do not reveal much about the true nature of context in terms of ubiquitous computing. 
To get a proper feel for it, we have to dig deeper into the topic.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-38 aligncenter\" src=\"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/Pasted_Image_2_4_15__9_09_AM-300x138.png\" alt=\"\" width=\"300\" height=\"138\" srcset=\"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/Pasted_Image_2_4_15__9_09_AM-300x138.png 300w, https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/Pasted_Image_2_4_15__9_09_AM-768x352.png 768w, https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/Pasted_Image_2_4_15__9_09_AM-1024x470.png 1024w, https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/Pasted_Image_2_4_15__9_09_AM.png 1500w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/p>\n<p>Currently, software is rather crude at capturing context. A telling example is that a smartphone will ring whether or not you are in a meeting. The problem is that our present context requires a deeper understanding of our dynamically changing environment than just \u201cyou\u2019re not in a meeting\u201d, doesn\u2019t it?<\/p>\n<p>The concept of context has been of huge interest in the computer science community. Over the last four decades, researchers and scientists have been trying &#8220;to relate information processing and communication to aspects of the situations\u00a0in which such processing occurs&#8221;[1]. 
As a result, a new field of computer science emerged \u2013 context-aware computing.<\/p>\n<p><iframe loading=\"lazy\" title=\"TEDxRyersonU - Hossein Rahnama - Ubiquitous Systems: Evolution of Context Aware Computing\" width=\"525\" height=\"295\" src=\"\/\/www.youtube.com\/embed\/i4TBHBLVMvw?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p>Context-aware computing is essentially the notion of having computer systems that are aware of and able to sense the context of the real world we live in. To achieve this pervasive approach, software systems have to use hardware sensors and act upon the stimuli they receive from the physical world. \u201cIt is essentially giving the eyes and ears of the computer to act and interact upon\u201d[3].<\/p>\n<h2><strong>How is context captured?<\/strong><\/h2>\n<p>Context-aware systems can be characterised as sensors integrated into a physical device (middleware) that is used extensively by context-aware applications. 
Each component in this architecture depends on the others, and\u00a0context data is only useful once it has been processed and presented in a way the user can understand.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-106 size-full\" src=\"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/121115_1211_ContextAwar3.png\" alt=\"\" width=\"331\" height=\"193\" srcset=\"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/121115_1211_ContextAwar3.png 331w, https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/121115_1211_ContextAwar3-300x175.png 300w\" sizes=\"auto, (max-width: 331px) 100vw, 331px\" \/><\/p>\n<p>A vivid example of sensor-captured data would be\u00a0\u2013 <span style=\"font-size: 1rem\">\u201c<\/span>A retina sensor that automatically turns on the screen when it detects a retina and turns off the screen otherwise in order to optimise battery consumption.<span style=\"font-size: 1rem\">\u201d<\/span> [7]<\/p>\n<p>Depending on the context to be captured, developing such applications inevitably requires deriving information from different kinds of sensors. 
Some of these are:\u00a0<a href=\"https:\/\/www.electronics-tutorials.ws\/io\/io_3.html\">Temperature sensor<\/a>, <a href=\"https:\/\/www5.epsondevice.com\/en\/information\/technical_info\/gyro\/\">Gyroscope sensor<\/a>, <a href=\"https:\/\/en.wikipedia.org\/wiki\/Pressure_sensor\">Pressure sensor<\/a>, <a href=\"https:\/\/en.wikipedia.org\/wiki\/Proximity_sensor\">Proximity sensor<\/a>, <a href=\"https:\/\/www.elprocus.com\/infrared-ir-sensor-circuit-and-working\/\">Infrared sensor<\/a>, <a href=\"https:\/\/www.elprocus.com\/optical-sensors-types-basics-and-applications\/\">Optical sensor,<\/a> etc.<\/p>\n<p>According to the source of the context, a device&#8217;s context-awareness can be categorised as:<\/p>\n<ul>\n<li>Direct\u00a0\u2013 when the context is captured by sensors<\/li>\n<li>Indirect\u00a0\u2013 when the context is gathered from other sources<\/li>\n<\/ul>\n<p>A more detailed picture of the architecture of context-aware computing can be obtained from an image taken from Dr.\u00a0Jianhua Ma&#8217;s lecture\u00a0\u2013\u00a0<span style=\"font-size: 1rem\">\u201c<\/span>Context-Aware Technologies, Systems and Applications<span style=\"font-size: 1rem\">\u201d<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-187 size-full\" src=\"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/architecture.png\" alt=\"\" width=\"1600\" height=\"1162\" srcset=\"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/architecture.png 1600w, https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/architecture-300x218.png 300w, https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/architecture-768x558.png 768w, https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/architecture-1024x744.png 1024w\" sizes=\"auto, (max-width: 706px) 89vw, (max-width: 767px) 82vw, 740px\" \/><\/p>\n<p><strong style=\"color: #666666;font-size: 1.25rem\">Context-aware applications\u00a0<\/strong><\/p>\n<p>The first approaches to context-aware computing were 
mainly focused on location-based services. <span style=\"font-size: 1rem\">\u201c<\/span>It was the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Global_Positioning_System\">GPS<\/a> network of the 1990&#8217;s that led to satellite navigation that we have in every vehicle nowadays<span style=\"font-size: 1rem\">\u201d[4].<\/span>\u00a0Using GPS freed users from the effort of positioning themselves on the map. Instead, the context derived from location carried that out on behalf of the user \u2013 but context can be more than just the present location.<\/p>\n<p>According to\u00a0Dey and Abowd, context can be categorised into four main types\u00a0\u2013\u00a0<span style=\"font-size: 1rem\">\u201c<\/span><strong>location, identity, activity and<\/strong> <strong>time<\/strong>.<span style=\"font-size: 1rem\">\u201d<\/span>[2]. GPS, as already reviewed, is essentially an example of the <strong>location<\/strong> category.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-184\" src=\"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/slide_3.jpg\" alt=\"\" width=\"1060\" height=\"795\" srcset=\"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/slide_3.jpg 960w, https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/slide_3-300x225.jpg 300w, https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/slide_3-768x576.jpg 768w\" sizes=\"auto, (max-width: 706px) 89vw, (max-width: 767px) 82vw, 740px\" \/><\/p>\n<p>The video below, taken from MIT Living Labs, demonstrates an implementation of context-aware dynamic lighting. It works by embedding proximity sensors at different points in the room to detect presence. 
Once presence is detected, the lighting in the room can be adjusted to suit the current <strong>activity<\/strong> of a person (desk\/studying, sofa\/taking a break, etc.).<\/p>\n<p><iframe loading=\"lazy\" title=\"MIT Living Labs: Context Aware Dynamic Lighting\" width=\"525\" height=\"295\" src=\"\/\/www.youtube.com\/embed\/TUHFe_YoZCk?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p>Another illustration of context-aware applications can be found in the field of privacy. It is not uncommon to find credentials being offered on the virtual black market. It is also no surprise that the traditional method of verifying\u00a0<strong>identity\u00a0<\/strong>\u2013 username and password\u00a0\u2013 seems more and more outdated nowadays.<\/p>\n<p>To offer a reasonable solution, Google has taken an innovative approach. As announced at the annual &#8220;Google Next&#8221; conference, Google is currently working on software that provides context-aware access &#8220;that looks beyond your credentials&#8221;[11] to determine identity. 
Essentially, context-aware access allows administrators to specify a set of signals (location\/IP address\/local time, etc.) that help improve the accuracy of the identification process in case someone is trying to steal your data.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-214 size-full\" src=\"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/sssssss.png\" alt=\"\" width=\"580\" height=\"445\" srcset=\"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/sssssss.png 580w, https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/sssssss-300x230.png 300w\" sizes=\"auto, (max-width: 580px) 100vw, 580px\" \/><\/p>\n<p>&#8220;The idea flips the notion of security responsibility on its head. Instead of requiring the user to be completely responsible for proving who they are, it puts the burden (and control) in the hands of the administrator where it makes more sense.&#8221;[11]<\/p>\n<h2><strong>Will technology be able to fully capture the context?<\/strong><\/h2>\n<p>Many companies are striving to implement innovative context-aware systems to enhance human-computer interaction. 
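<\/p>\n<p><em>As a concrete illustration, an administrator-specified access check of the kind described in the previous section might be sketched as a simple rule evaluation. This is a minimal sketch only \u2013 the policy fields, function name and matching rules are hypothetical assumptions, not Google&#8217;s actual implementation.<\/em><\/p>\n<pre><code>def allowed(request, policy):\n    # hypothetical check: every admin-configured signal must match the policy\n    return (request['country'] in policy['countries']\n            and any(request['ip'].startswith(p) for p in policy['ip_prefixes'])\n            and request['hour'] in policy['work_hours'])\n\npolicy = {'countries': {'UK'}, 'ip_prefixes': ['10.0.'], 'work_hours': range(8, 19)}\nallowed({'country': 'UK', 'ip': '10.0.3.7', 'hour': 14}, policy)  # all signals match\n<\/code><\/pre>\n<p><em>A real deployment would combine many more signals and weigh them probabilistically rather than as hard rules, but the principle \u2013 credentials plus context \u2013 stays the same.<\/em><\/p>\n<p>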
With the increasing availability of\u00a0the Internet and the decreasing cost of new devices, context-aware systems are likely to become more and more common in many aspects of our lives.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-116 size-full alignright\" src=\"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/Typical-human-computer-interaction-by-symbolic-encoding-Analog-channels-are-not-seen-by.png\" alt=\"\" width=\"850\" height=\"757\" srcset=\"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/Typical-human-computer-interaction-by-symbolic-encoding-Analog-channels-are-not-seen-by.png 850w, https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/Typical-human-computer-interaction-by-symbolic-encoding-Analog-channels-are-not-seen-by-300x267.png 300w, https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/Typical-human-computer-interaction-by-symbolic-encoding-Analog-channels-are-not-seen-by-768x684.png 768w\" sizes=\"auto, (max-width: 706px) 89vw, (max-width: 767px) 82vw, 740px\" \/><\/p>\n<p>This suggests that the scope of the context being covered and the information being stored will multiply over time. In return, such systems will offer various services that utilise the processed sensor data.<\/p>\n<p><span style=\"font-size: 1rem\">It should be clear that context differs from data in many ways. Unprocessed data might be as good as useless depending on the required context, and the context may be constantly changing when dealing with people. <\/span><span style=\"font-size: 1rem\">The key point here is that information given to us by\u00a0context-aware devices might be useful at the present moment, but fall out of context in the next minute. 
You don&#8217;t want your navigation system to tell you to take the left turn at the crossroads you have just passed, right?\u00a0<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-50\" src=\"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/funny.gif\" alt=\"\" width=\"400\" height=\"524\" \/><\/p>\n<p><span style=\"font-size: 1rem\">To succeed at fully capturing context, contextual systems should go beyond merely collecting dynamic information and focus more on teaching technology to differentiate between similar activities (skating\/biking, speaking\/shouting, etc.). Furthermore, researchers have to think about how to capture context that remains hidden from sensors\u00a0\u2013 human feelings, emotions, individual judgement, etc. The main issue is that\u00a0\u2013 \u201cCurrently technology can only capture context when it is obvious and statistically\/numerically greatly different to other contexts\u201d[6].<\/span><\/p>\n<p>Moreover, the uniqueness of each person&#8217;s character is another property that context-aware systems must somehow tackle if they are to fully capture context. In the context of security, each person will regard different aspects of their life as private and confidential. 
You cannot capture that using sensors alone.\u00a0This highlights the need for context-aware systems to be able to determine the privacy level of each user, which is essentially part of their unique personality.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-164 size-full alignright\" src=\"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/type-unique-personality-750x430.jpg\" alt=\"\" width=\"750\" height=\"430\" srcset=\"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/type-unique-personality-750x430.jpg 750w, https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/type-unique-personality-750x430-300x172.jpg 300w\" sizes=\"auto, (max-width: 706px) 89vw, (max-width: 767px) 82vw, 740px\" \/><\/p>\n<p>To conclude, I must admit that, having researched the topic, I could not arrive at a definitive answer to this question.<\/p>\n<p>On the one hand, human emotions, moods, behaviour and judgement are part of the constantly changing context, which means human-computer interaction will always require some form of user input in order to remain accurate.<\/p>\n<p>On the other hand, if we go back to the time when mobile phones consisted of one massive briefcase, no one imagined that this technology could become ubiquitous, evolving into smartphones that fit in our pockets.<\/p>\n<p>After all, who knows what is yet to come. What if computers soon do everything for us?<\/p>\n<p>This surely would be a dream come true for Dr. 
Mark Weiser and his idea of calm-computing.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-90 size-full\" src=\"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/weiser.jpg\" alt=\"\" width=\"850\" height=\"400\" srcset=\"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/weiser.jpg 850w, https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/weiser-300x141.jpg 300w, https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/files\/2018\/12\/weiser-768x361.jpg 768w\" sizes=\"auto, (max-width: 706px) 89vw, (max-width: 767px) 82vw, 740px\" \/><\/p>\n<h1>References<\/h1>\n<p>[1]\u201c<a href=\"http:\/\/citeseerx.ist.psu.edu\/viewdoc\/download;jsessionid=35713B275F811BD471D4089C474C80D9?doi=10.1.1.37.2933&amp;rep=rep1&amp;type=pdf\">There is more to Context than Location<\/a>\u201d, Albrecht Schmidt, Michael Beigl, and Hans-W. Gellersen.\u00a0[Accessed 3 Dec. 2018].<\/p>\n<p>[2]\u201c<a href=\"ftp:\/\/ftp.cc.gatech.edu\/pub\/gvu\/tr\/1999\/99-22.pdf\">Towards a Better Understanding of Context and<\/a><br \/>\nContext-Awareness\u201d, Anind K. Dey and Gregory D. Abowd [Accessed 3 Dec. 2018].<\/p>\n<p>[3]<a href=\"http:\/\/youtube.com\/watch?v=jYnViOb2K4A\">\u201cWhat is context aware computing\u201d<\/a>\u00a0, Albrecht Schmidt [Accessed 3 Dec. 2018].<\/p>\n<p>[4]<a href=\"https:\/\/www.interaction-design.org\/literature\/article\/a-brief-introduction-to-context-aware-computing\">\u201cA Brief Introduction to Context Aware Computing\u201d<\/a> ,\u00a0\u00a0Keith Cheverst\u00a0[Accessed 4 Dec. 2018].<\/p>\n<p>[5]<a href=\"https:\/\/www.youtube.com\/watch?v=i4TBHBLVMvw\">\u201cUbiquitous Systems: Evolution of Context Aware Computing\u201d<\/a>,\u00a0Hossein Rahnama [Accessed 4 Dec. 
2018].<\/p>\n<p>[6]<a href=\"https:\/\/nebulalabs.co.uk\/will-technology-ever-fully-capture-context-and-does-this-matter\/\">\u201c<\/a><i><a href=\"https:\/\/nebulalabs.co.uk\/will-technology-ever-fully-capture-context-and-does-this-matter\/\">Will technology ever fully capture context and does this matter ?\u201d<\/a>,\u00a0Dylan McKee\u00a0<\/i>[Accessed 6 Dec. 2018].<\/p>\n<p>[7]\u00a0<a href=\"http:\/\/resources.intenseschool.com\/a-beginners-guide-to-context-aware-systems\/?fbclid=IwAR17RLlqeSc53FvPBNwP_T_axPIxGimo4kbPYcr6QaPpfAROJ3O9b3KPhTs\">\u201cA Beginners Guide to Context Aware Systems<i>\u201d<\/i><\/a>, James Olorunosebi\u00a0[Accessed 7 Dec. 2018].<\/p>\n<p>[8]\u00a0<a href=\"https:\/\/www.finoit.com\/blog\/top-15-sensor-types-used-iot\/\">\u201cTop 15 Sensor Types Being Used in IoT<i>\u201d<\/i><\/a>,\u00a0\u00a0Rita Sharma [Accessed 8 Dec. 2018].<\/p>\n<p>[9]\u201c<a href=\"https:\/\/jianhua.cis.k.hosei.ac.jp\/course\/ubi\/Lecture09.pdf\">Context-Aware Technologies, Systems and Applications<\/a><i>\u201d<\/i>,\u00a0Jianhua Ma [Accessed 8 Dec. 2018].<\/p>\n<p>[10]\u201c<a href=\"https:\/\/www.youtube.com\/watch?v=TUHFe_YoZCk\">MIT Living Labs: Context Aware Dynamic Lighting<i>\u201d<\/i><\/a>, Kent Larson [Accessed 8 Dec. 2018].<\/p>\n<p>[11]<a href=\"https:\/\/techcrunch.com\/2018\/07\/25\/google-introduces-context-aware-access-to-supplement-traditional-logons\/\">\u201cGoogle introduces &#8216;Context-aware&#8217; identification to supplement traditional logons<i>\u201d<\/i>,<\/a> Ron Miller [Accessed 8 Dec. 2018]<\/p>\n<h2><strong>Word count without references &#8211; 1150<\/strong><\/h2>\n","protected":false},"excerpt":{"rendered":"<p>Abstract To supply the reader with the ability to answer this question the article will aim to clarify some of the important aspects of the context in the field of technology. What is context and how it is being captured?\u00a0 Will technology be able to fully capture the context? 
Does it matter if technologies fail &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/2018\/12\/02\/will-technology-ever-fully-capture-context-and-does-it-matter\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Will technology ever fully capture context and does it matter?&#8221;<\/span><\/a><\/p>\n","protected":false},"author":7868,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":true,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-4","post","type-post","status-publish","format-standard","hentry","category-uncategorised"],"_links":{"self":[{"href":"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/wp-json\/wp\/v2\/posts\/4","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/wp-json\/wp\/v2\/users\/7868"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/wp-json\/wp\/v2\/comments?post=4"}],"version-history":[{"count":206,"href":"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/wp-json\/wp\/v2\/posts\/4\/revisions"}],"predecessor-version":[{"id":12,"href":"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/wp-json\/wp\/v2\/posts\/4\/revisions\/12"}],"wp:attachment":[{"href":"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/wp-json\/wp\/v2\/media?parent=4"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/wp-json\/wp\/v2\/categories?post=4"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.ncl.ac.uk\/igeorgiev1\/wp-json\/wp\/v2\/tags?post=4"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}