{"id":334,"date":"2020-05-22T16:54:27","date_gmt":"2020-05-22T15:54:27","guid":{"rendered":"https:\/\/blog.wnohang.net\/?p=334"},"modified":"2020-05-22T20:28:07","modified_gmt":"2020-05-22T19:28:07","slug":"gossips-in-distributed-systems-physalia","status":"publish","type":"post","link":"https:\/\/blog.wnohang.net\/index.php\/2020\/05\/22\/gossips-in-distributed-systems-physalia\/","title":{"rendered":"Gossips in Distributed Systems:  Physalia"},"content":{"rendered":"<span class=\"span-reading-time rt-reading-time\" style=\"display: block;\"><span class=\"rt-label rt-prefix\">Reading Time: <\/span> <span class=\"rt-time\"> 6<\/span> <span class=\"rt-label rt-postfix\">minutes<\/span><\/span>\n<p class=\"has-background has-very-light-gray-background-color\"><em>I often take notes and jot down observations when I read academic\/industry papers. While thinking of a name for this series, \u2018<strong>Gossips in Distributed Systems<\/strong>\u2019 seemed apt to me: it is inspired by the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Gossip_protocol\">gossip protocol<\/a> with which peers in these systems communicate with each other, mimicking the spread of ideas and technologies among practitioners and people alike. The goal of this series is to do a round-up of new concepts or papers in computer science (often in distributed systems, but not always) and share my thoughts and observations.<\/em><\/p>\n\n\n\n<p class=\"has-drop-cap\">Today, we are going to talk about the Physalia paper from AWS: \u201c<a href=\"https:\/\/assets.amazon.science\/c4\/11\/de2606884b63bf4d95190a3c2390\/millions-of-tiny-databases.pdf\">Millions of Tiny Databases<\/a>\u201d.&nbsp; The name is inspired by Physalia, or the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Portuguese_man_o%27_war\">Portuguese man-of-war<\/a> (pictured), a siphonophore, or a colony of organisms. 
Overall, the paper, though slightly on the longer side, is chock-full of details and best practices pertaining to the design, architecture, and testing of distributed systems. <\/p>\n\n\n\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\n<div class=\"wp-block-image is-style-default\"><figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/upload.wikimedia.org\/wikipedia\/commons\/c\/c3\/Portuguese_Man-O-War_%28Physalia_physalis%29.jpg\" alt=\"\" width=\"224\" height=\"307\"\/><figcaption>Credits: <a href=\"https:\/\/en.wikipedia.org\/wiki\/Portuguese_man_o%27_war\">https:\/\/en.wikipedia.org\/wiki\/Portuguese_man_o%27_war<\/a><\/figcaption><\/figure><\/div>\n<\/div><\/div>\n\n\n\n<p>Given the size of the paper and the wide gamut of topics it touches, we will discuss only a few aspects of the paper in this post, along with some observations. In subsequent installments, we will go into the others in further detail.&nbsp;<\/p>\n\n\n\n<p>Before proceeding, some background: the present EBS architecture with Physalia has a primary EBS volume (connected to an EC2 instance) and a secondary replica, with data flowing from the instance to the primary and then to the replica, in that order. Also, this <a href=\"http:\/\/dsrg.pdos.csail.mit.edu\/2013\/08\/08\/chain-replication\/\">chain replication<\/a> is strictly within an Availability Zone (AZ), mainly because inter-AZ latencies are prohibitive. 
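As a rough sketch (my own illustration, not AWS's actual code), chain replication works like this: writes enter at the head, propagate down the chain, and are acknowledged only once they reach the tail, so an acknowledgment implies every replica holds the write:

```python
# Toy sketch of chain replication (illustration only, not AWS's implementation).
# Writes flow head -> ... -> tail; the tail acknowledges, so reads served
# from the tail see only fully replicated data.

class ChainNode:
    def __init__(self, name):
        self.name = name
        self.store = {}
        self.next = None  # successor in the chain, None for the tail

    def write(self, key, value):
        self.store[key] = value
        if self.next is not None:
            return self.next.write(key, value)  # propagate down the chain
        return "ack"  # tail acknowledges: the value is on every replica

    def read(self, key):
        return self.store.get(key)

# A two-node chain mirroring EBS's primary -> secondary replica pair.
primary, secondary = ChainNode("primary"), ChainNode("secondary")
primary.next = secondary

assert primary.write("block-42", b"data") == "ack"
assert secondary.read("block-42") == b"data"  # tail has the replicated value
```

The key property is that the ack travels back only from the tail, which is what makes a lost or partitioned replica immediately visible to the writer.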
The pre-Physalia architecture had a similar replication chain but with the control plane also being part of EBS itself rather than a separate database (which, as we will soon find out, was not a good idea).<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blog.wnohang.net\/wp-content\/uploads\/2020\/05\/Screen-Shot-2020-05-22-at-4.40.40-PM-1024x982.png\" alt=\"\" class=\"wp-image-336\" width=\"358\" height=\"343\" srcset=\"https:\/\/blog.wnohang.net\/wp-content\/uploads\/2020\/05\/Screen-Shot-2020-05-22-at-4.40.40-PM-1024x982.png 1024w, https:\/\/blog.wnohang.net\/wp-content\/uploads\/2020\/05\/Screen-Shot-2020-05-22-at-4.40.40-PM-300x288.png 300w, https:\/\/blog.wnohang.net\/wp-content\/uploads\/2020\/05\/Screen-Shot-2020-05-22-at-4.40.40-PM-768x737.png 768w, https:\/\/blog.wnohang.net\/wp-content\/uploads\/2020\/05\/Screen-Shot-2020-05-22-at-4.40.40-PM.png 1132w\" sizes=\"auto, (max-width: 358px) 100vw, 358px\" \/><figcaption>Credits: Screenshot of figure in the Physalia paper<\/figcaption><\/figure><\/div>\n\n\n\n<h3 class=\"wp-block-heading\">Raison d&#8217;\u00eatre<\/h3>\n\n\n\n<p>All good-to-great systems have a story that necessitated their existence. In this case, it was an outage of the us-east-1 region in 2011, caused by overload and a subsequent cascading failure, which necessitated a more robust control plane for failure handling. The postmortem of that outage is <a href=\"https:\/\/aws.amazon.com\/message\/65648\/\">here<\/a>; it is quite long and wordy, so I will summarize it.&nbsp;<\/p>\n\n\n\n<!--more-->\n\n\n\n<p>In short, it started with a network change that turned off the primary network and overwhelmed the secondary one. This introduced a network partition in the EBS cluster, causing a sharp spike in re-mirroring requests. Re-mirroring involves re-designating the primary replica based on consensus between the EC2 instance, the volumes, and the control plane. 
Even after the network partition healed, there was a sudden spike (\u201cthundering herd\u201d) in re-mirroring requests from \u201cstuck\u201d volumes. Due to a bug in the EBS code handling a large number of requests, and a lack of backoff in request retries, nodes started failing, causing a sudden shortage of available EBS space. It also seems like the number of \u201cstuck\u201d volumes increased due to the control plane getting overwhelmed.<\/p>\n\n\n\n<p>Had it stopped at this, it would not have caused a region failure but only the failure of a single AZ (which is often acceptable for well-architected systems). Remember, the pre-Physalia control plane spanned multiple AZs. Due to this storm, the control plane queue was saturated with a large number of long-timeout requests. This meant any EBS API requests (such as <a href=\"https:\/\/docs.aws.amazon.com\/AWSEC2\/latest\/UserGuide\/ebs-creating-volume.html\">CreateVolume<\/a> used by new instance launches) from other AZs also started to fail or suffer high latencies. To restore order, the affected AZ was <a href=\"https:\/\/en.wikipedia.org\/wiki\/Fencing_(computing)\">fenced<\/a> from the EBS control plane, and degraded EBS clusters were isolated within that AZ to prevent it from degrading further. Interestingly, it seems like the EBS monitoring didn\u2019t alert on EC2 instance launch errors at the time since it was drowned in alerts from degraded EBS clusters. 
In addition to EC2 and EBS, this also affected RDS, which uses EBS internally.<\/p>\n\n\n\n<p>Note that when a volume is \u201cstuck\u201d, I\/O-bound processes on the system will be blocked on I\/O and can often end up in the \u2018D\u2019 state (uninterruptible sleep on Linux).<\/p>\n\n\n\n<p>The postmortem action items delve into various design changes for the long term, many of which (to quote one, \u201can opportunity to push more of our EBS control plane into per-EBS cluster services\u201d) culminated in the design of Physalia.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">High Level Architecture<\/h3>\n\n\n\n<p>Driven by the above desire to localize the control plane, reduction of blast radius seems to be the topmost priority in Physalia, and accordingly, Physalia lies in close proximity to its clients.&nbsp;&nbsp;<\/p>\n\n\n\n<p>Physalia is a collection of millions of databases, each of which is a transactional key-value store handling a partition key corresponding to a single EBS volume, and provides an API with <a href=\"https:\/\/jepsen.io\/consistency\/models\/strict-serializable\">strict serializability<\/a> for reads and writes. Also, it is <strong>infrastructure\/topology-aware<\/strong> &#8211; racks, datacenters, power domains &#8211; and is also placed in close proximity to the EBS primary and secondary replicas relying on it. The primary goal behind this proximity is to reduce the impact of network partitions while maintaining <strong>strong consistency<\/strong>. The focus is also on reducing <strong>blast radius<\/strong> <strong>without decreasing<\/strong> <strong>availability<\/strong>. The load profile is asymmetrical, i.e., when things are good there isn\u2019t much traffic, but during large-scale failures there is a bursty, latency-critical workload. 
In addition to other goals, <a href=\"https:\/\/en.wikipedia.org\/wiki\/AES-GCM-SIV\">misuse resistance<\/a> &#8211; ensuring that the system cannot easily be misused and that damage is limited under misuse &#8211; is also deemed important.<\/p>\n\n\n\n<p>Zooming in, Physalia is a colony of \u201ccells\u201d sharing various caches. Each cell (a logical construct) serves one EBS volume and is replicated 7 ways (a number empirically determined in the paper) with the Paxos protocol running between the nodes (nodes correspond to servers). Each node can host cells corresponding to different EBS volumes; this ensures that node failures do not bring down any single volume.&nbsp;&nbsp;<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blog.wnohang.net\/wp-content\/uploads\/2020\/05\/Screen-Shot-2020-05-22-at-4.42.25-PM-1024x549.png\" alt=\"\" class=\"wp-image-337\" width=\"437\" height=\"234\" srcset=\"https:\/\/blog.wnohang.net\/wp-content\/uploads\/2020\/05\/Screen-Shot-2020-05-22-at-4.42.25-PM-1024x549.png 1024w, https:\/\/blog.wnohang.net\/wp-content\/uploads\/2020\/05\/Screen-Shot-2020-05-22-at-4.42.25-PM-300x161.png 300w, https:\/\/blog.wnohang.net\/wp-content\/uploads\/2020\/05\/Screen-Shot-2020-05-22-at-4.42.25-PM-768x412.png 768w, https:\/\/blog.wnohang.net\/wp-content\/uploads\/2020\/05\/Screen-Shot-2020-05-22-at-4.42.25-PM.png 1156w\" sizes=\"auto, (max-width: 437px) 100vw, 437px\" \/><figcaption>Credits: Screenshot of figure in the Physalia paper<\/figcaption><\/figure><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Deployment and Operational Concerns<\/h2>\n\n\n\n<p>In addition to this, to protect against <a href=\"https:\/\/www.merriam-webster.com\/dictionary\/iatrogenic\">iatrogenic<\/a> causes (in other words, issues caused by interventions such as deployments, patching, etc.) 
of outage and downtime, nodes are assigned different colors, and nodes of different colors don\u2019t talk to each other. Cells are also constructed from nodes of the same color. To isolate against any failures during deployment within a datacenter, deployments proceed color by color. Since colors are assigned randomly to cells, this insulates against any specific software failure or the hot-spots prevalent in these systems (80% of the load coming from 20% of the clients, or a similarly tailed distribution).&nbsp;<\/p>\n\n\n\n<p>Since Physalia cannot distribute across multiple DCs or regions, crash-safety is a strong requirement. This reminds me of <a href=\"https:\/\/en.wikipedia.org\/wiki\/Crash-only_software\">crash-only software<\/a>. Hence, a custom implementation written in Java is used, which keeps state both in memory and on disk, probably like ZooKeeper, which the paper credits as well. It would be interesting to see if non-GC-heavy languages were considered, given the mentions of partitions from GC pauses and the workarounds for them. There may have been other constraints behind this decision.<\/p>\n\n\n\n<p>In terms of queuing, as noted earlier, batching and pipelining take place within each cell. To avoid performance degradation from increased contention and coherence, taking a leaf from the <a href=\"http:\/\/www.perfdynamics.com\/Manifesto\/USLscalability.html\">Universal Scalability Law<\/a> (USL), excess requests seem to be rejected outright. This dropping behavior reminded me of the <a href=\"https:\/\/en.wikipedia.org\/wiki\/CoDel\">CoDel algorithm<\/a> prevalent in networking systems. On the client front, it seems they now (as opposed to before or during the outage) implement jittered exponential backoff (one of the paper\u2019s authors has a blog post on <a href=\"https:\/\/aws.amazon.com\/blogs\/architecture\/exponential-backoff-and-jitter\/\">Exponential Backoff And Jitter<\/a>, which I highly recommend). 
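For the curious, here is a minimal sketch of the \u201cfull jitter\u201d variant described in that blog post (my own illustration, not the actual EBS client code): the sleep before retry `n` is drawn uniformly from `[0, min(cap, base * 2**n)]`, so simultaneous failures don't produce synchronized retry waves.

```python
import random
import time

def backoff_with_full_jitter(attempt, base=0.1, cap=10.0):
    """Sleep duration (seconds) before retry number `attempt`, using the
    "full jitter" strategy: uniform in [0, min(cap, base * 2**attempt)].
    The randomness spreads retries out in time, avoiding the synchronized
    "thundering herd" of clients all retrying at once after a failure."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def call_with_retries(operation, max_attempts=5):
    """Run `operation`, retrying on exception with jittered backoff."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the failure
            time.sleep(backoff_with_full_jitter(attempt))
```

Note how the cap bounds the worst-case sleep, which matters when the caller itself has a latency budget.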
With this, they are keeping the load bounded at the cost of higher latency (though I presume latency needs to be bounded too, given the tight constraints on Physalia).<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Store and API<\/h3>\n\n\n\n<p>In terms of API, it offers a typed key-value store with support for strictly serializable reads, conditional writes, atomic increments, and so on. There is also the provision to batch reads\/writes in a single transaction. Much like Cassandra, the API only allows addressing one partition key at a time, which is an acceptable optimization.<\/p>\n\n\n\n<p>It is interesting to note that floating-point data types are not supported, due to the non-portability of floating-point across hardware and software versions. This is not new and has been seen in other database systems in the past (MySQL <a href=\"https:\/\/bugs.mysql.com\/bug.php?id=87794\">bug#87794<\/a>). But I was surprised to see this in the portability-focused Java VM.&nbsp;<\/p>\n\n\n\n<p>Unsurprisingly, SQL is also not supported. In my opinion, there is simply no need to add SQL support if the client types fall into a very narrow band, as here. If this is open-sourced in the future, a SQL interface may be necessitated by other use cases.<\/p>\n\n\n\n<p>Linearizable and serializable consistency is provided to EBS clients. For caches, monitoring, and reporting, an eventually consistent mode (with monotonic reads and consistent prefix) is provided. Note that not all eventually consistent modes are the same, and there is a lot of nuance involved. I strongly recommend checking the <a href=\"https:\/\/jepsen.io\/consistency\">consistency model tree<\/a> from Jepsen.<\/p>\n\n\n\n<p>Time-based leases (which I gather are similar to etcd or ZooKeeper leases) are provided but are used in non-critical paths due to clock skew. 
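To see why skew makes time-based leases risky on critical paths, consider a toy sketch (hypothetical, not Physalia's implementation): the grantor and the holder each judge expiry by their own clock, so a holder with a slow clock can believe it still owns a lease the grantor has already expired.

```python
# Toy illustration of lease expiry under clock skew (not Physalia's code).
# Both sides evaluate validity against their own local clock.

class Lease:
    def __init__(self, granted_at, duration):
        self.granted_at = granted_at
        self.duration = duration

    def valid_at(self, local_clock):
        # Valid only while the local clock is before the expiry instant.
        return local_clock < self.granted_at + self.duration

lease = Lease(granted_at=100.0, duration=10.0)  # truly expires at t=110

true_time = 111.0               # the lease really expired a second ago
holder_clock = true_time - 3.0  # the holder's clock runs 3 seconds slow

assert not lease.valid_at(true_time)   # the grantor has expired it...
assert lease.valid_at(holder_clock)    # ...but the holder thinks it still holds it
```

This is exactly the window in which two parties can both believe they hold the lease, which is why such leases are kept off the critical path.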
Clock skew can introduce latency in quorum-based systems; the chapter on Unreliable Clocks in the <a href=\"https:\/\/dataintensive.net\/\">Designing Data-Intensive Applications<\/a> book elucidates this further.&nbsp;<\/p>\n\n\n\n<p>We have reached the end of this installment of Gossips in Distributed Systems. Next time, we will look at the reconfiguration of Physalia (which is important since, even in normal conditions, EC2 instances have a shorter life than volumes, implying detach\/attach of volumes between instances), topology awareness and placement, poison pills, testing with TLA+, and so on.<\/p>\n","protected":false},"excerpt":{"rendered":"<p><span class=\"span-reading-time rt-reading-time\" style=\"display: block;\"><span class=\"rt-label rt-prefix\">Reading Time: <\/span> <span class=\"rt-time\"> 6<\/span> <span class=\"rt-label rt-postfix\">minutes<\/span><\/span>I often take notes and jot down observations when I read academic\/industry papers. \u00a0 Thinking of a name for this series \u2018Gossips in Distributed Systems\u2019 seemed apt to me, inspired by the gossip protocol with which peers in these systems communicate with each other which mimics the spread of ideas and technologies among practitioners and &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/blog.wnohang.net\/index.php\/2020\/05\/22\/gossips-in-distributed-systems-physalia\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Gossips in Distributed Systems:  
Physalia&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1],"tags":[75,69,74,65,80,70,71,77,81,79,76,78,68,72],"class_list":["post-334","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-availability","tag-aws","tag-consistency","tag-distributed-systems","tag-distsys","tag-ebs","tag-ec2","tag-gossip","tag-incident","tag-network","tag-paper","tag-partition","tag-resilience-engineering","tag-storage"],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p3AlYV-5o","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"jetpack-related-posts":[{"id":319,"url":"https:\/\/blog.wnohang.net\/index.php\/2020\/05\/12\/circuit-breakers-stock-markets-and-distributed-systems\/","url_meta":{"origin":334,"position":0},"title":"Circuit Breakers:  Stock Markets and Distributed Systems","author":"Raghavendra","date":"May 12, 2020","format":false,"excerpt":"There are many parallels between the stock markets and the distributed systems in computer science. This post, in particular, is about circuit breakers prevalent in them for better resilience against\u00a0irrational exuberance\u00a0and upstream service errors respectively. 
In particular, this is about exploring breakers in stock markets from a distributed systems perspective.\u00a0\u2026","rel":"","context":"In &quot;musings&quot;","block_context":{"text":"musings","link":"https:\/\/blog.wnohang.net\/index.php\/category\/musings\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.wnohang.net\/wp-content\/uploads\/2020\/05\/Circuit_Breaker_115_kV.jpg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.wnohang.net\/wp-content\/uploads\/2020\/05\/Circuit_Breaker_115_kV.jpg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.wnohang.net\/wp-content\/uploads\/2020\/05\/Circuit_Breaker_115_kV.jpg?resize=525%2C300&ssl=1 1.5x"},"classes":[]},{"id":146,"url":"https:\/\/blog.wnohang.net\/index.php\/2014\/05\/04\/slides-plmce-2014-breakout-session\/","url_meta":{"origin":334,"position":1},"title":"Slides from PLMCE 2014 breakout session","author":"Raghavendra","date":"May 4, 2014","format":false,"excerpt":"As many of you already know, PLMCE is an annual MySQL community conference and Expo organized by Percona in the month of April (usually). It is a great conference, not only to meet new and eminent people in MySQL and related database fields, but also to attend interesting talks, and\u2026","rel":"","context":"In \"ACID\"","block_context":{"text":"ACID","link":"https:\/\/blog.wnohang.net\/index.php\/tag\/acid\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":59,"url":"https:\/\/blog.wnohang.net\/index.php\/2014\/04\/30\/saving-form-data\/","url_meta":{"origin":334,"position":2},"title":"Saving form data in firefox","author":"Raghavendra","date":"April 30, 2014","format":false,"excerpt":"When commenting on sites, I have sometimes, seen that the commenting system just swallows the comment, or there is a browser crash, or a system one. In these cases it would be great if you can recover it somehow, particularly when you typed quite a bit. 
There are plugins for\u2026","rel":"","context":"Similar post","block_context":{"text":"Similar post","link":""},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":349,"url":"https:\/\/blog.wnohang.net\/index.php\/2022\/12\/11\/weekend-with-chatgpt\/","url_meta":{"origin":334,"position":3},"title":"Weekend with ChatGPT","author":"Raghavendra","date":"December 11, 2022","format":false,"excerpt":"A few days ago, OpenAI released a chat-based model called\u00a0ChatGPT\u00a0and provided an interface for users to interact with. ChatGPT is a form of conversational AI where you can ask questions or have a conversation with a bot backed by a model. As per the announcement - The dialogue format makes\u2026","rel":"","context":"In \"ai\"","block_context":{"text":"ai","link":"https:\/\/blog.wnohang.net\/index.php\/tag\/ai\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.wnohang.net\/wp-content\/uploads\/2022\/12\/Screenshot-2022-12-11-at-20.11.10.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.wnohang.net\/wp-content\/uploads\/2022\/12\/Screenshot-2022-12-11-at-20.11.10.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.wnohang.net\/wp-content\/uploads\/2022\/12\/Screenshot-2022-12-11-at-20.11.10.png?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.wnohang.net\/wp-content\/uploads\/2022\/12\/Screenshot-2022-12-11-at-20.11.10.png?resize=700%2C400&ssl=1 2x, https:\/\/i0.wp.com\/blog.wnohang.net\/wp-content\/uploads\/2022\/12\/Screenshot-2022-12-11-at-20.11.10.png?resize=1050%2C600&ssl=1 3x, https:\/\/i0.wp.com\/blog.wnohang.net\/wp-content\/uploads\/2022\/12\/Screenshot-2022-12-11-at-20.11.10.png?resize=1400%2C800&ssl=1 4x"},"classes":[]},{"id":294,"url":"https:\/\/blog.wnohang.net\/index.php\/2020\/05\/09\/haiku-and-muffin-top\/","url_meta":{"origin":334,"position":4},"title":"Haiku and Muffin Top","author":"Raghavendra","date":"May 9, 2020","format":false,"excerpt":"My interest in haikus was 
recently rekindled by James May\u2019s Our Man in Japan series in which he frequently bookends the episodes with a haiku of his own. Accordingly, I started searching for a haiku ebook on Libby (which if you are not using, you should give it a try!)\u2026","rel":"","context":"In &quot;musings&quot;","block_context":{"text":"musings","link":"https:\/\/blog.wnohang.net\/index.php\/category\/musings\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.wnohang.net\/wp-content\/uploads\/2020\/05\/Haiku-and-the-muffin-top-e1588958394533.jpg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.wnohang.net\/wp-content\/uploads\/2020\/05\/Haiku-and-the-muffin-top-e1588958394533.jpg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.wnohang.net\/wp-content\/uploads\/2020\/05\/Haiku-and-the-muffin-top-e1588958394533.jpg?resize=525%2C300&ssl=1 1.5x"},"classes":[]}],"_links":{"self":[{"href":"https:\/\/blog.wnohang.net\/index.php\/wp-json\/wp\/v2\/posts\/334","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.wnohang.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.wnohang.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.wnohang.net\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.wnohang.net\/index.php\/wp-json\/wp\/v2\/comments?post=334"}],"version-history":[{"count":3,"href":"https:\/\/blog.wnohang.net\/index.php\/wp-json\/wp\/v2\/posts\/334\/revisions"}],"predecessor-version":[{"id":339,"href":"https:\/\/blog.wnohang.net\/index.php\/wp-json\/wp\/v2\/posts\/334\/revisions\/339"}],"wp:attachment":[{"href":"https:\/\/blog.wnohang.net\/index.php\/wp-json\/wp\/v2\/media?parent=334"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.wnohang.net\/index.php\/wp-json\/wp\/v2\/categories?post=334"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.wnohang.net\/index.p
hp\/wp-json\/wp\/v2\/tags?post=334"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}