  {"id":3311,"date":"2021-07-22T15:55:00","date_gmt":"2021-07-22T14:55:00","guid":{"rendered":"https:\/\/www.gironi.it\/blog\/?p=3311"},"modified":"2024-11-14T16:00:10","modified_gmt":"2024-11-14T15:00:10","slug":"multiple-regression-analysis-explained-simply","status":"publish","type":"post","link":"https:\/\/www.gironi.it\/blog\/en\/multiple-regression-analysis-explained-simply\/","title":{"rendered":"Multiple Regression Analysis, Explained Simply"},"content":{"rendered":"\n<p>The phenomena we observe and wish to study in order to deepen our understanding rarely present themselves so simply as to be defined by only two variables: one <strong>predictive<\/strong> (<em>independent<\/em>) and one <strong>response<\/strong> (<em>dependent<\/em>).<\/p>\n\n\n\n<p>Therefore, while <a href=\"https:\/\/www.gironi.it\/blog\/regressione-lineare-semplice\/\" data-type=\"post\" data-id=\"1807\">simple linear regression analysis<\/a> holds fundamental theoretical importance, in practice it provides little more information than simply studying the correlation coefficient.<\/p>\n\n\n\n<!--more-->\n\n\n\n<p>This is why, in scientific literature, <strong>multiple regression<\/strong> plays a predominant role, offering a comprehensive set of tools to explain the variation of our dependent variable for each predictive variable present in the model, as well as for the interactions among the independent variables.<\/p>\n\n\n\t\t\t\t<div class=\"wp-block-uagb-table-of-contents uagb-toc__align-left uagb-toc__columns-1  uagb-block-6b5b67d3      \"\n\t\t\t\t\tdata-scroll= \"1\"\n\t\t\t\t\tdata-offset= \"30\"\n\t\t\t\t\tstyle=\"\"\n\t\t\t\t>\n\t\t\t\t<div class=\"uagb-toc__wrap\">\n\t\t\t\t\t\t<div class=\"uagb-toc__title\">\n\t\t\t\t\t\t\tWhat We Will Discuss\t\t\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<div class=\"uagb-toc__list-wrap \">\n\t\t\t\t\t\t<ol class=\"uagb-toc__list\"><li class=\"uagb-toc__list\"><a href=\"#the-multiple-regression-equation\" 
class=\"uagb-toc-link__trigger\">The Multiple Regression Equation<\/a><li class=\"uagb-toc__list\"><a href=\"#what-insights-can-i-gain\" class=\"uagb-toc-link__trigger\">What Insights Can I Gain?<\/a><li class=\"uagb-toc__list\"><a href=\"#a-few-prerequisites-to-start\" class=\"uagb-toc-link__trigger\">A Few Prerequisites to Start<\/a><li class=\"uagb-toc__list\"><a href=\"#how-to-proceed-practically\" class=\"uagb-toc-link__trigger\">How to Proceed Practically?<\/a><li class=\"uagb-toc__list\"><a href=\"#lets-get-started\" class=\"uagb-toc-link__trigger\">Let&#039;s Get Started!<\/a><li class=\"uagb-toc__list\"><a href=\"#how-valid-is-my-model\" class=\"uagb-toc-link__trigger\">How valid is my model?<\/a><li class=\"uagb-toc__list\"><a href=\"#final-summary\" class=\"uagb-toc-link__trigger\">Final summary<\/a><li class=\"uagb-toc__list\"><a href=\"#free-resources-for-further-learning\" class=\"uagb-toc-link__trigger\">Free resources for further learning<\/a><\/ol>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\n\n\n<h2 class=\"wp-block-heading\">The Multiple Regression Equation<\/h2>\n\n\n\n<p>Let&#8217;s start with the multiple regression equation, which is essentially an &#8220;expansion&#8221; of the simple linear regression equation and has this general form:<\/p>\n\n\n\n\\(\ny = a_1 x_1 + a_2 x_2 + \\dots + a_i x_i + b\n\\)\n\n\n\n<p>where <br>y is the response variable (note: it is <strong>single<\/strong>)<br>a<sub>1<\/sub>,a<sub>2<\/sub>&#8230;a<sub>i<\/sub> are the regression coefficients for the predictive variables<br>x<sub>1<\/sub>,x<sub>2<\/sub>&#8230;x<sub>i<\/sub> are the predictive variables<br>b is the intercept (also <strong>single<\/strong>)<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">What Insights Can I Gain?<\/h2>\n\n\n\n<p>First, I need to <strong>understand how significantly my predictive variables, when combined, are related to the dependent variable, and what proportion of the outcome is explained by the 
combination of predictive variables used in the model.<\/strong><\/p>\n\n\n\n<p>Next, I need to understand how each predictive variable is linked to the dependent variable, while controlling for the other independent variable (assuming, for simplicity, that there are only two predictive variables).<\/p>\n\n\n\n<p>Then, I must determine which of my two predictors has the stronger impact in estimating the dependent variable.<\/p>\n\n\n\n<p>These and many other insights can be obtained through the tools provided by multiple regression analysis, and these initial steps already demonstrate the power and practical utility of this technique.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">A Few Prerequisites to Start<\/h2>\n\n\n\n<p>Multiple regression analysis is a powerful and widely used technique, but to use it correctly, certain fundamental assumptions must be met:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>The dependent variable must be measured on an interval or ratio scale, and the predictive variables must also be interval\/ratio or dichotomous.<\/li>\n\n\n\n<li>There must be a linear relationship between each predictive variable and the dependent variable.<\/li>\n\n\n\n<li>All variables in the regression analysis should exhibit a normal distribution.<\/li>\n\n\n\n<li>The predictive variables should not be highly correlated with each other (known as <a href=\"https:\/\/www.gironi.it\/blog\/multicollinearita-eteroschedasticita-autocorrelazione-tre-concetti-dai-nomi-difficili-spiegati-semplici\/\" target=\"_blank\" data-type=\"post\" data-id=\"2404\" rel=\"noreferrer noopener\">multicollinearity<\/a>). Prediction errors should be independent of each other.<\/li>\n\n\n\n<li>It is assumed that <a href=\"https:\/\/www.gironi.it\/blog\/multicollinearita-eteroschedasticita-autocorrelazione-tre-concetti-dai-nomi-difficili-spiegati-semplici\/\" target=\"_blank\" data-type=\"post\" data-id=\"2404\" rel=\"noreferrer noopener\">homoscedasticity<\/a> is present. 
In other words, the errors in predicting Y should have roughly constant spread across all levels of X.<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\">How to Proceed Practically?<\/h2>\n\n\n\n<p>My suggestion for gaining proficiency is to proceed step by step. Here\u2019s my recommended approach:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Plot scatterplots for each predictive variable against the response variable to assess the presence of a correlation.<\/li>\n\n\n\n<li>Calculate the multiple regression equation.<\/li>\n\n\n\n<li>Verify that the assumptions of normality and homoscedasticity are met.<\/li>\n\n\n\n<li>Analyze the coefficient of determination.<\/li>\n<\/ol>\n\n\n\n<p>For each of these steps, R proves to be, as usual, a highly reliable companion, capable of providing the necessary calculations with just a few commands.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Let&#8217;s Get Started!<\/h2>\n\n\n\n<p>Let\u2019s start R and dive into a practical example, keeping it as simple as possible. We\u2019ll only be scratching the surface here. The goal is to lay the foundation for exploring a vast topic, leaving the reader to delve deeper. So, without further ado, let\u2019s pick a sample dataset. <br>Among others, R provides <em>Longley\u2019s Economic Regression Data<\/em>, which I\u2019ll use here.<br>What is it? It\u2019s a <em>data frame<\/em> containing seven economic variables observed yearly from 1947 to 1962.<\/p>\n\n\n\n<p>Let\u2019s open RStudio, load the dataset, and take a first look:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">data(longley)\ndim(longley)\nhead(longley)<\/pre>\n\n\n\n<p>Now, let\u2019s examine the metrics. 
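<\/p>\n\n\n\n<p>Before settling on our predictors, a quick look at the structure of the data frame and at the pairwise correlations can help. A minimal sketch, using only base R functions:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">data(longley)\nstr(longley)\ncor(longley$GNP, longley$Employed)\ncor(longley$Population, longley$Employed)<\/pre>\n\n\n\n<p>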
For our example, we\u2019ll use the number of employed individuals as the response variable and the Gross National Product (GNP) and population as the independent predictive variables.<\/p>\n\n\n\n<p>Here\u2019s how the scatterplots look:<br><\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">plot(longley$Employed, longley$GNP)<\/pre>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"855\" height=\"540\" src=\"https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/impiegati-gnp.png\" alt=\"\" class=\"wp-image-2240\" srcset=\"https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/impiegati-gnp.png 855w, https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/impiegati-gnp-300x189.png 300w\" sizes=\"auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><figcaption class=\"wp-element-caption\">employees \/ gross national product<\/figcaption><\/figure>\n<\/div>\n\n\n<pre class=\"wp-block-preformatted\">plot(longley$Employed, longley$Population)<\/pre>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"855\" height=\"540\" src=\"https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/impiegati-popolazione.png\" alt=\"Residuals of multiple regression\" class=\"wp-image-2241\" srcset=\"https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/impiegati-popolazione.png 855w, https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/impiegati-popolazione-300x189.png 300w\" sizes=\"auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><figcaption class=\"wp-element-caption\">employees \/ population<\/figcaption><\/figure>\n<\/div>\n\n\n<p>The growing linear correlation is evident. Let\u2019s proceed with multiple regression. We already know the command: it&#8217;s lm(). 
The syntax for multiple predictor variables is as follows:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">regression = lm(Employed ~ GNP + Population, data=longley)\nsummary(regression)<\/pre>\n\n\n\n<p>The result is a wealth of information!<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">Call:\nlm(formula = Employed ~ GNP + Population, data = longley)\n\nResiduals:\n     Min       1Q   Median       3Q      Max \n-0.80899 -0.33282 -0.02329  0.25895  1.08800 \n\nCoefficients:\n            Estimate Std. Error t value Pr(&gt;|t|)    \n(Intercept) 88.93880   13.78503   6.452 2.16e-05 ***\nGNP          0.06317    0.01065   5.933 4.96e-05 ***\nPopulation  -0.40974    0.15214  -2.693   0.0184 *  \n---\nSignif. codes:  0 \u2018***\u2019 0.001 \u2018**\u2019 0.01 \u2018*\u2019 0.05 \u2018.\u2019 0.1 \u2018 \u2019 1\n\nResidual standard error: 0.5459 on 13 degrees of freedom\nMultiple R-squared:  0.9791,\tAdjusted R-squared:  0.9758 \nF-statistic: 303.9 on 2 and 13 DF,  p-value: 1.221e-11<\/pre>\n\n\n\n<p>First, let\u2019s note the p-value of the F-statistic. It&#8217;s very small (1.221e-11) and highly significant. Therefore, at least one of our predictor variables is statistically significantly related to the outcome variable. 
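<\/p>\n\n\n\n<p>If you want that overall p-value as a number (for instance, to report it programmatically), note that summary() stores the F-statistic and its degrees of freedom but not the p-value itself, so we compute it with pf(). A small sketch:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">regression = lm(Employed ~ GNP + Population, data = longley)\nfs = summary(regression)$fstatistic\n# upper tail of the F distribution: the model's overall p-value\npf(fs[\"value\"], fs[\"numdf\"], fs[\"dendf\"], lower.tail = FALSE)<\/pre>\n\n\n\n<p>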
So, let\u2019s proceed.<\/p>\n\n\n\n<p>Let\u2019s also plot the residuals:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">plot(longley$Year, summary(regression)$res, type='o')\nabline(h=0, lty=2, col=\"red\", lwd=2)<\/pre>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"855\" height=\"540\" src=\"https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/res.png\" alt=\"\" class=\"wp-image-2242\" srcset=\"https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/res.png 855w, https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/res-300x189.png 300w\" sizes=\"auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><\/figure>\n\n\n\n<p>Ideally, the sum of the residuals, or the differences between the actual and predicted values, should tend toward zero, or at least be as low as possible. A look at this section of the summary output tells us that this condition is met in this case:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"377\" height=\"83\" src=\"https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/residuals.png\" alt=\"\" class=\"wp-image-2247\" srcset=\"https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/residuals.png 377w, https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/residuals-300x66.png 300w\" sizes=\"auto, (max-width: 377px) 85vw, 377px\" \/><figcaption class=\"wp-element-caption\">the median of the residuals is close to zero, isn\u2019t it?<\/figcaption><\/figure>\n<\/div>\n\n\n<p>Let\u2019s visually check the assumption of normality for the residuals:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">hist(resid(regression),main='Histogram of Residuals',xlab='Standardized Residuals',ylab='Frequency')<\/pre>\n\n\n\n<p>The resulting plot shows that the normality assumption is met:<\/p>\n\n\n\n<figure class=\"wp-block-image 
size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"607\" height=\"383\" src=\"https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/istogramma_dei_residui.png\" alt=\"\" class=\"wp-image-2291\" srcset=\"https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/istogramma_dei_residui.png 607w, https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/istogramma_dei_residui-300x189.png 300w\" sizes=\"auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 984px) 61vw, (max-width: 1362px) 45vw, 600px\" \/><figcaption class=\"wp-element-caption\">the residuals meet the normality assumption<\/figcaption><\/figure>\n\n\n\n<p>It&#8217;s time to look in detail at what the summary output of our regression tells us.<\/p>\n\n\n\n<p>The median is close to zero, and the residuals are approximately normally distributed. So we can proceed.<\/p>\n\n\n\n<p>The values of the coefficients for our two predictor variables and the intercept can be found in our output:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"528\" height=\"155\" src=\"https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/coefficienti.png\" alt=\"\" class=\"wp-image-2248\" srcset=\"https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/coefficienti.png 528w, https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/coefficienti-300x88.png 300w\" sizes=\"auto, (max-width: 528px) 85vw, 528px\" \/><figcaption class=\"wp-element-caption\">R gives us&#8230; everything we need!<\/figcaption><\/figure>\n<\/div>\n\n\n<p>The estimated regression equation is thus:<\/p>\n\n\n\n\\(\ny = 0.06317 x_1 - 0.40974 x_2 + 88.9388\n\\)\n\n\n<p>Now let\u2019s pay attention to the p-values of the slope coefficients. The output shows that our independent variables are significant in predicting the value of the dependent variable, with p-values below the standard 0.05 threshold. 
The gross national product even has a value below the 0.001 level. <strong>We can therefore reject the null hypothesis (predictor variables are not significant) and validate our model, observing that GNP is the most \u201creliable\u201d element for estimation compared to the population<\/strong>. The asterisk notation in the output is also very useful, giving us a quick visual insight into the results.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/pvalue-1.png\" alt=\"\" class=\"wp-image-2251\" width=\"528\" height=\"156\" srcset=\"https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/pvalue-1.png 528w, https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/pvalue-1-300x89.png 300w\" sizes=\"auto, (max-width: 528px) 85vw, 528px\" \/><figcaption class=\"wp-element-caption\">Watch the asterisks!<\/figcaption><\/figure>\n<\/div>\n\n\n<p>We can also ask R to calculate the confidence interval for our model with the command:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">confint(regression)<\/pre>\n\n\n\n<p>In our example, we obtain this output:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">                  2.5 %       97.5 %\n(Intercept) 59.15805807 118.71953854\nGNP          0.04017053   0.08617434\nPopulation  -0.73841447  -0.08107138<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">How valid is my model?<\/h2>\n\n\n\n<p>At this point, a key player we have encountered in previous articles comes into play: the <strong>coefficient of determination<\/strong>, or r<sup>2<\/sup>.<\/p>\n\n\n\n<p>It provides essential information about how close the data points are to our regression line. In practical terms, it indicates what percentage of the &#8220;movements&#8221; in the dependent variable are explained by our predictor variables. 
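<\/p>\n\n\n\n<p>Both the coefficient of determination and its adjusted version can be extracted directly from the summary object, for instance:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">regression = lm(Employed ~ GNP + Population, data = longley)\nsummary(regression)$r.squared      # 0.9791\nsummary(regression)$adj.r.squared  # 0.9758<\/pre>\n\n\n\n<p>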
The value is standardized between 0 and 1, and it\u2019s clear that our model becomes more useful the closer this value gets to 1.<\/p>\n\n\n\n<p>R, as always remarkable, has already provided us with this useful value in the output. Here it is:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"494\" height=\"24\" src=\"https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/coefficiente_di_determinazione.png\" alt=\"\" class=\"wp-image-2255\" srcset=\"https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/coefficiente_di_determinazione.png 494w, https:\/\/www.gironi.it\/blog\/wp-content\/uploads\/2021\/07\/coefficiente_di_determinazione-300x15.png 300w\" sizes=\"auto, (max-width: 494px) 85vw, 494px\" \/><figcaption class=\"wp-element-caption\">when two variables explain almost everything\u2026<\/figcaption><\/figure>\n<\/div>\n\n\n<p>Okay, but what is the Adjusted R-squared value? This is the value to consider, as it solves a paradox with the r<sup>2<\/sup> value, which always increases as the number of variables increases (even if those variables aren\u2019t significant at all). The adjusted R-squared corrects this anomaly, providing a perfectly usable value (always lower than R-squared).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Final summary<\/h2>\n\n\n\n<p>I&#8217;ll keep it short: <strong>a high R-squared value and a very low, close-to-zero residual value indicate a good model<\/strong>.<\/p>\n\n\n\n<p>This is not everything; in fact, it\u2019s just the beginning. But it\u2019s the first step in mastering a tool like multiple linear regression analysis, which has great practical utility. I hope to leave you curious to explore further and go beyond (and below are some useful free online resources). 
Good luck!<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Free resources for further learning<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>You can <a href=\"https:\/\/hastie.su.domains\/ISLR2\/ISLRv2_website.pdf\" target=\"_blank\" rel=\"noopener\">freely download<\/a> the second edition of the excellent book &#8220;An Introduction to Statistical Learning&#8221; by <a href=\"http:\/\/faculty.marshall.usc.edu\/gareth-james\/\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Gareth James<\/strong><\/a> and others (Springer). It\u2019s a hefty PDF file of over 600 pages, in English.<\/li>\n\n\n\n<li>You can access the &#8220;<a href=\"http:\/\/www.cookbook-r.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Cookbook for R<\/a>&#8221; site, which also has sections dedicated to regression analyses.<\/li>\n\n\n\n<li>On the <a href=\"https:\/\/www.r-bloggers.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">R-bloggers<\/a> website, you can find articles and insights on any statistical and ML topic, with examples and R code.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>The phenomena we observe and wish to study in order to deepen our understanding rarely present themselves so simply as to be defined by only two variables: one predictive (independent) and one response (dependent). 
Therefore, while simple linear regression analysis holds fundamental theoretical importance, in practice it provides little more information than simply studying the &hellip; <a href=\"https:\/\/www.gironi.it\/blog\/en\/multiple-regression-analysis-explained-simply\/\" class=\"more-link\">Leggi tutto<span class=\"screen-reader-text\"> &#8220;Multiple Regression Analysis, Explained Simply&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_uag_custom_page_level_css":"","footnotes":""},"categories":[161],"tags":[284],"class_list":["post-3311","post","type-post","status-publish","format-standard","hentry","category-statistics","tag-regressione-multipla"],"lang":"en","translations":{"en":3311,"it":2225},"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false,"post-thumbnail":false},"uagb_author_info":{"display_name":"paolo","author_link":"https:\/\/www.gironi.it\/blog\/author\/paolo\/"},"uagb_comment_info":4,"uagb_excerpt":"The phenomena we observe and wish to study in order to deepen our understanding rarely present themselves so simply as to be defined by only two variables: one predictive (independent) and one response (dependent). 
Therefore, while simple linear regression analysis holds fundamental theoretical importance, in practice it provides little more information than simply studying the&hellip;","_links":{"self":[{"href":"https:\/\/www.gironi.it\/blog\/wp-json\/wp\/v2\/posts\/3311","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.gironi.it\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.gironi.it\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.gironi.it\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.gironi.it\/blog\/wp-json\/wp\/v2\/comments?post=3311"}],"version-history":[{"count":6,"href":"https:\/\/www.gironi.it\/blog\/wp-json\/wp\/v2\/posts\/3311\/revisions"}],"predecessor-version":[{"id":3317,"href":"https:\/\/www.gironi.it\/blog\/wp-json\/wp\/v2\/posts\/3311\/revisions\/3317"}],"wp:attachment":[{"href":"https:\/\/www.gironi.it\/blog\/wp-json\/wp\/v2\/media?parent=3311"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.gironi.it\/blog\/wp-json\/wp\/v2\/categories?post=3311"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.gironi.it\/blog\/wp-json\/wp\/v2\/tags?post=3311"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}