easier R than SPSS with Rcmdr : Contents
ch.17 ROC
Let’s open the aSAH data.
The outcome is ideally coded as 0 and 1, but it sometimes appears as Good/Poor or Yes/No, as it does here. In that case you must interpret the result carefully: the level that comes later in alphabetical order is treated as 1, and the earlier one as 0. For Good/Poor, ‘Poor’ becomes 1; for Yes/No, ‘Yes’ becomes 1. In other words, in this case you can think of the outcome as data for diagnosing ‘Poor’.
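This level-to-number mapping can be checked directly in R. The sketch below assumes the aSAH data shipped with the pROC package:

```r
library(pROC)          # provides the aSAH example data
data(aSAH)

# The later factor level is treated as the event (1)
levels(aSAH$outcome)   # "Good" "Poor": so 'Poor' is coded as 1
```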
Select the menu to compare two ROC curves.
Specify the outcome variable, then the two diagnostic variables that you want to compare; that is, three variables in total.
Graphically represent two ROC curves.
When the two ROC curves are compared with DeLong’s test, p = 0.1643 indicates no statistically significant difference. The AUC of each of the two ROC curves is also calculated.
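The same comparison can be run in plain R with the pROC package. Which two diagnostic variables were chosen in the menu is not stated here, so the names s100b and ndka below are assumptions:

```r
library(pROC)
data(aSAH)

roc1 <- roc(aSAH$outcome, aSAH$s100b)   # first diagnostic variable (assumed)
roc2 <- roc(aSAH$outcome, aSAH$ndka)    # second diagnostic variable (assumed)

# Draw both curves on one plot
plot(roc1, col = "red")
plot(roc2, col = "blue", add = TRUE)

# DeLong's test for paired ROC curves; the output also shows each AUC
roc.test(roc1, roc2, method = "delong")
```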
Let’s specify 2 other variables.
Judging from the plot, the curve for ‘wfns’ lies higher.
When the two curves are compared, p = 0.001694 confirms a statistically significant difference.
Let’s reuse the ‘dataKM’ data that we used before.
Click ‘logistic regression’.
Use the 6 variables and select ‘Make propensity score variable’ as shown.
Now we use 5 variables to do the same thing. In other words, we created one probability model with ‘marker’ and one without ‘marker’.
The red part on the left is the model with ‘marker’, and the green part on the right is the model without ‘marker’. Now the material is ready.
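The two models can be sketched in plain R as follows. ‘dataKM’ is the data set named above, but the outcome and covariate names (event, age, sex, stage, grade) are hypothetical placeholders:

```r
# Model including 'marker' (covariate names are hypothetical)
fit.with <- glm(event ~ age + sex + stage + grade + marker,
                data = dataKM, family = binomial)

# The same model without 'marker'
fit.without <- glm(event ~ age + sex + stage + grade,
                   data = dataKM, family = binomial)

# Predicted probability from each model, saved as new variables
dataKM$prob.with    <- predict(fit.with,    type = "response")
dataKM$prob.without <- predict(fit.without, type = "response")
```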
Once you have performed logistic regression, you can draw a ROC curve to assess the diagnostic accuracy of the model. Diagnostic models other than logistic regression can also have their ROC curves plotted in the same way.
First, select ‘ROC ~’. This draws and analyzes one ROC curve; ‘compare ROC’ below it compares two ROC curves.
Select the event variable holding the actual values, and one variable holding the probabilities predicted by the model.
It plots sensitivity and specificity as a function of the threshold.
The ROC curve is also drawn. The point on the ROC called the optimal threshold, or cutoff value, is marked; it corresponds to one of the two choices shown earlier (the red square on the previous page).
The area under the curve (AUC for short, also called the C-statistic) is the area under the ROC curve, and it is reported with its 95% confidence interval.
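With pROC, one ROC curve, its optimal threshold, and the AUC with its 95% confidence interval can be obtained as follows (the variable names event and prob.with are assumptions carried over from the hypothetical model above):

```r
library(pROC)

r <- roc(dataKM$event, dataKM$prob.with)

plot(r, print.thres = "best")   # marks the optimal threshold on the curve
auc(r)                          # area under the curve (C-statistic)
ci.auc(r)                       # 95% confidence interval of the AUC

# Sensitivity and specificity at the optimal cutoff
coords(r, "best", ret = c("threshold", "sensitivity", "specificity"))
```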
Now let’s compare the 2 ROCs.
Select event and 2 predictors.
The two ROC curves have been drawn, and to the naked eye they do not seem very different.
DeLong’s test for the two ROC curves also tells us that there is no statistically significant difference (p = 0.4493).
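The comparison of the two model-based ROC curves can be sketched the same way, again with the assumed variable names:

```r
library(pROC)

roc.with    <- roc(dataKM$event, dataKM$prob.with)
roc.without <- roc(dataKM$event, dataKM$prob.without)

# DeLong's test between the two models' ROC curves
roc.test(roc.with, roc.without, method = "delong")
```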
In other words, these data show no difference between the predictions made with and without ‘marker’. Therefore, ‘marker’ does not appear to be valuable as a predictor of prognosis.
So far, we have used the ROC curve to describe one diagnostic method in detail, to compare two diagnostic methods, and to compare two logistic regression models.