I need to analyse the results of a messaging trial where the value of interest is the click-through rate (CTR) on social media ads. I tested 8 ads and a control and I have the CTR for each one (e.g. message one had a CTR of 1%, message two had a CTR of 2%). The issue is I don't know if just using an ANOVA will be OK, because I literally just have nine figures (the CTRs) and not a full dataset with multiple rows that the ANOVA would use to take an average and then compare that average to see if there is a statistically significant difference. Can anyone weigh in on how I can understand whether there is a difference between my groups?
I am trying to analyze some preliminary data for a conference. I currently have two animal participants completing a series of tasks. The tasks use different types of feedback (4 levels), which vary in the order (4 levels) in which they are completed. One animal has completed 20 tasks and the other has only completed 12, so I have unequal sizes. My dependent variable is the total number of trials it takes them to complete the task (the criterion is when they reach 80% accuracy). My independent variable is the feedback type (categorical variable). I need to include the order as a covariate. Because I have only 2 participants, I am reading that I cannot (or rather, should not) run an ANCOVA. So I am looking for an alternative test that will work for such a small sample size with uneven groups. Any advice would be greatly appreciated. I am working in SPSS if that helps.
We know that it's better to standardize the training data (i.e. X_train) before fitting a LASSO model, especially when features are not on the same scale (Ref. Is standardisation before Lasso really necessary?). But after fitting a LASSO model, when doing the prediction with new data, do we still need to rescale the testing data (i.e. X_test)? And if so, how do we properly rescale the unseen future data?
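To make the question concrete, here is a minimal sketch of what I have in mind, in R with glmnet (the object names X_train, y_train and X_test are just placeholders): the idea would be that the test data are rescaled with the centres and scales learned from the training data, never with their own.

library(glmnet)

# centre and scale using the TRAINING data only
ctr <- colMeans(X_train)
scl <- apply(X_train, 2, sd)
X_train_std <- scale(X_train, center = ctr, scale = scl)

fit <- cv.glmnet(X_train_std, y_train, alpha = 1)  # LASSO

# apply the SAME training centres/scales to the unseen data before predicting
X_test_std <- scale(X_test, center = ctr, scale = scl)
pred <- predict(fit, newx = X_test_std, s = "lambda.min")

(I am aware glmnet can also standardize internally via standardize = TRUE; the sketch is just to pin down what "rescaling the test data" means when it is done by hand.)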
I fitted my data with a GLMM ("poisson" family) in R, and I got a z value of 2.278. But after I put the model into a post-hoc analysis ("Dunn" adjustment) with "emmeans", the z value became -3.350. Why would the z value change? Thanks!
Suppose that for $i=1,2, \ldots, N$ and $j=1, \ldots, n$, the r.v.'s $Y_{i j} \sim \mathcal{N}\left(\vartheta_i, \sigma^2\right)$ are mutually independent, where the parameters $\{\vartheta_i\}_{i=1}^N$ and $\sigma^2$ are unknown. Problem 1: If $n=2$ and $N$ gets large, show that the MLE $\hat{\sigma}^2$ converges in probability but is not consistent. I've shown that $\hat{\sigma}^2 = \frac{1}{nN}\sum_{i=1}^N \sum_{j=1}^n (Y_{ij} - \hat{\vartheta_i})^2$, and so plugging in n = 2 and simplifying I get: $\hat{\sigma}^2 = \frac{1}{4N} \sum_{i=1}^N (Y_{i1} - Y_{i2})^2$. I was thinking about showing what it converges to in probability, and then if it is not $\sigma^2$, then it is not consistent. But I am a little confused on how to go about showing what it converges to in probability. Could I use the LLN and CLT in some way? Problem 2: Is the MLE $\hat{\sigma}^2$ for $\sigma^2$ consistent if $n=1+[\log (N)]$ and $N \rightarrow \infty$? I have no clue how to begin with this one because the log term is throwing me off.
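Edit: here is my attempt at the LLN step, please check the reasoning. Since $Y_{i1}-Y_{i2}\sim \mathcal{N}(0, 2\sigma^2)$ and the differences are i.i.d. across $i$ with finite mean, the weak law of large numbers should give $$\frac{1}{N}\sum_{i=1}^N \frac{(Y_{i1}-Y_{i2})^2}{4} \;\xrightarrow{\;p\;}\; \frac{\mathbb{E}\left[(Y_{11}-Y_{12})^2\right]}{4} = \frac{2\sigma^2}{4} = \frac{\sigma^2}{2},$$ so the MLE would converge in probability to $\sigma^2/2 \neq \sigma^2$, i.e. it converges but not to the right value. Is that the intended argument?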
I have an imbalanced fraud dataset with ~$1.4$% fraudulent samples across 50,000 rows with 600 columns. I'm performing a binary classification task on this dataset. I've performed an EDA; some columns are $99$% the same value. The $1$% of entries that are not the same have a lower fraud rate of ~$1$%. I believe that makes this column predictive of "non-fraud". But is that useful to me when the dataset's average is so strongly predictive of fraud anyway? My intuition is no, because of the imbalance. I think it's okay to drop these columns from the feature selection process. Is that a sensible practice when dealing with a highly imbalanced dataset? Edit: This is a follow up to this question: What should you do with near constant columns? I consider this question different because I'm asking about a specific subset of near-constant columns that I believe should be removed, given the imbalanced nature of my dataset. Instead of asking about near-constant columns being removed in general.
I run competitive events. In our normal event, we have 8 adjudicators split between two categories: Skill and Artistry. For each category we throw out the high and low scores and average the remaining two scores for the final result. This helps eliminate bias. I'm trying to find a similar method when there are only 6 judges, three in each category. I've looked at the standard deviation and quartiles to remove outliers, but I'm not satisfied that either is the best choice. My concern is that the standard deviation still uses the original mean to determine what is an outlier. Part of me feels that the outlier should be determined based on the "new mean", but then the calculations become so opaque that my team can't follow/duplicate them for future events. With the 1st and 3rd quartile, it seems even more arbitrary to toss the top and bottom 25%, but that may be my own cognitive bias. I'm curious... how would you do this if you were me? Note: in the small groups that I've tested, I've found that the results don't vary from taking the simple mean of the three results unless extreme examples are tried... perhaps this isn't worth the effort? I use Google Sheets for the calculations, if that's at all helpful.
Do any of the YOLO models use NMS during training? From going through the papers, they only explicitly state that NMS is applied during inference, unlike Faster-RCNN. Do any YOLO (or other computer vision) algorithms filter out low-confidence boxes during training based on some threshold before computing the loss? Are there pluses/minuses to doing so?
I have some categorical columns in a fraud dataset I'm working on that are composed of 99% one category and 1% the other. On inspection these categories contain the exact same percentage of frauds, ~1.4%, which is in line with my dataset's average. If the categories present nearly the same information in terms of the target, are they meaningfully different?
Scikit-learn allows us to fit Gaussian processes $GP(0,K(.,.))$ such that $K:T\times T \to \mathbb{R}$ is a covariance function (kernel), however it doesn't let us specify a mean function $m: T \to \mathbb{R}$. I'm trying to work solely with scikit-learn, so working with only the tools it provides. Is it a reasonable approach to fit a mean function using another method such as multiple linear regression and then fit a Gaussian process to the residuals? Or should I do something like Iterated Weighted Least Squares (IWLS), where you alternate between fitting the fixed effects under the current estimate of the covariance matrix and then re-estimating the covariance function parameters using the current residuals?
I am currently working on my thesis and I have run into some issues regarding interpretation of my analysis. I wanted to find out whether relationship between my IV and DV differs based on gender (two categories - male and female). In statistics classes I have been advised to do quick t-test before running the regression analysis to find out if the two groups (male and female) differ in the DV. So I have done the t-test and it is nonsignificant. Then I ran the moderated regression and gender did not significantly predict DV and also the interaction was not significant. But also I expected small gender differences based on literature and I do not have the appropriate sample size (N=212 and I would need twice that much) so that nonsignificant interaction might have happened due to that. This brings me to the t-test - can I still consider the t-test results to be valid when suggesting there are no gender differences in DV? In the discussion I would probably conclude that the interaction did not occur due to lacking in sample size but can I still interpret the t-test that is suggesting that there are no gender differences?
I am new to power analysis in multi-level models. I am looking for a possibility to do a power analysis for the following 2-level model: Y = y00 + y10*D1 + y20*D2 + y01*Z + y11*D1*Z + y21*D2*Z. In this model, I investigate the effect of time (D1 and D2) and an experimental condition, as well as their interaction effect, on my outcome variable. Time is measured three times and integrated as dummy-coded contrasts in the model (D1 and D2). The experimental condition is also dummy-coded. I tried to work with the instructions for a power analysis in 2-level models by Arend & Schäfer (2019) (see R code attached). However, I do not know how to create the conditional variances for my model, and I think there must be a mistake in the model. I would be very happy to get your advice. Thanks a lot! R code:

#Specifying standardized input parameters
alpha.S <- .05 #Alpha level
Size.clus <- 3 #L1 sample size
N.clus <- 200 #L2 sample size
L1_DE_standardized <- .30 #L1 direct effects
L2_DE_standardized <- .50 #L2 direct effect
CLI_E_standardized <- .50 #CLI effects
ICC <- .50 #ICC
rand.sl <- .09 #Standardized random slope

#Creating variables for power simulation in z-standardized form
#Creates a dataset with two L1-predictors x and one L2-predictor Z; all predictors are dichotomous
Size.clus <- 3 #L1 sample size
N.clus <- 200 #L2 sample size
EG <- rep(c(0,1), each = 300)
x <- scale(rep(1:Size.clus))
g <- as.factor(1:N.clus)
X <- cbind(expand.grid("x" = x, "g" = g))
X <- cbind(X, EG)
X$D1 <- recode(var = X$x, recodes = "-1 = 0; 0 = 1; 1 = 0")
X$D2 <- recode(var = X$x, recodes = "-1 = 0; 0 = 0; 1 = 1")

#Adapting the standardized parameters
varL1 <- 1 #L1 variance component
varL2 <- ICC/(1-ICC) #L2 variance component
varRS1 <- rand.sl*varL1 #Random slope variance tau 11
varRS2 <- rand.sl*varL1 #Random slope variance tau 22
L1_DE <- L1_DE_standardized*sqrt(varL1) #L1 direct effect
L2_DE <- L2_DE_standardized*sqrt(varL2) #L2 direct effect
CLI_E <- CLI_E_standardized*sqrt(varRS) #CLI effect

#Creating conditional variances
#I don't know how to calculate this conditional variance with two L1 predictors
s <- sqrt((varL1)*(1-(L1_DE_standardized^2))) #L1 variance
V1 <- varL2*(1-(L2_DE_standardized^2)) #L2 variance
rand_sl.con <- varRS1*(1-(CLI_E_standardized^2)) #Random slope variance

#Creating a population model for simulation
b <- c(0, L1_DE, L1_DE, L2_DE, CLI_E, CLI_E) #vector of fixed effects (fixed intercept, L1.1 direct, L1.2 direct, L2 direct, CLI.1 effect, CLI.2 effect)
V2 <- matrix(c(V1,0,0, 0,rand_sl.con,0, 0,0,rand_sl.con), 3) #Random effects covariance matrix with covariances set to 0

# there must be a mistake some steps before so that the model doesn't work
model <- makeLmer(y ~ D1 + D2 + EG + D1:EG + D2:EG + (D1 + D2 | g), fixef = b, VarCorr = V2, sigma = s, data = X) #Model creation
Consider independent observations ${(y_i, x_{1i}, x_{2i}) : 1 ≤ i ≤ n}$ from the regression model $y_i = β_1x_{1i} + β_2x_{2i} + e_i, i = 1, . . . , n$ ,where $x_{1i}$ and $x_{2i}$ are scalar covariates, $β_1$ and $β_2$ are unknown scalar coefficients, and $e_i$ are uncorrelated errors with mean $0$ and variance $\sigma^2 > 0$. Instead of using the correct model, we obtain an estimate $\hat{β_1} $ of $β_1$ by minimizing $\sum_{i=1}^n (Y_i -\beta_1X_{1i})^2$.Find the bias and mean squared error of $\hat{β_1} $. By minimizing $\sum_{i=1}^n (Y_i -\beta_1X_{1i})^2$ I found that $\hat{β_1}=\frac{\sum_{i=1}^n y_ix_{1i}}{\sum_{i=1}^n x_{1i}^2}$. Now, Bias of $\hat{β_1}$ is $E(\hat{β_1})-\beta_1$. I'm not sure how to find $E(\frac{\sum_{i=1}^n y_ix_{1i}}{\sum_{i=1}^n x_{1i}^2})$. Can someone please suggest any reading material to learn how to find this or provide hints?
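Edit: my attempt so far, treating the covariates as fixed constants — is this the right direction? By linearity of expectation, $$\mathbb{E}\big[\hat{\beta}_1\big] = \frac{\sum_{i=1}^n x_{1i}\,\mathbb{E}[y_i]}{\sum_{i=1}^n x_{1i}^2} = \frac{\sum_{i=1}^n x_{1i}\,(\beta_1 x_{1i} + \beta_2 x_{2i})}{\sum_{i=1}^n x_{1i}^2} = \beta_1 + \beta_2\,\frac{\sum_{i=1}^n x_{1i}x_{2i}}{\sum_{i=1}^n x_{1i}^2},$$ so the bias would be the last term, and the MSE would then be this bias squared plus $\operatorname{Var}(\hat{\beta}_1) = \sigma^2/\sum_{i=1}^n x_{1i}^2$. Is that correct?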
How to compute the conditional variance of a sum of three normally distributed random variables given two other random variables? Assume pairwise correlations exist and the following joint distributions: $X\sim \mathcal N(\mu_X,\sigma_X^2)$; $Y\sim \mathcal N(\mu_Y,\sigma_Y^2)$; $Z\sim \mathcal N(\mu_Z,\sigma_Z^2)$; $\theta_1\sim \mathcal N(\mu_{\theta_1},\sigma_{\theta_1}^2)$; $\theta_2\sim \mathcal N(\mu_{\theta_2},\sigma_{\theta_2}^2)$; The multidimensional linear projection theorem can be directly applied to find the conditional mean and variance of random variable/partition of random variables A given random variable/partition of random variables B, but what if A is a combination of three random variables? I am new to the community, so please do not close the question without detailing what else is needed. Thanks.
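Edit: to clarify what I am after, and to write down my current guess. If I define $W = X+Y+Z$, then (assuming all five variables are jointly Gaussian, not just pairwise) $(W,\theta_1,\theta_2)$ is again jointly Gaussian, so I believe the same projection theorem should apply with $$\operatorname{Var}(W\mid \theta_1,\theta_2) = \sigma_W^2 - \Sigma_{W\theta}\,\Sigma_{\theta\theta}^{-1}\,\Sigma_{\theta W},$$ where $$\sigma_W^2 = \sigma_X^2+\sigma_Y^2+\sigma_Z^2 + 2\operatorname{Cov}(X,Y)+2\operatorname{Cov}(X,Z)+2\operatorname{Cov}(Y,Z)$$ and $\Sigma_{W\theta} = \big(\operatorname{Cov}(W,\theta_1),\ \operatorname{Cov}(W,\theta_2)\big)$ with $\operatorname{Cov}(W,\theta_k) = \operatorname{Cov}(X,\theta_k)+\operatorname{Cov}(Y,\theta_k)+\operatorname{Cov}(Z,\theta_k)$. Is this correct, or am I missing something?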
I'd like to estimate an integral of the following form using Monte Carlo method: $$ \int_{t_1}^{t_2} g(t) \left[ \int_{- \infty}^{\infty} f(t, u) du \right] ^\gamma dt$$ In case of $\gamma$ being a positive integer (say, $2$) I can rewrite it as follows: $$ \int_{t_1}^{t_2} \int_{- \infty}^{\infty} \int_{- \infty}^{\infty} g(t) f(t, u_1) f(t, u_2) du_1 du_2 dt$$ That is, I can simply take two independent unbiased estimates of the inner integral and multiply them. My question is: could something similar be done in case of fractional $\gamma$?
I am learning GAMs by myself. I tried to find posts that could help me find and understand the smooth functions, i.e., the basis functions, from a gam fit using mgcv. From a post I found that I can use smoothCon to construct the basis functions without extracting them from the model. However, when I did so, I observed that the smoothCon output and the model's smooth don't match. Here is a reproducible example of what I did:

library(mgcv)
set.seed(1)
x.1 = runif(80)
x.2 = runif(80)
fx.1 = sin(x.1*.2)
fx.2 = cos(x.2)
fx = fx.1 + fx.2
y = rbinom(80, 1, fx/(1+exp(fx)))
gam.m = gam(y ~ s(x.1, bs="cr", k=4) + s(x.2, bs="cr", k=4), family=binomial, method="REML")

Now I extracted the basis functions:

smooth.mgcv = predict.gam(gam.m, type="lpmatrix")
smooth.x.1 = smooth.mgcv[,2:4] #basis functions of x.1#

and then from smoothCon:

smooth.create = smoothCon(s(x.1, bs='cr', k=4), data=data.frame(x.1=x.1), knots=NULL)

but these two are giving different output. How do these basis functions work?
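Edit: one thing I would like to have confirmed, since I am only guessing here — I read that the design matrix returned by predict.gam(type="lpmatrix") has the sum-to-zero identifiability constraint absorbed into the basis, whereas smoothCon() by default does not absorb it. If that is the reason for the mismatch, would the following be the right comparison?

smooth.create.c = smoothCon(s(x.1, bs='cr', k=4), data=data.frame(x.1=x.1),
                            knots=NULL, absorb.cons=TRUE)[[1]] # absorb the centering constraint
head(smooth.create.c$X) # constrained basis: should these columns match smooth.x.1 (up to ordering)?
head(smooth.x.1)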
If we have a recorded percentage (a statistic) for a population, then when we take random samples we might not reproduce that percentage until, in the extreme, we exhaust the population. E.g. a box with $500000$ balls of which $25000$ ($5$%) are red and the rest are all white. My question is: what is the minimum/maximum sample size needed so that we can expect to see that $5$% of red balls we know exists in the larger population? And not specifically for $500000$, but for any population larger than $50000$. Are there any online tools/tables already available that can provide the sizes for any population size $n$? If there is also a margin of error, that is OK too as long as it is configurable/reported.
I have made several models (RF, XGB and GLM) to predict a binary outcome and they all achieved an AUC of approximately 0.8 and Brier scores 0.1-0.15. Test set is fairly small (n= 350), cases with outcome are (n=50). I am trying to create calibration plots in RStudio and I am getting results that I don't understand. At first I tried predtools, classifierplots and runway as in the example below and got results that all looked like the plot below: Model: RF_model <- randomForest(outcome ~ ., data = TRAIN_data) RF_prediction$pred <- predict(RF_model, TEST_data, type = "prob")[,"no"] and for the calibration plot (with the "probably" package): RF_prediction %>% cal_plot_breaks (outcome, pred) The sudden dive towards zero looked wrong to me.. I tried searching for more information and after reading the excellent(https://towardsdatascience.com/introduction-to-reliability-diagrams-for-probability-calibration-ed785b3f5d44) I realized I was probably using an incorrect data format and tried using relative frequencies instead. This created a nice S-shaped curve that looked too perfect and uniform to be believed. Finally I found (Create calibration plot in R with vectors of predicted and observed values) on this site and ended up with the curves below after using the rms package and following syntax: RF_model <- randomForest(outcome ~ ., data = TRAIN_data) TEST_data$pred <- predict(RF_model, TEST_data, type = "prob")[,"no"] plot <- val.prob(TEST_data$pred, TEST_data$outcome) with the curve for the RF model as above: together with the curves for the other two models: I need help understanding the following: Do the syntax and the plots seem correct? How do I interpret the way the curves “stop” at different predicted probabilities? How to remove the annoying “overall” legend by the curves..?? I need to write the names of the models instead! (I managed to get rid all statistic data text with “logistic.cal = FALSE, statloc = FALSE” on the val.prob command and “flag = 0” on the plot.) and on a more general note I have seen the terms reliability diagram and calibration plot used interchangeably. Are they the same thing with different names or is there some subtle difference that is lost on me..?
I am reading this blog post where the author talks about diffusion models. Let's keep diffusion out of the conversation for now. The author showcased that we can parameterize a Gaussian distribution by a desired standard deviation sigma as shown below: I didn't get how the sigma value is inserted here. Can someone please elaborate this in detail?
I'm currently researching a moderation model in SPSS with gender as the moderator. When I run PROCESS model number 1 (moderation) and choose the option "only continuous variables that define products", the result is: interaction effect not significant & main effect not significant. When running the model with "all variables that define products", the result is: interaction effect not significant & main effect significant. How??? I think the best option is "only continuous variables" because of the categorical moderator... I hope someone can help me with the choice between "only continuous variables" vs. "all variables". Thanks in advance!
When we have a response variable that is binary and our interest is in how the probability is associated with covariates, we use logistic regression. But I specifically want to know about logistic generalized additive models (GAMs). Suppose I have data with a binary response. How can I know whether I need a linear logistic model or a logistic GAM here? For other data types, I can plot a scatter plot of the response vs. a covariate, and if I see a nonlinear pattern I can use a GAM. But with a binary response I do not have that option. (Adding the mgcv tag as anyone using it may have an understanding of this issue.)
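For example, is the following the right kind of comparison to make? This is only a sketch (df, y and x are placeholder names for my data frame, binary response and covariate):

library(mgcv)
m_lin <- gam(y ~ x,    family = binomial, method = "REML", data = df)
m_gam <- gam(y ~ s(x), family = binomial, method = "REML", data = df)
summary(m_gam) # would an effective df (edf) near 1 for s(x) suggest the linear fit is enough?
AIC(m_lin, m_gam)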
For a data matrix $X$ of dimension $n \times p$ where $p > n$ and corresponding label vector $y$ of dimension $n$, the standard least squares fit, $\hat{\beta} = (X^TX)^{-1}X^Ty$, is underdetermined. A typical approach is to use the Moore-Penrose inverse to find the min-norm solution by solving $\hat{\beta}^{\text{mn}} = X^T(XX^T)^{-1}y$. Specifically, this finds $\min_\hat{\beta} ||\hat{\beta}||^2_2$ such that $X \hat{\beta} = y$. My question is if the min-norm solution described above for a data matrix $X$ is equivalent to solving standard least squares for some $n \times n$ basis of $X$? In other words, if we call this basis $B$ could we solve $\hat{\beta}^{\text{basis}} = B^T(BB^T)^{-1}y$ such that $\hat{\beta}^{\text{basis}}$ and $\hat{\beta}^{\text{mn}}$ are equivalent even for a test point not in $X$?
As far as I know [source], $$t_{\widehat{\beta}} = \frac{\widehat{\beta}}{\widehat{SE_{\beta}}}.$$ It means the sign of the t-value should be the same as the sign of beta. In Table S1 of Shen (2018), the signs are different. Why? Did I miss something? I have noticed the passage referenced by user @utobi before. But (1) the first row in Table S1: N17_N15, Beta: 0.054, SE: 0.016, t.value: -3.403, Valence of connection: + clearly did not followed the pattern. (2)More importantly, a comparison of Figure 1 and Table S1 indicate the beta values in Table S1 means connection strength already, so there is no need to multiply sign one more time. For example, Table S1: N24_N4 -0.066 (sign of mean value connection is negative) Figure 1: Note the caption: “Red lines are the connections where strength was positively associated with cognitive performance, and blue lines denote negative associations with cognitive performance” (3) t-value is used to calculate p-value, and their relationship is symmetry around 0. So changing the sign would mean nothing. Also, if we interpret "Valence of connection" and "95% CI of value of connection" in Table S1-S3 as the sign and CI for the values of connection, they should be the same across the 3 tables. However, they sometimes agree and sometimes not. Table S1 N45_N15 + 1.233 1.291 Table S2 N45-N15 + 1.233 1.291 Table S1 N17_N15 + 1.215 1.275 Table S2 N17-N15 - -0.825 -0.784 Table S1 N24_N4 - -1.136 -1.075 Table S2 N24-N4 + 0.588 0.651
Description of background: Consider a 2d random walk with drift: $$X(t) = \sum_{k=1}^t X_k \\ Y(t) = \sum_{k=1}^t Y_k$$ where each $X_k$ and $Y_k$ are independently exponentially distributed with rate $\lambda = 0.5$. Let's define $Y^\star$ as $Y(t^\star)$, where $t^\star$ is the time when $X(t)$ passes some boundary line. This looks as in the figure produced by the code below; the curve added to the histogram is the density of a non-central chi-squared distribution. A motivation for this random walk is in the question: Compound Poisson Distribution with sum of exponential random variables. Question: My question is whether there is an intuitive explanation for why $Y^\star$ is non-central chi-squared distributed (with 2 degrees of freedom and non-centrality parameter equal to the boundary value). Code for the plot (the settings n, m and bound were not given with the original code, so illustrative values are added here):

### simulation settings (illustrative values only; not from the original post)
n = 10^4     # number of random walks
m = 100      # number of steps per walk
bound = 10   # boundary value for x(t)

### prepare empty plot
plot(-10,-10, xlim = c(0,30), ylim = c(0,30), xlab = "x(t)", ylab = "y(t)", main = "random walks")

### prepare empty variable
ystar = rep(0,n)

### compute random walks and add them to the plot
### also, compute ystar
for (i in 1:n) {
  x = cumsum(rexp(m,0.5))
  y = cumsum(rexp(m,0.5))
  lines(x,y, col = rgb(0,0,0,0.01))
  hit = min(which(x>bound))
  ystar[i] = c(0,y)[hit]
}
lines(c(bound,bound),c(-100,100), col = 2) ### add boundary line

### create histogram
hist(ystar, freq = F, breaks = seq(-0.5,max(ystar)+1.5,0.5), xlim = c(0,40), main = "histogram of y*", xlab = "y*")

### add a curve for the non-central chi-squared distribution
zs = seq(0,50,0.1)
lines(zs,dchisq(zs,0,bound))
Let $X_i$ be an iid sample from $X\sim N(\mu,\sigma^2)$. I try to find the generalized likelihood ratio test of $H_0: \sigma^2=\sigma_0^2$ v.s. $H_1: \sigma^2\neq \sigma_0^2$ with $\mu$ unknown. My work: I try to find the likelihood ratio statistic: \begin{align} \lambda(x) &= \frac{\sup_{\theta=\theta_0}L(\theta\mid X)}{\sup_{\theta\neq\theta_0}L(\theta\mid X)} \end{align} For the global MLE case, I know that $$ \sup_{\theta\neq\theta_0}L(\theta\mid X)=L(\hat{\mu},\hat{\sigma}^2)=(\frac{1}{\sqrt{2\pi\hat{\sigma}^2}})^{n} \exp[-\frac{1}{2\hat{\sigma}^2}\sum_{i=1}^n (X_i-\bar{X})^2] $$ where $\hat{\mu}$ is the sample mean and $\hat{\sigma}^2=\frac{1}{n}\sum_i (X_i-\bar{X})^2$. But for the restricted MLE, I am a little bit confused. Since $\theta=\theta_0$ means $(\mu,\sigma^2)=(\mu, \sigma_0^2)$, then $$ \sup_{\theta=\theta_0}L(\theta\mid X)=\sup L(\mu_0, \sigma_0^2)? $$ Is $\mu_0=\bar{X}$ and $\sigma_0^2=\frac{1}{n}\sum_i (X_i-\bar{X})^2$? So this will be the same as in the global case...
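Edit: my current guess for the restricted part — under $H_0$ only $\mu$ is free, with $\sigma^2$ fixed at $\sigma_0^2$, so the restricted maximiser should be $\hat{\mu}_0=\bar{X}$ and $$\sup_{\theta\in\Theta_0} L(\theta\mid X)=L(\bar{X},\sigma_0^2)=\Big(\frac{1}{\sqrt{2\pi\sigma_0^2}}\Big)^{n}\exp\Big[-\frac{1}{2\sigma_0^2}\sum_{i=1}^n (X_i-\bar{X})^2\Big],$$ so the ratio $\lambda(x)$ reduces to a function of $\hat{\sigma}^2/\sigma_0^2$ only, rather than being the same as the global case. Is that right?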
I have run a Kruskal Wallis test in my dataset and have a significant overall result (p<<0.001). However, since the medians for my categories are the same, I can't work out the direction of the trends. Post hoc tests have revealed which groups are different, but I'm not sure how to work which groups are affected more by my treatments than the others - is there any way to work this out? Thank you in advance!
I'm running a bayesian paired t-test using JASP. In the software, it says that one of the assumptions is that "The difference scores are normally distributed in the population". I'm a little confused as to why this is true since the prior is a Cauchy distribution, not normal. Please let me know if you know why! Thanks!
Suppose we have a dataset $D := \{(x_i, y_i)\}^n_{i=1}$ where $x_i = [x_{i_1}, x_{i_2}, \ldots , x_{i_p}]^T$ is a p-dimensional predictor and $y_i \in \mathbb{R}$ is the response to $x_i$. Now, shall we select as our parameter vector $\beta = [\beta_1, \ldots , \beta_p]$ and choose our model to be $\hat{y}_i = \beta^T x_i$, or select as our parameter vector $\beta^{'} = [1, \beta_{1}, \ldots ,\beta_{p}]$ and choose our model to be $\hat{y}_i = \beta^{'T}x^{'}_i$ where $x^{'}_i = [1, x_{i_1}, \ldots ,x_{i_p}]^T$? I think that padding a 1 as the first element of the $\beta$ vector is wrong; this will make the bias term 1 all the time, which should not be the case.
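Edit: to be explicit about the alternative I have in mind — my understanding is that the usual convention is to pad the data vector with a 1 and keep the intercept as a free parameter, i.e. $$x^{'}_i = [1, x_{i_1},\dots,x_{i_p}]^T,\qquad \beta^{'} = [\beta_0,\beta_1,\dots,\beta_p]^T,\qquad \hat{y}_i = \beta^{'T}x^{'}_i = \beta_0 + \sum_{k=1}^p \beta_k\, x_{i_k},$$ so the constant 1 sits in the data vector while $\beta_0$ is estimated from the data, rather than fixing the first entry of $\beta^{'}$ to 1. Is that the correct reading?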
I measured changes in the appearance of chronic wounds. There are four levels of this variable that can be ordered from worst to best as necrotic < sloughy < granulating < epithelialising, and for each time point I have the proportions of patients at each level. The wounds were measured at baseline on the whole set of patients (n = 57), after 2 weeks (n = 56), 4 wks (n = 54), 6 wks (n = 45), ... 12 wks (n = 18). The patients healed throughout the study, and therefore n decreases over time. How do I test whether the variable at a particular time improved relative to the baseline?
I need to analyse one dataset composed of two experiments. The goal is to identify sick individuals (animals) based on different parameters. These parameters have been collected on different farms (one time per farm), during two data collection campaigns. I have very limited information on one of the campaigns, and I have some reasons to believe that the method used in some farms (but I don't know which) may have biased the results. I have identified two problems: (1) Sick animals are expected to have extreme values in most of the parameters, and are rare, so if I search for outliers I will probably find only sick animals. (2) It is completely normal if some farms have more sick animals than others (they are just more at risk). I need to have an idea of whether there is bias in some farms. I know it sounds tricky, but does someone have a solution for this?
Given the constants $\{a,b,c,d,e,f$}, I want to compute the conditional mean $\text{E}[Z|S_1,S_2]$ and the conditional variance $\text{Var}[Z|S_1,S_2]$, with: $Z=a+bX_1+cX_2+dY_1+eY_2+fY_3$ Is the following true? $\text{E}[Z|S_1,S_2]=a+b\text{E}[X_1|S_1,S_2]+c\text{E}[X_2|S_1,S_2]$ and $\text{Var}[Z|S_1,S_2]=b^2\text{Var}[X_1|S_1,S_2]+c^2\text{Var}[X_2|S_1,S_2]+d^2\sigma_{Y_1}^2+e^2\sigma_{Y_2}^2+f^2\sigma_{Y_3}^2+bc\text{Cov}[X_1,X_2|S_1,S_2]+2de\text{Cov}[Y_1,Y_2]+2df\text{Cov}[Y_1,Y_3]+2ef\text{Cov}[Y_2,Y_3]$ where $\text{Cov}[X_1,X_2|S_1,S_2]=\text{E}[X_1X_2|S_1,S_2]-\text{E}[X_1|S_1,S_2]\text{E}[X_2|S_1,S_2]$ Assume $S_1=X_1+\epsilon_{X_1}, S_2=X_2+\epsilon_{X_2}$ (where $\epsilon_{X_1}\sim \mathcal N(0,\sigma_{\epsilon_{X_1}}^2)$ and $\epsilon_{X_2}\sim \mathcal N(0,\sigma_{\epsilon_{X_2}}^2)$) and the following joint distributions: $\begin{pmatrix} X_1 \\ X_2 \end{pmatrix}$ $\sim \mathcal N$ $\bigg(\begin{pmatrix} \mu_{X_1} \\ \mu_{X_2} \end{pmatrix}, \begin{pmatrix} \sigma_{X_1}^2 & \rho_{X_1X_2}\sigma_{X_1}\sigma_{X_2}\\ * & \sigma_{X_2}^2 \end{pmatrix}\bigg)$ $\begin{pmatrix} Y_1 \\ Y_2 \\ Y_3 \end{pmatrix}$ $\sim \mathcal N$ $\Bigg(\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} \sigma_{Y_1}^2 & \rho_{Y_1Y_2}\sigma_{Y_1}\sigma_{Y_2} & \rho_{Y_1Y_3}\sigma_{Y_1}\sigma_{Y_3}\\ * & \sigma_{Y_2}^2 & \rho_{Y_2Y_3}\sigma_{Y_2}\sigma_{Y_3}\\ * & * & \sigma_{Y_3}^2 \end{pmatrix}\Bigg)$ Assume also that $(X_1,X_2)$ and $(S_1,S_2)$ are independent from $(Y_1,Y_2,Y_3)$.
I am considering the model: $$ y_t = \beta_0\left(\Pi_{i=0}^{K}x_{i,t}^{\beta_i}\right)\left(\Pi_{j = K+1}^{L}e^{\beta_{j}x_{j,t}}\right) $$ where we want to have a multiplicative effect between some variables and a linear effect for the others. My question is the following: in this model, how can we interpret and estimate the coefficients? I mean, we cannot keep the interpretation from the simple linear regression model, where the coefficient $\beta_i$ represents the impact of the explanatory variable $x_{i,t}$ on the explained variable $y_t$. The advantage of such a model is to separate the explanatory variables into two sets, in order to have something realistic when we should have $y_t = 0$ because one variable $x_{i,t} = 0$ in the set of fundamental explanatory variables, so there is a kind of "interaction" between variables.
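Edit: writing the model on the log scale, which is how I have been trying to think about it: $$\log y_t = \log\beta_0 + \sum_{i=0}^{K}\beta_i \log x_{i,t} + \sum_{j=K+1}^{L}\beta_j x_{j,t},$$ so my guess is that the $\beta_i$ of the first set would be interpreted as elasticities (the percentage change in $y_t$ for a 1% change in $x_{i,t}$), the $\beta_j$ of the second set as semi-elasticities, and the model could be estimated by OLS on $\log y_t$ — provided the $x_{i,t}$ of the first set are strictly positive. Is that the right way to see it?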
Say I have a set of measurement values $y_\text{m} = (y_{\text{m},1}, \dots y_{\text{m},N}) $, and compare these with some ground truth $y = (y_1, \dots y_N)$. Then, if I understood correctly, I can estimate the (sample) variance of the error $(y_\text{m} - y)$ in my measurement values as \begin{equation} \sigma^2 = \frac{\sum_{i=1}^N (y_{\text{m},i} - y_i)^2}{N-1} \end{equation} (and, if I'm working with a fit instead of ground truth, this changes to the mean square error, with $N-2$ in the denominator). According to the explanation here, with a sufficiently well-behaved error, I could even compute a confidence interval for $\sigma^2$. Now, if I'm interested in the relative error, expressed as a percentage, $100 \times (y_\text{m} - y)/y$, how do these estimates for $\sigma^2$ and MSE change? And can I still compute a confidence interval in that case? EDIT: Sorry, I just realised that in the ground truth case the variance doesn't change, because ground truth is not a stochastic variable. So my question only pertains to the MSE case. EDIT 2: Doing a further search based on @Demetri Pananos's comments, it looks like the answers here, here, and here may help me get started
If I were estimating the treatment effect in a linear regression model. Assuming there isn't a risk of collider bias and multicollinearity, would it be acceptable to add in as many covariates as possible to control for the remaining variation unexplained by the treatment effect variable? Or would the risk of including variables that are spuriously correlated invalidate any inference made on the adjusted treatment effect variable? Also, in a non-linear model, because of non-collapsibility, is it in fact preferred to do this to control for all unexplained variation that biases the treatment effect variable from the lack of an error term? Any help would be really appreciated, thanks!
Can anyone explain to me how the commonly used sample-size rule of $20\times (p+q)$ comes about? Here $p$ is the number of parameters in the final model and $q$ is the number of parameters that may have been examined but discarded along the way. Any credible reference is appreciated.
Below you can view the univariate sales dataset for a particular product with x3 promotional interventions/campaigns (highlighted in grey and green); each promotion campaign stretched for a length of 14 days. My main objective is to eventually measure the post-period uplift of all 3 campaigns separately, but to keep complexity at a minimum, I figured it might be better to focus this question on only analysing 1 campaign correctly and then just apply the logic to the other two campaigns afterwards. The promotion campaign I want to analyse with the causalimpact-library, is highlighted in green (campaign #2). Questions: How long should the pre-period and post-period be to analyze promo campaign #2 in the best possible way? More specifically, which of the 4 options is the best to use (if at all)? Note that: Option 1 considers shorter time-frames, but excludes possible adverse effects due to previous campaigns being run. If I do consider to use longer pre-period ranges (Option 3 & 4), should I include the data as is or should I recreate a baseline by handling the outliers and promo spikes the same way as for example the FB Prophet library before building a forecasting model, by either: Drop the data for the outlier & promo campaign 1, and create a new baseline for the pre-period? Set the data for the outlier & promo campaign 1 to zero, and create a new baseline for the pre-period? Set the data for the outlier & promo campaign 1 to none, and create a new baseline for the pre-period? Use another approach, one which I have not considered above? Thanks in advance!
I am a bit lost on the correct approach to this statistical problem. Let me explain the context: I have a cohort population with longitudinal measurements over a period of several years. In this population, a subsample has been wearing an accelerometer to measure physical activity (considered the gold standard), and all of the other subjects have been answering a questionnaire regarding physical activity. My goal is to predict, or set a rule to approximate, the accelerometer score using the questionnaire data and the set of predictors (such as socio-economic variables, etc.) of each participant. The outcome variable is a physical activity score derived from either the accelerometer or the questionnaire. Any ideas on how to approach this problem in a cross-sectional or longitudinal form? I was thinking about multivariate linear mixed models, but I feel like some option of training with the questionnaire data and validating against the accelerometer could also be good. Anything will be very welcome!
I wonder if anyone can help me to decide what steps to undertake in making a mixed-effects model in R. My data consists of 5 treatments of which the effects are being measured in 10 consecutive time points. All subjects (20) receive 4 of the 5 possible treatments in a block-randomized fashion. I want to construct a model in lme4. How would I go about this?
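As a starting point, would something along these lines be reasonable? This is only a sketch — response, treatment, time and subject are placeholder column names, and I am unsure about the random-effects part:

library(lme4)
# repeated measures: fixed effects for treatment, time and their interaction,
# with a random intercept (and optionally a random slope for time) per subject
m1 <- lmer(response ~ treatment * time + (1 | subject), data = dat)
m2 <- lmer(response ~ treatment * time + (time | subject), data = dat)
anova(m1, m2) # compare the two random-effects structures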
I have been taught that if an estimator is unbiased, then its convergence in probability can be proven by taking the limit of its variance as the sample size grows to infinity and showing it is equal to 0. However, considering the definition of the variance and the fact the estimator is unbiased, this would also prove mean square convergence, which is a stronger form of convergence. I was therefore wondering if the two concepts are equivalent if the estimator is unbiased, or the method I outlined above is just a sufficient but not necessary condition for convergence in probability of an unbiased estimator. Basically, is it possible to define an unbiased estimator whose variance does not tend to 0, but nonetheless converges in probability?
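Edit: a candidate construction I have been playing with (please check whether it is sound). Take $\bar X_n$, unbiased for a mean $\theta$, and add independent noise $W_n$ with $$P(W_n = n) = P(W_n = -n) = \frac{1}{2n}, \qquad P(W_n = 0) = 1-\frac{1}{n}.$$ Then $T_n = \bar X_n + W_n$ is still unbiased since $\mathbb{E}[W_n]=0$, and $W_n \xrightarrow{p} 0$ so $T_n \xrightarrow{p} \theta$, yet $\operatorname{Var}(W_n) = n \to \infty$, so $\operatorname{Var}(T_n)$ does not tend to $0$. Does this show that the vanishing-variance condition is sufficient but not necessary?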
The context is about the use of a given model deviance (often referred to as “Residual deviance” in R) and that of its “Null deviance” to calculate D2, the deviance explained for models with non-normal error. This metric (D2) is somehow an approximation of what a coefficient of determination (R2) is for a linear model, providing an approximate idea of the variation explained by your model, although it is probably better described as a value reflecting how close its fit is from being perfect when compared to a saturated model, see: https://bookdown.org/egarpor/PM-UC3M/glm-deviance.html See also p. 166-167 of Guisan & Zimmermann (2000) for the calculation of D2 and its related adjusted version: https://www.wsl.ch/staff/niklaus.zimmermann/papers/ecomod135_147.pdf With most packages, such as “stats” and its glm() function, “MASS” and glm.nb(), and “mgcv” and gam(), those two deviance-related values can easily be obtained with "deviance" and "null.deviance" (preceeded by a dollar symbol - see further below). The gam() function actually also provides D2 directly in its output. For the glmmTMB package and its glmmTMB() function, I haven't been able to extract the residual deviance nor the null deviance to calculate D2. Does anyone has an idea of how to achieve this? Here's an example of what I'm looking for, using real count data from a sample of walleyes through a monitoring program. The idea is to extract the deviance-related information for a same model using each function/package when fitting these with the same error structure (NB2, link=log) and see how they compare, especially when contrasted to glmmTMB (not being able to extract such information from this package). Walleye catch curve example: From the "descending limb of a catch curve" in fisheries, one can model the rate at which counts decrease with age to estimate the instantaneous mortality (Z: the absolute value of the age coefficient) on the log scale from a sample of randomly-captured fish. The age-frequencies data are as follow: age<-seq(1,15,by=1) count<-c(151,56,117,10,12,21,8,2,2,1,2,0,1,1,2) walleye<-data.frame(age,count) These data are over-dispersed (variance > mean) and as such, the Poisson family distribution is inadequate. Using the glm.nb() function of the "MASS" package allows to model the variance in "extra" as follow: summary(m.walleye.nb2<-glm.nb(count~age,data=walleye)) Call: glm.nb(formula = count ~ age, data = walleye, init.theta = 3.114212171, link = log) Deviance Residuals: Min 1Q Median 3Q Max -1.6522 -0.8761 -0.2121 0.5188 1.8561 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 5.21804 0.36336 14.361 < 2e-16 *** age -0.42792 0.05395 -7.931 2.17e-15 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for Negative Binomial(3.1142) family taken to be 1) Null deviance: 109.841 on 14 degrees of freedom Residual deviance: 15.903 on 13 degrees of freedom AIC: 95.102 Number of Fisher Scoring iterations: 1 Theta: 3.11 Std. Err.: 1.58 2 x log-likelihood: -89.102 Note that both the Null deviance (109.841) and the Residual deviance (15.903) are provided directly in the output and already tell me that the predicted values of the considered model adjust pretty well to the observed (count) data given the large difference between the Null and Residual deviances. 
These two deviance-related values can also be directly extracted as follow: m.walleye.nb2$null.deviance [1] 109.8413 m.walleye.nb2$deviance [1] 15.90327 Using first the hnp() function of the hnp package (Moral et al. 2017) I can tell that the Pearson residuals of my model are behaving as expected given the distributional assumptions of this Poisson extension, i.e. the negative binomial type II (nb2). This model is thus adequate (i.e., goodness-of-fit), but I'm interested too in obtaining a "calibration" metric to get an idea of how well the predictions adjust to the observed data. That's where D2 is useful, and more so from an "explanatory power" than "adequacy" perspective: D2<-100*(1-m.walleye.nb2$deviance/m.walleye.nb2$null.deviance) D2 [1] 85.52159 This quite high value is not unexpected, as counts are deacreasing with age (walleyes die as they age) and despite the high variation observed in these age-frequencies data, the .nb2 model is capturing most of the signal for the central tendency and its associated variance. If I use the gam() function of the mgcv package, I can run the exact same model when not recoursing to a smoothing function s() and specifying the argument method="ML" as REML would otherwise be used by default: library(mgcv) summary(m.walleye.nb2.GAM<-gam( count~age,family=nb(theta=NULL),method="ML",data=walleye)) Family: Negative Binomial(3.114) Link function: log Formula: count ~ age Parametric coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 5.21804 0.36336 14.361 < 2e-16 *** age -0.42792 0.05395 -7.931 2.17e-15 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 R-sq.(adj) = 0.774 Deviance explained = 85.5% -ML = 44.551 Scale est. = 1 n = 15 Note that the parameter estimates are identical to those obtained with glm.nb() of the "MASS" package. The Deviance explained provided in the output corresponds to the one that was calculated for the previous .nb2 model. If we calculate D2 by hand for this nb2.GAM model: D2<-100*(1-m.walleye.nb2.GAM$deviance/m.walleye.nb2.GAM$null.deviance) D2 [1] 85.52159 Identical. Now, fitting the same model in glmmTMB using .NB2 instead of .nb2 to differentiate it from the one obtained with glm.nb(), we get: library(glmmTMB) summary(m.walleye.NB2<-glmmTMB(count~age,family=nbinom2,data=walleye)) Family: nbinom2 ( log ) Formula: count ~ age Data: walleye AIC BIC logLik deviance df.resid 95.1 97.2 -44.6 89.1 12 Dispersion parameter for nbinom2 family (): 3.11 Conditional model: Estimate Std. Error z value Pr(>|z|) (Intercept) 5.21804 0.34947 14.931 < 2e-16 *** age -0.42792 0.05485 -7.802 6.09e-15 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 The parameter estimates are again identical to those of .nb2 and .nb2.GAM and the deviance provided is of 89.1. All models have the same log-likelihood: logLik(m.walleye.nb2) 'log Lik.' -44.55119 (df=3) logLik(m.walleye.nb2.GAM) 'log Lik.' -44.55119 (df=3) logLik(m.walleye.NB2) 'log Lik.' 
-44.55119 (df=3) As indicated by @Ben Bolker, the deviance reported in glmmTMB is: deviance<-(-2*-44.55119) deviance [1] 89.10238 The "deviance" here is obviously the same for all three models (89.10238) and is used for instance in the calculation of the AIC = -2(log-likelihood) + 2K where K is the number of parameters (df=3 from above): 89.10238+(2*3) [1] 95.10238 In the end, the more specific question would be: How can I extract the deviance-related information (null and residual deviances) from a glmmTMB object for the computation of D2 in such a simple model (one predictor, fixed effect)? Although not of prime importance, this metric nonetheless helps to provide an idea of how "good" is your model at explaining the variation in your data, knowing that a top-ranking, adequate model may have a low explanatory power while still being useful. As such, knowing this information is desirable to sometimes tone down a statement related to the model predictions. I guess that Simon Wood has not provided D2 in the gam() function output of his "mgcv" package for no reason.
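For what it is worth, the workaround I have been considering (please correct me if this is wrong) is to rebuild the two deviances by hand for the glmmTMB fit: take the saturated log-likelihood with $\mu_i = y_i$ and theta held at its estimate from the fitted model, and fit an intercept-only glmmTMB model for the null part. I would not expect this to reproduce glm.nb()'s null deviance exactly, because the intercept-only glmmTMB fit re-estimates its own theta.

m.null <- glmmTMB(count ~ 1, family = nbinom2, data = walleye) # intercept-only (null) model
theta <- sigma(m.walleye.NB2) # dispersion parameter (theta) of the nbinom2 fit
mu.sat <- pmax(walleye$count, 1e-8) # saturated model: mu_i = y_i (guard against exact zeros)
ll.sat <- sum(dnbinom(walleye$count, mu = mu.sat, size = theta, log = TRUE))
dev.res <- 2 * (ll.sat - as.numeric(logLik(m.walleye.NB2))) # "residual deviance"
dev.null <- 2 * (ll.sat - as.numeric(logLik(m.null))) # "null deviance"
D2 <- 100 * (1 - dev.res / dev.null)
D2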
Sometimes, in a meta-analysis, it happens that studies include more than one independent sample and multiple component outcomes are reported for each sample. Which model would be more reasonable to use, three-level or four-level meta-analysis? I have read the two nice papers “Three-level meta-analysis of dependent effect sizes” and “Meta-analysis of multiple outcomes: a multilevel approach” by Prof. Wim Van den Noortgate, et al. In three-level meta-analysis, for dependences like multiple outcomes from the same study (i.e., correlated effects), the dependence was modelled as hierarchical effects, that is, the three-level model for correlated effects was: level 1-sample, level 2-outcome, level 3-study, whereas for hierarchical effects, level 1-sample, level 2-sub-study, level 3-study. My question is: if the two types of dependence occur in the same meta-analysis, would it be considered reasonable to conduct a four-level meta-analysis, i.e., level 1–sample, level 2–outcome, level 3–sub-study, and level 4–study? Or conduct a three-level meta-analysis, i.e., level 1–sample, level 2–outcome/sub-study, and level 3–study? Would there be a general requirement for the number of units in each level? If using three-level model, it seems difficult to interpret the variance estimate on level 2.
Is it possible to include mixed effects into a random forest model in R? I know about the lmer (from lme4) and randomForest (from randomForest) functions but it would be nice if I could combine the two in some way, if that makes sense. I'm afraid that if I don't include random effects in my random forest model, it will be incorrect.
The wilcox.test function in R calculates the pseudomedian and a confidence interval when conf.int=TRUE. In the question Wilcoxon signed rank test - help on interpretation of pseudo median, for example, there is a description of how the pseudomedian is calculated, but I don't understand why the confidence interval cannot be constructed for the median itself rather than for the pseudomedian (the median of the pairwise means).
I have been tasked with evaluating hospital length of stay (LOS) in two groups of patients using the Cox proportional hazards model. One group of patients received a medication, the other did not. Hospital discharge constitutes a failure, there is no censoring (all patients are eventually discharged). The hazards for the two groups of patients were not proportional though, so Cox is no longer the correct approach. I am not sure how to decide on the appropriate model (accelerated failure-time (AFT) model or multiplicative/proportional hazards (PH) model) and the appropriate survival distribution (exponential, Weibull, etc.) to use for the model. Any tips for how to find the appropriate model and distribution?
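A sketch of one direction I am considering (the column and object names here are hypothetical, not from my actual data): since every patient is eventually discharged, all observations are events, and several parametric AFT fits could be compared by AIC, with the Weibull/exponential cases also admitting a PH interpretation.

library(survival)
dat$event <- 1 # no censoring: every stay ends in discharge
dists <- c("exponential", "weibull", "lognormal", "loglogistic")
fits <- lapply(dists, function(d) survreg(Surv(los, event) ~ treatment, data = dat, dist = d))
setNames(sapply(fits, AIC), dists) # which parametric family fits best?

Would comparing AIC (or residual diagnostic plots) across these candidate distributions be a sensible way to choose the model and distribution?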
During the fine-tuning of a DistilBert model, I tried two optimizers (with different parameter sets) on the same dataset. Here are the results: - AdamW: train loss (0.21), val loss (0.33), accuracy (0.88) - SGD: train loss (0.35), val loss (0.35), accuracy (0.87) I read that if: - train loss > val loss: the model is under-fitted - train loss == val loss: the model is well-fitted - train loss < val loss: the model is over-fitted So I would say that the model trained with AdamW is over-fitted, but on the other hand it is (slightly) better. Should I prefer a well-fitted model with a slight loss in validation results, or should I focus only on the best validation results?
Binary cross entropy is normally used in situations where the "true" result or label is one of two values (hence "binary"), typically encoded as 0 and 1. However, the documentation for PyTorch's binary_cross_entropy function has the following: target (Tensor) – Tensor of the same shape as input with values between 0 and 1. (In this context "target" is the "true" result/label.) The "between" seems rather odd. It's not "either 0 or 1, just with a real-valued type", but explicitly between. Further digging reveals this to be deliberate on the part of the PyTorch programmers. (Though I can't seem to find out why.) Granted, given the definition of BCE $( y\log x + (1-y) \log (1-x) )$ it's certainly possible to compute things with target values that aren't strictly {0, 1}, but I'm not sure what the potential use of such a situation is. Under what sort of situations would one potentially compute the binary cross entropy with target values which are intermediate? What would a class label of 0.75 actually mean, philosophically speaking?
I will show what I'm doing it in R to make sure if I'm doing it correctly This is my dataset which I'm using for analysis dput(df2) structure(list(TCGA_ID = c("TCGA-AB-2965", "TCGA-AB-2881", "TCGA-AB-2834", "TCGA-AB-2818", "TCGA-AB-2898", "TCGA-AB-2956", "TCGA-AB-2994", "TCGA-AB-2920", "TCGA-AB-3009", "TCGA-AB-2892", "TCGA-AB-2814", "TCGA-AB-2891", "TCGA-AB-2991", "TCGA-AB-2875", "TCGA-AB-2805", "TCGA-AB-3007", "TCGA-AB-2884", "TCGA-AB-2975", "TCGA-AB-2946", "TCGA-AB-2932", "TCGA-AB-2979", "TCGA-AB-2917", "TCGA-AB-2828", "TCGA-AB-2867", "TCGA-AB-2815", "TCGA-AB-2839", "TCGA-AB-2928", "TCGA-AB-2980", "TCGA-AB-2873", "TCGA-AB-2853", "TCGA-AB-2976", "TCGA-AB-2877", "TCGA-AB-3001", "TCGA-AB-3012", "TCGA-AB-2940", "TCGA-AB-2992", "TCGA-AB-2806", "TCGA-AB-2995", "TCGA-AB-2847", "TCGA-AB-2842", "TCGA-AB-2858", "TCGA-AB-2987", "TCGA-AB-2856", "TCGA-AB-2916", "TCGA-AB-2901", "TCGA-AB-2844", "TCGA-AB-2808", "TCGA-AB-2955", "TCGA-AB-2820", "TCGA-AB-2811", "TCGA-AB-2835", "TCGA-AB-2930", "TCGA-AB-2845", "TCGA-AB-2893", "TCGA-AB-2942", "TCGA-AB-2921", "TCGA-AB-2988", "TCGA-AB-3002", "TCGA-AB-2925", "TCGA-AB-2943", "TCGA-AB-2959", "TCGA-AB-2933", "TCGA-AB-2939", "TCGA-AB-2866", "TCGA-AB-2813", "TCGA-AB-2896", "TCGA-AB-3008", "TCGA-AB-2950", "TCGA-AB-2819", "TCGA-AB-2895", "TCGA-AB-2830", "TCGA-AB-2812", "TCGA-AB-2918", "TCGA-AB-2915", "TCGA-AB-2869", "TCGA-AB-2948", "TCGA-AB-2931", "TCGA-AB-2924", "TCGA-AB-2935", "TCGA-AB-2836", "TCGA-AB-2970", "TCGA-AB-2900", "TCGA-AB-2936", "TCGA-AB-2934", "TCGA-AB-2952", "TCGA-AB-2927", "TCGA-AB-2817", "TCGA-AB-2949", "TCGA-AB-2914", "TCGA-AB-2996", "TCGA-AB-2885", "TCGA-AB-2882", "TCGA-AB-2825", "TCGA-AB-2823", "TCGA-AB-2888", "TCGA-AB-2919", "TCGA-AB-2890", "TCGA-AB-2984", "TCGA-AB-2897", "TCGA-AB-2865", "TCGA-AB-2983", "TCGA-AB-2841"), turqoise_module = c("High", "Low", "High", "High", "Low", "High", "Low", "High", "Low", "Low", "High", "High", "Low", "Low", "High", "Low", "High", "Low", "Low", "Low", "Low", "Low", "Low", "High", "Low", "High", "High", "Low", "Low", "High", "High", "Low", "Low", "Low", "Low", "Low", "Low", "Low", "High", "High", "Low", "High", "High", "High", "Low", "High", "Low", "Low", "High", "High", "Low", "High", "Low", "High", "Low", "High", "High", "Low", "High", "High", "Low", "High", "Low", "High", "High", "High", "Low", "Low", "Low", "High", "High", "High", "High", "High", "High", "Low", "High", "Low", "High", "High", "High", "High", "Low", "High", "High", "High", "Low", "Low", "Low", "Low", "High", "Low", "High", "Low", "Low", "Low", "High", "Low", "Low", "High", "High", "Low"), OS_MONTHS = c(11.3, 29.7, 7.7, 10.2, 36.1, 5.7, 83.5, 11.8, 19, 59.3, 26.3, 21.5, 88.3, 27.7, 18.5, 75.8, 24.8, 34, 24.4, 42.1, 47, 57.3, 99.9, 5.2, 26.3, 16.3, 4, 47.5, 32.7, 3.1, 30, 41.4, 76.2, 86.6, 55.4, 56.3, 30.6, 73.6, 52.7, 0.3, 19.2, 6.3, 5.3, 45.3, 10.5, 3.9, 118.1, 16.4, 0.3, 8.2, 77.3, 7.1, 9.3, 6.6, 43.5, 8.1, 0.8, 46.8, 7.9, 4.2, 15.4, 4.6, 36.9, 5.5, 1.3, 7.5, 27, 40.3, 95.6, 5.7, 8.1, 11.5, 7.4, 0.5, 27.1, 18.1, 0.1, 26, 1.6, 17, 10.7, 6.3, 13.8, 6.6, 1.9, 2.4, 9.3, 32.6, 48.3, 73, 7, 11, 7.5, 0.2, 33.5, 26.8, 0.5, 71.3, 30.5, 2.3, 11.2, 46.5), Status = c(1L, 0L, 1L, 1L, 0L, 1L, 0L, 1L, 1L, 0L, 1L, 1L, 0L, 0L, 1L, 0L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 1L, 1L, 1L, 1L, 0L, 0L, 1L, 1L, 0L, 0L, 0L, 1L, 1L, 1L, 0L, 1L, 1L, 1L, 1L, 1L, 0L, 0L, 1L, 0L, 1L, 1L, 1L, 0L, 1L, 1L, 1L, 0L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 0L, 1L, 1L, 1L, 1L, 0L, 0L, 1L, 1L, 1L, 1L, 1L, 0L, 1L, 1L, 0L, 1L, 1L, 1L, 1L, 0L, 0L, 1L, 1L, 1L, 0L, 0L, 0L, 1L, 1L, 1L, 1L, 0L, 0L, 1L, 0L, 0L, 
1L, 1L, 1L)), row.names = c(NA, -102L), class = "data.frame") Where the turqoise_module was obtained based on the a signature of 31 genes which I was able to get based on the explanation here Now what I do next is this fit1 = survfit(Surv(OS_MONTHS, Status)~ turqoise_module, data=df2) fit1 out1 = survdiff(Surv(OS_MONTHS, Status)~ turqoise_module, data=df2) out1 The output of the above model fitting is this fit1 Call: survfit(formula = Surv(OS_MONTHS, Status) ~ turqoise_module, data = df2) n events median 0.95LCL 0.95UCL turqoise_module=High 51 48 7.1 5.7 8.2 turqoise_module=Low 51 17 NA 55.4 NA > out1 Call: survdiff(formula = Surv(OS_MONTHS, Status) ~ turqoise_module, data = df2) N Observed Expected (O-E)^2/E (O-E)^2/V turqoise_module=High 51 48 18.4 47.7 74.7 turqoise_module=Low 51 17 46.6 18.8 74.7 Chisq= 74.7 on 1 degrees of freedom, p= <2e-16 When i plot them using this code ggsurvplot(fit1, pval = TRUE, conf.int = TRUE,data = df2, risk.table = TRUE, # Add risk table risk.table.col = "strata", # Change risk table color by groups linetype = "strata", # Change line type by groups surv.median.line = "hv", # Specify median survival ggtheme = theme_bw(base_size = fsize), palette = c("#990000", "#000099","green","black")) I get this output Now to obtain the turquoise_module based on which I categorized high or low I have used this code df1 <- x_te[,gene] df1 <- as.data.frame(df1) df1$LSC22score <- rowSums( sweep(x = df1, MARGIN = 2, STATS = beta1, FUN = `*`) ) df1 <- df1 %>% dplyr::mutate(turqoise_module = dplyr::case_when( LSC22score > median(LSC22score) ~ "High", LSC22score < median(LSC22score) ~ "Low" )) Now my question is when I see the out1 output and the figure generate there is bit of confusion due to my lack of conceptual clarity. ** The expected number of deaths in the high turqoise module group is 18.4, while the observed number of deaths is 48, indicating that there are more deaths in this group than expected. The opposite is true for the low turqoise module group, where the observed number of deaths is 17 while the expected number is 46.6, suggesting that there are fewer deaths in this group than expected. ** But when I see the figure the I thought patient categorized as low are surviving for longer period of time. Any suggestion or help would be really appreciated how to interpret or where am I going wrong
Analysts often use Rubin's rule (RR) to obtain a pooled estimate of a popular quantity from multiple (imputed) datasets. While popular statistical software (such as the R survey package or Stata's mi) will apply RR to any set of inputs, this may lead to invalid inferences when the underlying assumptions are violated. For example, RR assumes "congenial" sources of input. Usually this means that the missing data are modeled drawn from some multivariate or joint distribution that also features the analysis model. In many cases, this assumption is unjustified. For example, multiple imputation with chained equations (MICE) is a noncongenial imputation model with possibly-flexible functions of predictors of missingness. Many machine learning methods also try to predict missingness with added noise but their sampling distributions are unknown and are non-congenial. Most notably, in cases where complex (e.g., stratified, clustered) sampling is employed, clustering errors with the Horwitz-Thompson estimator basically violates all of the underlying assumptions - post-hoc adjustments to covariance matrices do not lend to congeniality. Of course, congeniality is often not enough! How then might one combine multiple imputations of such data for valid inferences? Specifically, how should one pool estimates from multiply-imputed data with complex sampling designs to ensure consistency (at a minimum) and efficiency/unbiasedness (at best)? I found Barlett and Hughes (2020) propose some options but am not sure if there is a more analytical result to rely on or if the bootstrap they recommend is valid for complex samples. Relevant Readings Bartlett JW, Hughes RA. Bootstrap inference for multiple imputation under uncongeniality and misspecification. Statistical Methods in Medical Research. 2020;29(12):3533-3546. Meng, Xiao-Li. “Multiple-Imputation Inferences with Uncongenial Sources of Input.” Statistical Science, vol. 9, no. 4, 1994, pp. 538–58. Jared S. Murray. "Multiple Imputation: A Review of Practical and Theoretical Findings." Statist. Sci. 33 (2) 142 - 159, May 2018.
This answer gives an answer for discrete distributions referencing Halmos (1946). However, I am looking for a more general result. For this, the references provided in the previous answer only dealt with creating non-negative estimators from already existing unbiased estimators. Bickel and Lehmann (1969) give a characterization of unbiasedly estimable functionals of distributions that are absolutely continuous wrt a measure $\mu$. However, the characterization is not constructive. Is there a constructive version of this result (Lemma 4.3, and thus Theorem 4.2)? After a lot searching, nothing came up. If there is not, how feasible is deriving a general construction?
I am to analyze a set of economic variables, taken from multiple countries, and recorded across time. This is certainly a panel dataset. If I'm not mistaken, the pooled OLS, fixed effects and random effects models are linear, while polynomial regression models seem to focus on a single variable. Is there such a thing as a polynomial panel regression model? I am having a very hard time finding any explanation online or in textbooks about this particular topic. Thank you in advance for any help; all possible indications as to where to learn about this topic are welcome.
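For concreteness, is this the kind of thing that would count as a polynomial panel regression? Just a sketch with the plm package in R — the variable names (country, year, outcome, gdp, trade) are made up:

library(plm)
pdat <- pdata.frame(mydata, index = c("country", "year"))
# fixed-effects (within) model with a quadratic term in one regressor
fe_quad <- plm(outcome ~ gdp + I(gdp^2) + trade, data = pdat, model = "within")
summary(fe_quad)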
I am analyzing count data (the number of negative mental health symptoms). I ran a one-sample KS test in SPSS and the sig. is <0.001 for both a Poisson distribution and a normal distribution, indicating that the data follow neither. There are no zero values whatsoever (values range from 1-9). What other options do I have for a regression analysis?
I have some doubts about how to recognize if there are extreme weights after balancing my population with inverse probability treatment weighting. For instance, let's look at these results [code at the end of the post] - I know that age is not perfectly balanced but it doesn't matter as it is just an example: M.0.Adj = Weighted mean-weighted rate for the non-treated population / SD.0.Adj = Standard Deviation in non-treated / M.1.Adj = Weighted mean-weighted rate for the treated population / SD.1.Adj = Standard Deviation in treated / Diff.adj = Standardized Mean Difference / V.Ratio.Adj = The ratio of the variances of the two groups after adjusting Moreover, these are a density plot with the propensity scores and a histogram with weights I made: This is an example of the balance achieved (I don't know if it is useful in this context): What to I have to look at in order to know if there are extreme weights? Do I have to look at the plots? How can I know if I balanced correctly and there are no problems caused by extreme weights so that I don't have to take further action to correct them (e.g. trimming ...)? I don't know how to "recognize" the extreme weights. For those who prefer to have the code: library(cobalt) library(WeightIt) library(dplyr) data("lalonde", package = "cobalt") W.out <- weightit(treat ~ age + educ + race + married + nodegree + re74 + re75, data = lalonde, estimand = "ATT", method = "ps") lalonde <- lalonde %>% mutate(weights = W.out$weights) lalonde <- lalonde %>% mutate(ps = W.out$ps) summary(W.out) bal.tab(W.out, stats = c("m", "v"), thresholds = c(m = .10), disp=c("means", "sds")) library(ggplot2) ggplot(lalonde, aes(x = ps, fill = as.factor(treat))) + geom_density(alpha = 0.5, colour = "grey50") + geom_rug() + scale_x_log10(breaks = c(1, 5, 10, 20, 40)) + ggtitle("Distribution of propensity scores") library(weights) wtd.hist(W.out$weights)
I have a set of 5000 samples with 24 features, and I'm running a grid search using scikit-learn to find optimal values for C, epsilon and gamma in SVR. In total I'm testing 90 different hyperparameter configurations with 5-fold cross-validation. I also included verbose = 10 to get feedback on what's happening. For the first 50 or so hyperparameter configurations the total training time was a couple of seconds and the scores were pretty bad. For the next few combinations training time escalated to 10 minutes or so and the scores got better. Now, evaluating model 64/90, total training time is around 200 minutes. I've already set n_jobs = -1 and pre_dispatch = '1*n_jobs', and checked in the task manager that I didn't have any memory problems. I believe the problem comes when training with higher values of C (C = 100). A similar thing happened when evaluating a Gaussian process regressor on the same data set: that time, considering only two different kernels in a grid search, the worse-performing kernel took under a minute to train and the better kernel took around 2 hours. What is producing this behaviour, and can I do something about it? Should I just reduce the number of hyperparameter configurations? Also, are these times reasonable for a 5000-sample data set?
Given a type 1 ANOVA where both factors are fixed and the cell numbers are equal, how do I mathematically formulate the $H_0$ for the test of the interaction effect? Let's say I have 3 levels of Factor A (rows) and 3 levels of Factor B (columns), and I denote the interaction term in my model by $\gamma$. This is my take on my question: $\gamma_{11}=\gamma_{12}=\gamma_{13}=\gamma_{21}=\gamma_{22}=\gamma_{23}=\gamma_{31}=\gamma_{32}=\gamma_{33}$. Does this seem right?
I am stuck in the diagnostics of my model, and I am looking for advice on what to do. My data frame looks like this:

$ ID      : Factor w/ 15 levels "Buzinza","Kabukojo",..: 1 1 1 1 1 1 1 1 1 1 ...
$ Period  : Factor w/ 3 levels "After","Before",..: 1 3 1 1 2 1 1 2 3 1 ...
$ Class   : Factor w/ 3 levels "Adult female",..: 1 1 1 1 1 1 1 1 1 1 ...
$ no.trans: num 4 5 6 4 15 4 17 22 2 23 ...

The negative binomial model had a better AIC than the Poisson (which was overdispersed):

model_nb <- glmmTMB(no.trans ~ Period * Class + (1 | ID), family = nbinom2(), data = periods)

Normality and overdispersion were OK. VIF was not sensible, so I calculated the variance of the predictor variables and the matrix seemed OK. But plot(fitted(model_nb), resid) showed vertically clustered lines. DHARMa flagged 12 outliers in 3465 observations for the non-transformed data and 6 outliers for the square-root-transformed data. I also tried the lme4 package. I have only those variables, so I cannot complicate the model further - and it is already quite simple. Not sure what to do next; any advice would be very welcome! Cheers,
Let's say we have two Heckman selection models. Can we correlate the residuals from their outcome parts (given that they might have different lengths)? If yes, how?
I am estimating the population mean of the 2023 value of cars from a stratified sample. The value of the cars is right-skewed on visual inspection, and some basic diagnostics indicate that normality assumptions are violated. I need to calculate the 95% CI of the estimated population mean. My first thought was to use the accelerated (BCa) bootstrap; however, after a bit of research, I can't seem to find a package in R that calculates this for stratified samples. Before I go trying to code this from scratch, is there an alternative non-parametric approach in R to calculating confidence intervals for skewed, stratified samples?
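To clarify what I mean, here is a rough sketch of the kind of thing I was hoping already existed in a package (the data frame cars_df, its columns, and the stratum population proportions in prop are all made up, and I am not even sure the BCa interval is valid with strata - that is part of my question):

library(boot)

# cars_df: one row per sampled car, with columns value (2023 value) and stratum
# prop: assumed population share of each stratum (made-up numbers)
prop <- c(A = 0.5, B = 0.3, C = 0.2)

strat_mean <- function(data, i) {
  d <- data[i, ]
  # stratified estimate of the population mean: population share x stratum sample mean
  sum(prop * tapply(d$value, d$stratum, mean)[names(prop)])
}

set.seed(1)
b <- boot(cars_df, strat_mean, R = 5000, strata = cars_df$stratum)
boot.ci(b, type = "bca")   # BCa interval from within-stratum resampling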
I have multiple discrete time Markov processes. They each consist of the same 12 categorical states. I want to model how the probability of each of these states varies over time across each process. To do this, I fit a multinomial logistic regression model with this formula:

State t+1 ~ State t + Time

My data is organized like so:

Subject, Time, State t, State t+1

There are multiple subjects in this dataset. Each subject has its own Markov process that is indexed by time at regular intervals from t=2 to t=600. In other words, each subject has its own series of state transitions, and each state transition for each subject has a corresponding time point that ranges from t=2 to t=600. The multinomial logistic regression model seemed to have worked. It appears as if its fitted values can be used to represent how the probability of each state varies over time. However, I'm unsure if I fit this model correctly. For instance, I defined time as an integer rather than as, say, an ordered categorical variable. I'm not sure if this was the correct choice. But, overall, I'd like to know whether I used this model appropriately.
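To make the setup concrete, here is a sketch of the kind of model fit I am describing (the object and column names are made up; nnet::multinom is just one possible implementation of a multinomial logit):

library(nnet)

# transitions: one row per observed transition, with columns
# subject, time (integer, 2..600), state_t (factor, 12 levels), state_t1 (factor, 12 levels)
fit <- multinom(state_t1 ~ state_t + time, data = transitions)

# Fitted transition probabilities over time, starting from one particular current state
new <- data.frame(state_t = factor("S1", levels = levels(transitions$state_t)),
                  time    = 2:600)
probs <- predict(fit, newdata = new, type = "probs")   # rows = time points, columns = states
head(probs)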
I have an interesting medical problem where the price of failure, and the trade-off between precision and recall, is much, much higher than in typical ML systems I've worked with before. The precision of one class needs to be so high that we actually measure it in false positives per day, which should be 0.01 or less. In terms of precision it is one in five million, 99.99998%. To get enough test data to have an expected value of one failure, there needs to be 100 days of data. Does that also mean there needs to be 3,000 days of data to say anything with confidence about the precision metric (30x that figure)? 3,000 days of data is 600x how much training data there is currently. However, the training data focuses on the edge cases; it's not a typical day. A typical day will be mostly easy cases with maybe one interesting case per day. In that sense, we already have well over 3,000 days' worth in terms of expected interesting cases. What are some best practices for getting confidence in deploying the model, short of actually collecting the raw amount of real-world data needed to definitively hit the metric? It seems that estimating how many interesting cases are hit per day, and estimating what the 3,000-day metric would be, is the way to go, but I can't convince the owners of it. Are there any similar problems in the literature to point to and learn from? Also, having a test set that's orders of magnitude bigger than the training set feels weird. I'm not sure it's necessarily wrong per se, but it is definitely expensive and counterintuitively out of proportion with a typical real-world ML test set.
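To make the "how much data would it take" part concrete, this is the back-of-the-envelope calculation I have been doing (a sketch only; the daily case volume is a made-up number):

# Suppose a day contains roughly 50,000 negative cases (made-up number), so that
# 0.01 false positives per day corresponds to a per-case rate of about 2e-7.
n_cases_per_day <- 50000
n_days <- 100
n <- n_days * n_cases_per_day

# If zero false positives are observed in n cases, an exact 95% interval for the
# per-case rate comes from binom.test; the one-sided "rule of three" (3/n) is the same order.
binom.test(0, n)$conf.int[2]
3 / n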
In Minitab, we can obtain the sample size/power result with only the maximum difference between the group means instead of ALL the group means (as seen in the picture below). How is this done? What are the formulas for the calculation of the sums of squares? I guess R should also be able to do this; however, it seems that functions such as power.anova.test and pwr.anova.test both require ALL the group means. Does someone know what Minitab does behind the scenes, and how we can do this in R?
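My current guess - and it is only a guess - is that Minitab assumes the least favourable configuration for the stated maximum difference: two means separated by that maximum and all the other means at the midpoint. Under that assumption, a sketch like this would reproduce it in R (numbers made up):

k     <- 5    # number of groups
Delta <- 10   # maximum difference between any two group means
sigma <- 5    # within-group standard deviation

# Least favourable configuration: two means Delta apart, the rest at the midpoint
means <- c(-Delta / 2, Delta / 2, rep(0, k - 2))

power.anova.test(groups      = k,
                 between.var = var(means),   # var() of the hypothesised means
                 within.var  = sigma^2,
                 power       = 0.80)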
I have a list of students from different classes. So far I have used a linear regression to create predicted future test scores based on multiple criteria such as attendance, previous test scores, etc. I would now like to use these predicted scores to work out the probability of each student finishing 1st, 2nd, 3rd, etc. in their class. One concern I have is that the classes are all different sizes. I am very new to this, so any advice on the most efficient way to work out probabilities based on predicted scores would be greatly appreciated. Thanks in advance
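To show the kind of thing I am imagining, here is a rough simulation sketch (the data frame, column names, and residual standard deviation sigma_resid are all made up, and I do not know whether this is a sound approach - hence the question):

# students: one row per student, with columns student, class_id and pred_score
# sigma_resid: residual standard deviation taken from the fitted regression
set.seed(42)
n_sims <- 10000

rank_probs <- lapply(split(students, students$class_id), function(cls) {
  # simulate plausible outcomes around each student's predicted score
  draws <- matrix(rnorm(nrow(cls) * n_sims, mean = cls$pred_score, sd = sigma_resid),
                  nrow = nrow(cls))
  # P(finish 1st) = share of simulations in which the student has the top simulated score
  p_first <- rowMeans(apply(draws, 2, function(x) rank(-x) == 1))
  data.frame(student = cls$student, p_first = p_first)
})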
So I'm trying to use a weighted binary cross-entropy function, and I'm trying to calculate the weights for each class. I have 14 of them in the target variable. I'm using the following function for calculating the weights:

def compute_class_freqs(labels):
    """
    Args:
        labels (np.array): matrix of labels, size (num_examples, num_classes)
    Returns:
        positive_frequencies (np.array): array of positive frequencies for each class, size (num_classes)
        negative_frequencies (np.array): array of negative frequencies for each class, size (num_classes)
    """
    N = len(labels)
    print(labels)
    positive_frequencies = np.sum(labels, axis=1) / N
    negative_frequencies = 1 - positive_frequencies
    return positive_frequencies, negative_frequencies

I'm taking BATCH_SIZE as 32. Since the total data size is 112,121, which is obviously not a multiple of 32 or any power of 2, some samples get left in a last batch that is smaller than 32, and hence I get broadcasting errors from numpy. Should I ignore the last batch when calculating the weights, or is it significant enough that I shouldn't? I'm using tensorflow 2.11.0 in Python 3.9.16.
I want to test a claim that the population probability is at least 0.9, and I'm confused about how I should set up my alternative hypothesis. Should it be H0: p = 0.9 vs H1: p < 0.9, or H0: p = 0.9 vs H1: p > 0.9?
I am studying this source about the one-way ANOVA test in R. We know that the ANOVA test assumes that the data are normally distributed and that the variance across groups is homogeneous. In the source they claim that we can check this with some diagnostic plots. In the part "Check the homogeneity of variance assumption", they say that the residuals-versus-fits plot can be used to check the homogeneity of variances:

The residuals versus fits plot can be used to check the homogeneity of variances. In the plot below, there is no evident relationships between residuals and fitted values (the mean of each groups), which is good. So, we can assume the homogeneity of variances.

But it is not explained how we can see this from the plot. Is it because of the distribution of the points, or because of the red line? So here is some reproducible code with the plot they are talking about:

library(ggpubr)
#> Loading required package: ggplot2

my_data <- PlantGrowth
my_data$group <- ordered(my_data$group, levels = c("ctrl", "trt1", "trt2"))

# Compute the analysis of variance
res.aov <- aov(weight ~ group, data = my_data)
# Summary of the analysis
summary(res.aov)
#>             Df Sum Sq Mean Sq F value Pr(>F)  
#> group        2  3.766  1.8832   4.846 0.0159 *
#> Residuals   27 10.492  0.3886                 
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

# 1. Homogeneity of variances
plot(res.aov, 1)

Created on 2023-04-07 with reprex v2.0.2

So I was wondering if anyone could please explain how to interpret this plot and why it could tell us something about the homogeneity of variance assumption?
I have fitted two robust linear mixed-effects models, null.model and full.model, with the same random-effects term, (1 | id), to a data set using robustlmm::rlmer. These models differ only by a predictor x:

null.model <- rlmer(y ~ (1 | id), data)
full.model <- rlmer(y ~ x + (1 | id), data)

I chose to fit robust LMMs over LMMs (provided by lme4::lmer or lmerTest::lmer) because I saw that the residuals of the LMMs did not follow the straight line at the ends when I plotted Q-Q plots against a standard Gaussian distribution. I am now facing the problem of comparing these two models. The manual for robustlmm says "... the log likelihood is not defined for the robust estimates returned by rlmer." So I can't perform a likelihood ratio test along the lines of anova(null.model, full.model). Is there any way to compare these two models? Am I missing any important statistical assumptions that I should be aware of before I think of comparing these models?
I'm following this note to learn about deriving an upper bound for the UCB algorithm on the stochastic multi-armed bandit problem. In particular, the proof of Lemma 15.6 there suggests that we can apply Hoeffding's inequality to $$ \Pr\left(\frac{1}{N_{t,a}}\sum_{s=1}^t Z_{s,a} I_{A_s=a} - \mu_a \geq \epsilon \right) $$ where the $Z_{s,a}$ are i.i.d. rewards with mean $\mu_a$ and are independent of the other rewards, $A_s$ is the action we take at time $s$, which is in general a function of the past rewards and past actions, and $N_{t,a}=\sum_{s=1}^t I_{A_s=a}$. Of course, a natural starting point is to condition on $N_{t,a}=n$, so that inside the probability there are $n$ i.i.d. random variables. However, I cannot proceed because the $Z$'s are clearly not independent of $N_{t,a}$ (as $N_{t,a}$ depends on $(A_s)_{s=1,\dots,t}$ and each $A_s$ depends on $(Z_{i,a})_{i=1,\dots,s-1,\ a\in \mathcal{A}}$; in particular, UCB specifies $A_{s+1}= \arg\max_a U_{s,a}$, where $U_{s,a} := \frac{1}{N_{s,a}}\sum_{u=1}^s Z_{u,a} I_{A_u=a} + \sqrt{\frac{\log(s)}{2N_{s,a}}}$). Similarly, online notes here (page 2, Lemma 1) and here (page 8, Theorem 4) all directly claim that we can apply Hoeffding's inequality to the above without justification. I'm aware that there are other methods to derive an upper bound for UCB, but I'd really like to confirm whether this approach is a dead end.
I am asked in a homework question to prove asymptotic normality for the generalized method of moments estimator. The assumptions (which I think are necessary to solve this particular subproblem) given in the theorem are:

$(Z_i)_{i \in \mathbb{N}}$ is a sequence of i.i.d. random variables.

$g(z|\theta)$ is continuously differentiable w.r.t. $\theta$ in a neighborhood $\mathcal{N}$ of $\theta_0\in \operatorname{Int}(\Theta)$ ($g$ is a moment restriction function, $\theta_0$ is the true parameter, and $\Theta\subset\mathbb{R}^k$ is the parameter space).

$\mathbb{E}[\sup_{\theta\in \mathcal{N}}||g(Z_i|\theta)||]<\infty$.

In the concluding argument of the proof I need to show that $G_n(\theta):=\frac{1}{n}\sum_{i=1}^{n}\partial_\theta g(Z_i|\theta)$ converges uniformly to $G(\theta) = \mathbb{E}[\partial _\theta g(Z_i|\theta)]$, i.e.
$$\sup_{\theta\in\mathcal{N}} ||G_n(\theta) - G(\theta)||\stackrel{P}{\rightarrow}0.$$
It is hinted that the convergence follows from conditions 2 and 3. I have also snooped around various Stack Exchange sites and gotten a hunch that the Borel-Cantelli lemma might be helpful, but at this point I am truly lost. Any help would be greatly appreciated! (If you feel you would need more information on the problem, please let me know.)
I had planned to perform a 3 x 2 repeated-measures ANOVA before I realized that all the variables follow a bimodal, U-shaped distribution in which 0 and 1 are the modes. The high occurrence of 0s and 1s is meaningful to the analysis, and therefore it may be inappropriate to transform the data. Why is the variable bounded? The outcome variable is a gaze behavior: zero indicates the absence of the behavior, and 1 indicates that the behavior is strong. What values are within the variable? There are 0s and 1s, and lots of values in between. Considering that 1) the data violate the ANOVA assumption of normality of residuals, 2) I prefer not to transform the data, and 3) the sample size is small (n = 30, within-subjects), I am now planning to proceed with a non-parametric permutation test. Would my understanding be correct?
I am training an LSTM where I have sales data from 20 different individuals over the past 10 years. Now, I read this brilliant answer: How to train LSTM model on multiple time series data? But, due to domain knowledge / prior analysis, I believe that only “recent” timesteps are useful in predicting a future timestep, let’s say, the previous 1 year. Now, what do I do with all of this historical data? Do I just not use it? Can I just create more subsequences out of these longer sequences and train the LSTM on these as well? Or do I have to input the whole sequence for each individual?
The formulation of the conditional density is:
$$ f_{Y|X}(y|x) = \frac{f_{X,Y}(x,y)}{f_X(x)}. $$
I need to estimate this density from data, and it is prohibitively time-consuming to calculate the joint density (I have tens of variables). However, for fixed values of $x$, it is easy to estimate the conditional density directly (without calculating the joint). I also have access to the CDF of $X$ and its inverse, $F_X$ and $F_X^{-1}$. My approach to estimate the conditional density is then to calculate this:
$$ \hat{f}_{Y|X}(y|x) \overset{?}{=} \frac{\sum_i f_{Y|X}\left(y \mid x=F_X^{-1}(q_i)\right)}{NF}, $$
where the $q_i$ range over a set of quantile levels (for example, $n$ evenly spaced points between 0 and 1) and $NF$ is a normalization factor to ensure the density integrates to 1. I admit not having any solid understanding of why I did this, but if someone can at least point me to some resources, it would be greatly appreciated. Extra, non-relevant note: the densities in question are actually copula densities, but that should not alter any of the equations.
Experts, I fitted a GAM model with the mgcv package containing a significant independent variable: the number of days since the first action (the maximum number of days was 40). The dependent variable is the action of frogs. My question is: do you recommend predict.gam or a time-series function for predicting the action patterns of frogs? Thanks a lot for your advice!
Is there a theoretical upper limit to the number of parameters that can be estimated with maximum likelihood estimation? My understanding is no, but that if you have too many parameters it may not be possible to find one set of parameters that uniquely optimizes the log-likelihood. Assuming the above is correct, practically speaking, how can I determine whether my model has too many parameters? Are there tests or guidelines regarding how many observed data points I need relative to my number of parameters?
I want to run robustness tests for my model - for example, by reducing the sample to heavily concentrated groups, running a different regression (probit, etc.), and so on. But how do I ascertain that my results are robust? Is it sufficient that my key explanatory variables have the same sign as in the original model, that the magnitudes of the coefficients are similar, and that they are significant? Or is it necessary that the coefficients are exactly the same? Thank you
Some background on my problem - let us consider a discrete memoryless channel (DMC) $W_{Y|X}$ from Alice to Bob. A DMC is a conditional probability distribution over the random variable $Y$ given input random variable $X$. We wish to send a uniformly randomly chosen message from a set of size $\mathcal{M}$ over this channel with average transmission error at most $\varepsilon$. To achieve this, one has an encoder on Alice's side that takes the message as input and outputs a random variable $X$. $X$ is the input to the channel, which outputs the random variable $Y$. A decoder on Bob's side takes the random variable $Y$ as input and outputs a message. A transmission error has occurred in the communication protocol if the input and output messages are different; otherwise, Alice and Bob have successfully sent a message using $W$. Now consider $n$ i.i.d. copies of the given DMC and denote them by $W_{Y|X}^{\otimes n}$ for $n\in\mathbb{N}$. We ask what the maximum communication rate can be using the DMC $W_{Y|X}^{\otimes n}$. As before, we have an encoder that takes a uniformly random message from a set $\mathcal{M}(n)$ and outputs an $n$-bit string $X^n$. Let $p_{X^n}$ be the distribution over the $n$-bit strings $X^n$ when we encode a uniformly random message. $p_{X^n}$ must be invariant under any permutation of the $n$ positions, since we have $n$ i.i.d. copies of the same channel. As we increase $n$, we add more i.i.d. copies of $W_{Y|X}$. Hence, the permutation invariance of our capacity-achieving $p_{X^n}$ holds for any choice of $n\in\mathbb{N}$. For $k\in\mathbb{N}$, let us consider i.i.d. distributions $q_{X^k}^i = \prod\limits_{j=1}^k q^i_{X_j}$ for any choice of distribution $q^i$. Does there always exist a convex combination of such i.i.d. distributions such that $$p_{X^k} = \sum_{i}\mu(i)q_{X^k}^i,$$ where $\mu(i)$ is some measure that assigns a weight to each element of our convex combination? I am not sure whether this claim is a special case of the de Finetti theorem (see Theorem 2 of these notes). The point I am confused about is whether my $p_{X^k}$ can be thought of as extendible for $n>k$, which would allow me to invoke the de Finetti theorem.
I'm struggling a bit with the intuition behind resampling tests for a difference in means. I have two samples, s1 and s2, of size n1 and n2 respectively. The population parameters are unknown. I'd like to know whether the means of the two populations from which the samples came are different. The permutation approach for this, as I've read it, is:

1. Note the observed difference in means between s1 and s2.
2. Combine both s1 and s2 into a single large pool.
3. Draw n1 samples and compute their mean; compute the mean of the remaining n2 samples; note the difference between the two means.
4. Repeat the above a large number of times, thus getting a distribution for the difference in means.
5. Test the observed difference in means from step 1 against the distribution from step 4 for significance.

My problem is this: the null distribution generated in step 4 - wouldn't that be wrong if the samples were actually from populations that did have different means? What we get in step 4 seems to be a distribution of the difference in means when the two populations are combined, without any relevance to whether the means were the same. How can this be used as the null distribution, when the null hypothesis is supposed to be that the two populations have the same mean? I have the same question for the bootstrap as well (i.e., if in step 3 we draw the samples of size n1 and n2 with replacement). Thanks!
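For what it's worth, this is the procedure I have in mind, written out as a quick sketch (toy data, two-sided p-value):

set.seed(1)
s1 <- rnorm(20, mean = 0)        # toy sample 1
s2 <- rnorm(25, mean = 0.5)      # toy sample 2
obs_diff <- mean(s1) - mean(s2)

pool <- c(s1, s2)
n1 <- length(s1)

perm_diffs <- replicate(10000, {
  idx <- sample(length(pool), n1)          # relabel: which pooled values go to "group 1"
  mean(pool[idx]) - mean(pool[-idx])
})

# two-sided p-value: how often a relabelled difference is as extreme as the observed one
mean(abs(perm_diffs) >= abs(obs_diff))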
I am trying to simulate data coming from a joint model of longitudinal and survival data. Basically, my thought process is this. I need to define a maximum follow-up time, F. I need to define the coefficients ($\boldsymbol{\beta}$, $\sigma_t$), generate $X$ from some distribution and $W$ from some distributions, and define $\alpha$ and $\gamma$. I need to use the maximum follow-up time to solve for the integrals using the uniroot function. For each individual $i$, I generate survival probabilities from the uniform distribution:
\begin{equation} \label{eq:uniroot1} S(t|W, X) \sim U(0,1). \end{equation}
For each individual, I am trying to solve
\begin{equation} \int_0^F h(u|\textbf{W}, m_i(u)) \, du + \log U(0,1) = 0, \end{equation}
where $\textbf{W}$ are baseline covariates and $m_i(u)$ is the time-varying covariate, essentially the "longitudinal marker" without the error term, defined as follows:
\begin{equation} y_i(t) = m_i(t) + \epsilon_i(t), \\ m_i(t) = (\beta_0 + b_{i0}) + (\beta_1 + b_{i1})t + \beta_2 X_{2i} + \beta_3X_{2i}t. \\ \end{equation}
And the form of the Weibull PH hazard is as follows:
\begin{equation} \label{eq:weibullPH} (\sigma_t \exp(\alpha m_i(t) +\boldsymbol{\gamma}^T\textbf{W}_i))\, t^{\sigma_t-1}. \\ \end{equation}
Once I solve this equation, I can just generate $C$ from Uniform(0, Max.FollowUpTime) to perform uniform censoring. My questions are: A. Does this seem like the right way to get survival times? B. How do I define the maximum follow-up time such that it is a reasonable upper bound on the survival times for the values I defined in point 2? Thank you for any pointers!
I am currently working with an MLR model comprising 1 numeric/continuous predictor variable (x1), several nominal categorical variables (x2 ... xi), and an interaction term between the continuous variable and one of the categorical variables (x1*x2). I wish to plot the relationship between y and x1 on a 2D scatterplot (ideally in ggplot2) with a line of best fit and confidence intervals. I believe that this is possible in theory, so long as the categorical variables are fixed at a pre-selected/reference level for plotting purposes. My understanding is that the model terms associated with the different levels of the categorical variables will just shift the intercept up/down the y-axis, and not fundamentally change the nature of the relationship between y and x1 (i.e., the slope). However, I have not been able to work out how to generate such a plot in practice. A reprex of a similar (toy) model is provided below:

# Generate data.frame
df <- data.frame(
  "y"  = c(32,27,29,41,26,23,35,36,35,32,29,30,40,27,38,21,31,26,26,34,41,29,26,24),
  "x1" = c(28,32,36,40,44,48,52,56,60,64,68,72,72,68,64,60,56,52,48,44,40,36,32,28),
  "x2" = c("M","F"),
  "x3" = c("A","B","C"),
  "x4" = c("I","II","III","IV")
)
df$x2 <- as.factor(df$x2)
df$x3 <- as.factor(df$x3)
df$x4 <- as.factor(df$x4)

# Generate MLR model
lm <- lm(y ~ x1*x2 + x3 + x4, data = df)
summary(lm)

summary(lm) reads as follows:

Imagine that I wish to plot the relationship between y and x1 on a 2D scatterplot, and use x2=M, x3=A, and x4=III as the reference levels at which I wish to fix these covariates. How would I do so? I have tried manually calculating the predicted values for each of the data points as if they were associated with these reference levels, and plotting them all, like so:

# Manually generate predictions
df$fixed <- 32.424825 +              # intercept
  df$x1*-0.003497 +                  # term for x1
  1*-4.525641 +                      # term for x2=M
  1*0 +                              # term for x3=A
  1*-3.666667 +                      # term for x4=III
  1*(df$x1*0.153846)                 # x1*x2 interaction term when x2=M

# Plot df$fixed vs df$x1
library(ggplot2)
p <- ggplot(data = df, mapping = aes(x = x1, y = fixed)) +
  geom_point() +
  geom_smooth(method = "lm", se = TRUE)
p

This approach has not worked for me, specifically because i) I wish to show the confidence intervals around the line of best fit, and ii) I have a very large dataset (~500k observations). In essence, I think that what I am doing here is passing ~500k fitted values to ggplot2 and then asking it to plot the line of best fit and associated confidence intervals. Unsurprisingly, there is essentially no uncertainty - given that the values are fitted, and that there are so many of them - so the confidence intervals are arbitrarily small. This would thus not appear to be the correct approach. Is anyone aware of a method/package/function with which I can plot y ~ x1 (ideally in ggplot2 graphics) with x2 ... xi held constant, in a way that still shows the uncertainty in the data (i.e., with confidence intervals)? Thank you very much.
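For what it's worth, this is the kind of output I am hoping to produce, sketched with the toy reprex above (I do not know whether this is the right or recommended way, which is essentially my question):

# Build a prediction grid over x1 with the other covariates fixed at the chosen levels
newdat <- data.frame(
  x1 = seq(min(df$x1), max(df$x1), length.out = 100),
  x2 = factor("M",   levels = levels(df$x2)),
  x3 = factor("A",   levels = levels(df$x3)),
  x4 = factor("III", levels = levels(df$x4))
)
pred <- cbind(newdat, predict(lm, newdata = newdat, interval = "confidence"))

library(ggplot2)
ggplot(pred, aes(x = x1, y = fit)) +
  geom_ribbon(aes(ymin = lwr, ymax = upr), alpha = 0.2) +   # confidence band
  geom_line() +
  geom_point(data = df, aes(x = x1, y = y), inherit.aes = FALSE)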
Suppose I have model $M$ generating data $Y=\beta_0+\beta_1X+\beta_2Z+\beta_3W$ with all the $\beta$'s known. Instead of using model $M$, I used the misspecified models $M':Y=\beta'_0+\beta'_1X+\beta'_2Z$ and $M'':Y=\beta'_0+\beta'_1X+\beta'_3W$, and went on to test the hypothesis $H_0:\beta'_1=0$ vs $H_1:\beta'_1\neq 0$ at level $\alpha$ for $M'$ and $M''$.

Q1: Should inverting this hypothesis test give a coverage of $1-\alpha$ in general? The inversion of a test to get a confidence interval requires the assumption that the model is correctly specified. If the model is correctly specified, then coverage + level would be 1. If not, I cannot see why this should even be the case: the level $\alpha$ is always a fixed number, but the coverage could be 0 due to bias.

Q2: Should I even compare efficiency between $M'$ and $M''$ for estimating $\beta'_1$? It could be that one of the models has efficiency > 1. This does not fit into the Cramer-Rao bound context for unbiased estimators. What does it even mean to compare the efficiency, or the confidence interval width, of $\beta'_1$ between $M'$ and $M''$ here?
Say I have a dataset with groups that I want to use for a regression problem, which looks like the following, where feature1 is the group column:

idx:      [0,1,2,3,4,5]
feature1: [1,1,2,2,3,3]
feature2: [6,7,8,9,4,5]
target:   [9,8,4,3,2,6]

How do I split this properly without any data leakage? I've read that you need to split the data by groups such that the groups in train do not appear in test. But doesn't that mean that, if I use the group feature as a categorical feature, then that feature in the test set will be completely unseen? How do I tackle this problem? Can I split the data randomly?
I am struggling to understand a certain inequality based on the $L_2$-error of a regression function estimate. The setting is that of random forests for regression. Let $\Theta = \{ \Theta_{1}, \dots, \Theta_{M} \}$ be the (i.i.d.) random variables that capture the randomness that goes into constructing the individual trees. Let $m(x) = \mathbb{E}\left[ Y ~|~ X=x \right]$ be the (true, unknown) regression function that we want to estimate with the random forest. Assume that the trees in the forest are fully grown, i.e. each cell in a tree contains exactly one of the points subsampled/bootstrapped for the construction of the tree. Consequently, we can write the regression function estimate of the forest as $m_{n}(X) = \sum_{i=1}^n W_{ni}(X)Y_{i}$ where $W_{ni}(x) = \mathbb{E}_{\Theta}\left[\mathbb{1}_{x_{i}\in A_{n}(x, \Theta_{j})}\right]$ and $A_n(x, \Theta)$ is the cell containing $x$ in a tree generated via $\Theta$. Now, the inequality in question is the following; it is from the proof of Theorem 2 in Scornet (2015).
$$ \mathbb{E}\left[m_{n}(X) - m(X) \right]^2 \leq 2 \mathbb{E}\left[ \sum_{i=1}^n W_{ni}(X)(Y_{i}-m(X_{i})) \right]^2 + 2 \mathbb{E}\left[ \sum_{i=1}^n W_{ni}(X)(m(X_{i})-m(X)) \right]^2 $$
My first question is: why is that? I have tried applying the basic textbook error decompositions but am not getting anywhere. My second question is: in the publication, the authors refer to the first term as the "estimation error" and the second as the "approximation error". This does not quite fit with my current understanding of these terms:
Estimation error: error of the selected function as compared to the best possible choice from the hypothesis class.
Approximation error: error of the best possible choice from the hypothesis class as compared to the true regression function.
Getting an intuition on the second question is probably more important to me.
Let’s assume I am simulating data under a given model, and using MCMC with said data to estimate a (known) model parameter. Let’s assume I do this thousands of times. The results I obtain show that the median of the posterior distribution for this parameter is always smaller than the true value, yet the 95% highest posterior density interval generally contains the true value. Is the estimator biased? A biased estimator is one whose expected value systematically differs from the true value. If the median of the posterior distribution is the expected value, then it is, as I always obtain smaller values than expected (never larger ones). However, across ~75% of replicates, the true value still falls within the confidence interval of the expected value. Any thoughts?
I'm trying to derive the influence function of the estimand $\Psi$ $$\Psi(P) = P(Y > y | X = x)$$ Following tutorials for deriving the influence function of the average treatment effect here. Has anyone seen this derived anywhere?
I am conducting a meta-analysis with a skewed distribution. To address this issue, I transformed the "marker" data onto a log scale, "ln marker". I obtained the geometric mean and (geometric) standard deviation of the marker from one article. My goal is to find the 95% CI of ln marker. To do this, I transformed the geometric mean and geometric standard deviation back onto the log scale, so that they became the mean and standard deviation of ln marker again. I then used the formula "mean +/- 1.96 SD/sqrt(N)" to calculate the 95% CI of ln marker, assuming normality after the log transformation. However, I am unsure whether this is the right way to get the 95% CI of ln marker from a geometric mean and SD.
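Spelled out with made-up numbers, this is the calculation I have been doing (a sketch of my own reasoning, not something I have verified):

GM  <- 2.5    # geometric mean reported in the article (made-up value)
GSD <- 1.8    # geometric standard deviation (made-up value)
N   <- 40     # sample size (made-up value)

m <- log(GM)     # mean of ln(marker)
s <- log(GSD)    # SD of ln(marker)

ci_ln <- m + c(-1, 1) * 1.96 * s / sqrt(N)   # 95% CI on the log scale
ci_ln
exp(ci_ln)                                   # back-transformed, if a CI for the geometric mean is wanted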
I am working with a bunch of different GAMs with predictor land-cover and remote-sensing variables derived at different scales, and I am comparing models within scale. I am not interested in spatial and temporal autocorrelation as an explanatory variable in my model, but it does need to be accounted for (checked with Moran's I). I have done that using:

te(long, lat, year, d = c(2, 1), bs = c("ds", "tp"))

For the models within scale I am setting an upper bound for k of my spatio-temporal term, checking that "null" model, and then fixing it for subsequent models that include landscape and remote-sensing variables so they can be compared to my null model. At each scale I have a different-sized data set due to some restriction parameters that reduce the data size at smaller scales. My question is whether the k values for the spatio-temporal term should be set across scales, or whether it is acceptable to set them by assessing the null model at each scale. When I set them at each scale, I am worried by how much it shifts the other variables (especially at the smallest scale, which has the smallest data set). At 30 and 20 km, when given a flexible k max of (10, 5), the null model suggests a spatio-temporal edf of 34, i.e. k of roughly c(7, 5), but at 10 km it suggests an edf of ~14, pushing my k down to (5, 3) - this makes some of my other predictor variables look very different at this scale than at the others. I have a feeling that shifting the year k value across scales is bad and that the lat/long k may be more flexible, but I can't find anything about this. The lat/long k can go to a minimum of 5, which would give 24 edf - c(5, 5). Or is it best to just keep it at c(7, 5) like the others? I am not directly comparing (using AIC) models across scales, but I would like them to be roughly comparable.
In a book about biostatistics, I found this example for calculating an expected value:

Consider the following hypothetical example of a lung cancer study in which all patients start in phase 1, transition into phase 2, and die at the end of phase 2. Unfortunately, but inevitably, all people die. Biostatistics is often concerned with studying approaches that could prolong or improve life. We assume the length of phase 1 is random and is well modeled by an exponential distribution with mean of five years. Similarly, the length of phase 2 is random and can be modeled by a Gamma distribution with parameters α = 5 and β = 4. Suppose that a new drug that can be administered at the beginning of phase 1 increases 3 times the length of phase 1 and 1.5 times the length of phase 2. Consider a person who today is healthy, is diagnosed with phase 1 lung cancer in 2 years, and is immediately administered the new treatment. We would like to calculate the expected value of the survival time for this person. Denote by X the time from entering in phase 1 to entering phase 2 and by Y the time from entering phase 2 to death without taking treatment. Thus, the total survival time is 2 + 3X + 1.5Y and the expected total survival time, in years, is E(2+3X+1.5Y) = 2+3E(X)+1.5E(Y) = 2+3×3+1.5×5/4 = 12.875.

What I don't understand is why, in the last equation, E(X) is set to 3, while in my opinion it should be 5, because the length of phase 1 (for patients not taking the drug) has an exponential distribution with a mean of five years. I'm also a bit confused by this sentence: "Consider a person who today is healthy, is diagnosed with phase 1 lung cancer in 2 years, and is immediately administered the new treatment." Does that mean that the person will enter phase 1 in two years' time, while today they are healthy? It sounds strange to me that medical knowledge could predict, for someone who is healthy today, a future disease two years from now.
I understand that the x-intercept can be calculated using $y = mx + b$ for a linear model. I am unsure whether this is statistically appropriate for a mixed model with count data, given that counts cannot be negative and there are random effects to consider. I have seen examples of x-intercept calculations for count data with simple linear regressions, but I'm unsure whether this method can be extended to mixed models. Here is my model:

mod_6 <- glmmTMB(total_count ~ mean_temp + (1|month) + (1|spread_event), family = nbinom1, data = dat_nc_ncb)
summary(mod_6)

Here is the output:

Family: nbinom1  ( log )
Formula: total_count ~ mean_ws + (1 | month) + (1 | spread_event)
Data: dat_nc_ncb

     AIC      BIC   logLik deviance df.resid
  1399.1   1415.6   -694.5   1389.1      194

Random effects:
Conditional model:
 Groups       Name        Variance Std.Dev.
 month        (Intercept) 0.3671   0.6059
 spread_event (Intercept) 0.3279   0.5726
Number of obs: 199, groups: month, 10; spread_event, 26

Dispersion parameter for nbinom1 family (): 177

Conditional model:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)   3.4928     0.3515   9.936   <2e-16 ***
mean_ws      -1.1099     0.5126  -2.165   0.0304 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Is it statistically accurate if I extract the fixed-effects coefficients using coefficients <- fixef(mod_6), identify the intercept using intercept <- coefficients[1], extract the slope using slope <- coefficients[2], and finally compute the x-intercept using x_intercept <- -intercept/slope? Or would it be more appropriate to use a simple GLM with a quasipoisson family and then calculate the x-intercept? That way, I won't have to worry about random effects.

Details about the experiment: I left my potted plants out in the field for a week, took them back to the glasshouse, and counted the number of infected leaves per plant after two weeks. Plants are infected under ideal temperature conditions.

Analysis goal: I need to find lower temperature thresholds. More details can be found in figures 1-4 here (http://uspest.org/wea/Boxwood_blight_risk_model_summaryV21.pdf), but the basic idea is that we want to find the temperature at which no disease was observed (the lower temperature threshold for disease). Since the goal is to find thresholds, I am happy to let go of the random effects if this allows me to calculate the x-intercept for mean_temp.
Based on a data set, I found a sensitivity (0.82) and specificity (0.88) for a diagnostic test, based on an n = 257 sample. However, I wonder whether I can generalize these numbers. I thought this was very much the same question as inferring a success rate in a binary process (k successes in n trials) using a binomial distribution, but I doubt that running two Bayesian parameter estimations (for sensitivity and specificity respectively) is really the best way to go. Does anyone have an idea of how best to approach this issue, or is it maybe just unnecessary?
Does $P(B|A) = 0$ with $P(A) \neq 0$ mean $A \cap B = \varnothing$? I think I already have an answer, but I'm not sure it's correct. I would say no, because we can consider a variable $X \sim U(0,1)$, described by a continuous uniform distribution, where, for example, $A$ is "$0.2 \leq X \leq 0.9$" and $B$ is "$X = 0.5$". This way we have \begin{align} P(B) = 0, P(A) > 0, A \cap B = B \neq \varnothing, \end{align} but $P(B | A) = P(A \cap B) / P(A) = P(B) / P(A) = 0 / P(A) = 0$.
I'm trying to prove that the 2nd-order polynomial kernel, $K(x_i, x_j) = (x_i^Tx_j + 1)^2$, is a valid kernel, i.e. that it satisfies the following conditions:
K is symmetric, that is, $K(x_i, x_j) = K(x_j, x_i)$.
K is positive semi-definite, that is, $\forall v \space\space v^TKv \geq 0.$
We can actually prove that the second-order polynomial kernel function is a valid kernel by deriving the corresponding transformation function $\phi(x) = [1, \sqrt{2}x_1, ..., \sqrt{2}x_d, x_1x_1, x_1x_2, ..., x_1x_d, x_2x_1, ...x_dx_d]^T$ where $d$ is the number of features (dimensionality). But I do want to prove that the two conditions listed above hold for the given kernel function. My attempts: symmetry is rather straightforward:
$$(x_i^Tx_j + 1)^2 = x_i^Tx_jx_i^Tx_j + 2x_i^Tx_j + 1 = A \in \mathbb{R}$$
$$(x_j^Tx_i + 1)^2 = x_j^Tx_ix_j^Tx_i + 2x_j^Tx_i + 1 = B \in \mathbb{R}$$
It can be observed that $A^T = B$, and since they are scalars, $A = A^T = B \implies A = B$. For the second condition, my attempt is as follows:
$$v^TK = \left[\sum_{i=1}^{n}(x_i^Tx_1 + 1)^2 v_i \space\space ... \space\space \sum_{i=1}^{n}(x_i^Tx_n + 1)^2 v_i\right]$$
$$v^TKv = \sum_{j=1}^{n}\left(\sum_{i=1}^{n}(x_i^Tx_j + 1)^2 v_i\right) v_j = \sum_{j=1}^{n}\sum_{i=1}^{n}(x_i^Tx_j + 1)^2 v_i v_j$$
Now I proceed with expanding the term $(x_i^Tx_j + 1)^2$:
$$v^TKv = \sum_{j=1}^{n}\sum_{i=1}^{n}(x_i^Tx_jx_i^Tx_j + 2x_i^Tx_j + 1) v_i v_j = \sum_{j=1}^{n}\sum_{i=1}^{n}\left(x_i^Tx_jx_i^Tx_jv_i v_j + 2x_i^Tx_jv_i v_j + v_i v_j\right)$$
After this point, I don't know how to proceed. I feel like I have to use the double-sum property:
$$\sum_{i=1}^{n}\sum_{j=1}^{n}a_ib_j = \sum_{i=1}^{n}a_i \cdot \sum_{i=1}^{n}b_i$$
But I can eliminate only the term with $v_iv_j$:
$$v^TKv = \left(\sum_{j=1}^{n}v_j\sum_{i=1}^{n}v_i\right) + 2\left(\sum_{i=1}^{n}\sum_{j=1}^{n}x_i^Tx_jv_iv_j\right) + \sum_{i=1}^{n}\sum_{j=1}^{n}x_i^Tx_jx_i^Tx_jv_i v_j = \left(\sum_{i=1}^{n}v_i\right)^2 + 2\left(\sum_{i=1}^{n}\sum_{j=1}^{n}x_i^Tx_jv_iv_j\right) + \sum_{i=1}^{n}\sum_{j=1}^{n}x_i^Tx_jx_i^Tx_jv_i v_j$$
The first term is greater than or equal to zero, so it can be set aside, but for the rest I cannot come up with any simplification. I have two questions: how should I proceed further at this point, and how can one prove that any polynomial kernel of degree $p$ is PSD using this approach? Thank you for your time.
I'm looking at 4 pairs of independent variables and one dependent variable. The correlation analyses revealed positive but non-significant correlations between all the independent variables and the dependent variable. Is it bad that my correlation coefficients were not significant? The regression analyses revealed significant coefficients, but they were quite low, all between .100 and .200. I calculated R squared, which gives me very low explained variance, from less than 1% to 7%. What does this mean? Is my analysis just not useful? The independent variables were 4 different factors of 2 different marketing strategies, and the dependent variable was purchasing decisions, so I was thinking maybe there is low explained variance because purchasing decisions are influenced by many other things apart from those factors of the marketing strategies, or even marketing strategies in general - e.g., they are also influenced by price, product type/quality, etc. I'm not sure if this makes sense, though. NOTE: My study aims to compare the 2 marketing strategies, so I'm not even sure that variance is that important in my situation. I am looking to see which marketing strategy is the most effective, so should I just focus on which one has the higher coefficients?
I will be administering an early literacy assessment to preschoolers at 2 time points in the preschool year. I want to be able to examine the growth from point A to point B. I also want to administer a social-emotional assessment at the two time points and assess for growth as well. Lastly, I want to examine the correlation between the two. I had initially thought of a growth model but realize that two time points may not be appropriate for this type of model. Any suggestions?
I have merged data to create an event study for the treated population. This treatment happens in 4 batches for some university students in certain cohorts (those who turn 19 in 2005 and are at university when the treatment commenced are treated), but the treatment does not happen for the majority of students and for a variety of cohorts. I have tried running an event study but ran into issues of collinearity, so I think I may turn to matching, as the treatment happens (especially the first treatment) at higher-tier universities, so it may be better to match on all characteristics bar treatment. Is this a fitting way to avoid collinearity in event studies (matching on observables), so that the treatment effect is found via matching? Furthermore, what type of matching would be best here (there are more controls than treated, but obviously some universities are different): exact matching, nearest-neighbour matching, propensity score matching, or fancier genetic matching? The cohorts I have are different birth years, and so they attended university in different years, but this isn't exactly specified in the data, just estimated.
The Bonferroni correction seems to be quite controversial, but I read again and again that it should be used for multiple tests. What exactly counts as multiple testing, though? If I have three different data sets in the same study and run only one t-test on each data set, is that multiple testing, and do I have to apply a Bonferroni correction? Or do we only talk about multiple testing if I have one data set and do three tests on that same data set? I find the statements about the Bonferroni correction very unclear and would be grateful for your expertise.
I'm new to probability theory. Let's say that I have the following situation: Three identical boxes have different collections of doughnuts in them. The box on the left ($L$) has 2 plain ($p$) doughnuts, 3 maple ($m$) doughnuts, and 5 chocolate ($c$) doughnuts. The box in the middle ($M$) has 2 plain doughnuts, 3 maple doughnuts, and 5 chocolate doughnuts. The box on the right ($R$) has 3 plain doughnuts, 4 maple doughnuts, and 6 chocolate doughnuts. You grab a box at random and without looking inside you grab a doughnut from that box. Background: I gave this problem on an Elementary Statistics exam with the question "What is the probability of selecting a plain doughnut?" Since any box and any doughnut inside a box could be selected from the information given in the problem, my answer was $7/33\approx 0.2121$, which is the number of plain doughnuts divided by the total number of doughnuts. One of my students used the rule of total probability: If we let event $A=\{p\}$, event $B=\{L\}$, event $C=\{M\}$, and event $D=\{R\}$, then (I think) \begin{align*} P(A)&=P(A|B)P(B)+P(A|C)P(C)+P(A|D)P(D) \\ &=(2/10)(1/3)+(2/10)(1/3)+(3/13)(1/3)\\ &=41/195\approx 0.2103 \end{align*} That's assuming that $\{B,C,D\}$ is even a legitimate partition here. [Roussas (A First Course in Mathematical Statistics, 1973) defines a partition as a set $A_i\in U$ such that $A_i \cap A_j = \emptyset, i\neq j$ and $\sum_i A_i=\Omega$ where $U$ is a $\sigma$-field for some probability space $(\Omega,U,P)$.] Now that the background has been established, I'm curious to know which one of us is correct and why. The problem is that I don't know enough about probability theory to come to a good answer. I don't want the problem solved for me, but the two questions below will help me in my process. I have started to answer the question from what I do know. My questions are below. I'm letting $\Omega_B=\{L,M,R\}$ represent the outcome space for the boxes. We'll keep the events $B,C,D$ as defined above. I'm also letting $\Omega_d=\{p,m,c\}$ be the outcome space for the doughnuts. I have two questions: I'm trying to represent in notation "the probability that we select a plain doughnut given that we can choose any box". I'm going with $P(A|\Omega_B)$ but I'm not sure that using $\Omega_B$ would be the correct way to represent that. I could also try $P(A|B,C,D)$, but I'm not sure that that would make much sense either. I would also like to represent in notation the "probability that we select a plain doughnut given that we select any box and any doughnut. I'm going with $P(A|\Omega_B\cap \Omega_d)$ here, but, again, I'm not sure how you would represent this here. If what I am doing is sound, great! If not, how would this situation be approached? Let me know if there are still ambiguities in my question. Thanks!
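Just to lay the two calculations out side by side numerically (this only restates the arithmetic already given above; it does not settle which conditioning is intended):

# Instructor's calculation: plain doughnuts over all doughnuts, ignoring the boxes
7 / 33                                             # ~0.2121

# Student's calculation: law of total probability over the three boxes
(2/10) * (1/3) + (2/10) * (1/3) + (3/13) * (1/3)   # ~0.2103
41 / 195                                           # the same value as a fraction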
My dataset consists of temperature measurements from thermocouples, as shown in the figure. I use several models, such as long short-term memory (LSTM) and GRU networks, to predict future values of these thermocouples. Only past measurements of the thermocouples are taken into consideration in my model, no other variables. My RMSE and mean absolute error values for all the models are acceptable. However, the residual errors do not follow a normal distribution: the distribution of the errors is bimodal, where the major mode is a Gaussian distribution and the minor mode is a lognormal distribution. Is it a requirement for a time-series forecasting model to have residual errors that follow a normal distribution? Does the non-normal distribution of the residual errors show that we cannot trust the model? Thank you in advance!
I need to test the relationship between 3 variables. The problem is that one variable is ordinal and the other two are nominal (more precisely, both are dichotomous). I have already looked for a suitable type of analysis, but it seems to me that there isn't one. The best approach currently seems to me to be a logistic regression, but the ordinal variable does not fit in there. Do you have any ideas for what I could search for, or a possible solution?
In a simple neural network, having more nodes on an input layer than on the next layer performs a compression or dimension reduction similar to what PCA does. The fewer nodes encode, in some combination, the information that is in the previous layer. While the forward computation is structurally similar to PCA - the weights form a matrix - it is not equivalent. That is, an autoencoder reduces dimensions as PCA does, but there is no guarantee of orthogonality or correspondence to eigenvalues. Is there an activation function and loss function for one layer (or a larger, more complicated architecture with its own choice of activation and loss functions and backprop alternative) that does converge to the PCA coefficients? That is, is there some way to get a weight matrix that is orthogonal and, at the same time, have the next layer of nodes correspond to the eigensystem (sortable by eigenvalues)? The motivation is to 'do everything' within a neural network architecture rather than use processes outside of the NN model. This way one could remove collinearity for modeling non-linear subspaces.
Say I'm running Metropolis-Hastings with target density $p$. What I would like to do is divide the space $E$, on which $p$ is defined, into a disjoint union $E=\bigcup_iE_i$ and run a separate instance of Metropolis-Hastings inside each stratum $E_i$ (for example, since I can come up with a proposal kernel specifically designed for $E_i$). Now, if I know nothing about $p$, how do I know what a smart choice for the stratification into the $E_i$ is? Intuitively, we somehow want $E=\bigcup_iE_i$ to be a decomposition of $p$ into its "modes", but what does that even mean in general (maybe that $p$ "does not vary too much" inside each $E_i$?)? Is there maybe some trial-and-error mechanism to detect the modes? To make things simpler, please assume that $E=[0,1)^2$.

EDIT: Relevant articles on the web are:
Stratification as a general variance reduction method for Markov chain Monte Carlo: https://arxiv.org/pdf/1705.08445.pdf
Slides corresponding to the paper: https://icerm.brown.edu/materials/Slides/tw19-2-hire/Computing_Rare_Event_Probabilities_by_Stratified_Markov_Chain_Monte_Carlo_]_Brian_Van_Koten,_University_of_Massachusetts-_Amherst.pdf
Xi'an's blog post about this: https://xianblog.wordpress.com/2020/12/03/stratified-mcmc/
However, I don't know how I would use this in my case of $E=[0,1)^2$ and an arbitrary $p$.
Say I want the probability of at least 1 event occurring out of a series of n events, each with differing but known probabilities p_0, p_1, ..., p_n estimated from past behavior. If I assume all of these events are independent, I can get that for any subset of the events via 1 minus the product of the (1 - p_i) over that pool. But now let's say closer inspection of the original events from which I derived the probabilities shows that serial correlation was strong: if an event happened at step t in the event chain, it was more likely to happen at t + 1. If I then re-ask (say, for some new, similar experiment) what the probability of at least one event is, it doesn't make sense that I can use the basic probability formulation above, correct? Should I account for that serial correlation in some way, and if so, how?
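To make the serial-correlation part concrete, here is the kind of simulation I was considering as a sanity check (the transition probabilities and chain length are invented purely for illustration):

set.seed(123)
n_steps     <- 10
p_after_no  <- 0.05   # P(event at t+1 | no event at t)  - made up
p_after_yes <- 0.40   # P(event at t+1 | event at t)     - made up
p_start     <- 0.05   # P(event at step 1)               - made up

sim_chain <- function() {
  x <- numeric(n_steps)
  x[1] <- rbinom(1, 1, p_start)
  for (t in 2:n_steps) {
    p <- if (x[t - 1] == 1) p_after_yes else p_after_no
    x[t] <- rbinom(1, 1, p)
  }
  any(x == 1)
}

mean(replicate(50000, sim_chain()))   # P(at least one event) under serial correlation
1 - (1 - p_after_no)^n_steps          # naive independence calculation at the baseline rate, for comparison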
I am struggling to interpret the location parameters in generalised partial credit models. Say you have location parameters $a_1$, $a_2$ and $a_3$. My professor said that, for the item to be accepted, they must be ordered monotonically, i.e. $a_1 < a_2 < a_3$, which I think has to do with the monotonicity assumption of such models; however, I am not sure.