I am trying to figure out if/how I can understand whether animals are choosing particular plant species for foraging or just using them based on their availability. To do this, I need to characterize the availability of multiple plant species in plots. I have many plots, each one had 8 quadrats (roughly 5% of total plot area), and individuals of each plant species were counted per quadrat. It was not possible to estimate the abundance of plant species in the whole plot, but the plot scale is relevant for the foraging animals. Observers also walked through the whole plot and recorded the presence of any other species that was not detected in the quadrats to gain an overall species list. Now I would like to characterize the relative frequency or abundance of each species at the plot level to test how their availability compares with how they were used by foraging animals. I have found that the animals often (43% of the time) foraged on species that were not observed in quadrats and therefore lack abundance estimates. How can I estimate the availability of those "outsiders"? Here are some ways I have considered:

1. Arbitrarily assign them a low number, assuming that if they were not detected in our quadrats, they were rare. This may bias my results toward preference for these species, especially if they actually are not rare.
2. Throw out observations of animals foraging on these species, since I don't have a way of estimating abundance. This may bias my results by removing observations of foraging on (maybe locally) rare species, when there may be a preference there. Also, that is a lot of data :'(
3. Some elegant way someone here will suggest for estimating the relative abundance of these outsiders? I keep thinking of detection probabilities and how we might use estimates from other plots to work this out, but ???
4. Find another way to assess the importance of particular plant species to these animals, given that the sampling design for plants clearly missed a lot of species. Maybe a logistic model of plants that were used, and covariates... although relative abundance seems important here too...

My inclination is to go with #4, but I wondered if anyone had a better idea? Thanks!
I am analyzing a dataset with two factors, one at three levels and the other at seven, to check how they influence my response variable. However, when testing the ANOVA assumptions, the data follow a normal distribution but are heteroscedastic. I know that for a heteroscedastic one-way ANOVA there is the Welch test, but I have not found any alternative for the multifactor ANOVA.
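A minimal R sketch of the setting described above (two crossed factors with 3 and 7 levels and unequal error variances). `car::Anova()` with `white.adjust = TRUE` is shown only as one heteroscedasticity-consistent option for a multifactor model, not as the definitive alternative to Welch's test, and all the simulated numbers are made up for illustration:

```r
library(car)
set.seed(1)
# 3 x 7 factorial with 10 replicates and error SD that grows with the levels of B
d <- expand.grid(A = factor(1:3), B = factor(1:7), rep = 1:10)
d$y <- rnorm(nrow(d), mean = as.numeric(d$A), sd = as.numeric(d$B))
fit <- lm(y ~ A * B, data = d)
Anova(fit, type = 2, white.adjust = TRUE)  # Wald tests with a heteroscedasticity-consistent covariance
```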
When fitting a Poisson regression on data with low expected values, the intercept term has a small bias even when the model is perfectly specified. Below, I simulated data using $y \sim \text{Poisson}(e^{\beta_0})$ and then fit it with the GLM $\log(E[y]) = \beta_0$. On average, the estimates are slightly biased downwards. The bias is small, but I would like to understand why this happens. I could understand why this would happen if $\beta_0$ were a large negative number and the data were mostly zeros, but the data from the $\beta_0$ values I chose are always mostly non-zero. Why would this happen?

    # function to run the simulation for one value of beta0
    run_sim <- function(b0, n = 50, R = 10000){
      # simulate y values and then estimate b0
      b0_estimates <- sapply(1:R, function(i){
        tmp <- data.frame(y = rpois(n, exp(b0)))
        mod_col <- glm(y ~ 1, data = tmp, family = poisson)
        b0_hat <- mod_col$coefficients[1]
        return(b0_hat)
      })
      # get the bias
      mean_bias <- mean(b0_estimates) - b0
      return(mean_bias)
    }

    # simulate for beta0 values ranging from 1 to 10
    b0_vec <- 1:10
    bias_vec <- sapply(b0_vec, function(b0){
      run_sim(b0, R = 10000)
    })

    # plot the results
    plot(b0_vec, bias_vec, xlab = 'true b0', ylab = 'b0 bias')
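For reference (an added note, not part of the original post): in the intercept-only case the Poisson MLE has a closed form, which makes the direction of the bias easy to see,

$$
\hat{\beta}_0 = \log \bar{y}, \qquad
\operatorname{E}[\hat{\beta}_0] - \beta_0 = \operatorname{E}[\log \bar{y}] - \log \operatorname{E}[\bar{y}] \le 0
$$

by Jensen's inequality, since $\log$ is concave (ignoring the negligible event $\bar{y}=0$ at these $\beta_0$ values and $n = 50$).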
We are using a mixed-effects model to assess the potential impact of different treatments (categorical variables) on a specific soil characteristic (numerical variable). The study uses a randomized complete block design with plots nested in blocks. Longitudinal data were collected, and initial assessments of the raw data indicated temporal autocorrelation. There were 12 total samples taken during each observation, with three treatments represented within four blocks. For example, data collected on date 1 were similar in format to: The treatments were considered the fixed effects, and time and block (both factors) were considered random effects. The basic model being used is:

    M1 <- lmer(y ~ Trt1 + (1|Block) + (1|Date), data)

I have reviewed the results of the model and the residual plots. To look at the residuals, I used the compute_resid() function to calculate marginal and conditional residuals and applied the acf() function. However, since we have multiple observations per date, I am not sure of the best way to handle the residuals. I also tried using lme() with a similar approach but specifying the correlation structure; however, it did not appear to improve the outcome. My primary questions are:

1. Should time (monthly data in this case) be considered a fixed or random effect?
2. How should I assess the residuals (as well as model performance in general)? When I average the residuals by time (I'm not sure this is the best approach), strong autocorrelation is indicated when time is included as a random effect, but the autocorrelation is not present in the residuals when time is included as a fixed effect.
3. What is the best type of residual to use in this case (e.g., conditional vs. marginal)?
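For concreteness, a minimal sketch of the kind of lme() call with an explicit correlation structure mentioned above. The variable names follow the lmer call; `Plot` is an assumed plot identifier nested in Block (corAR1 needs unique time values within each grouping unit), and the AR(1) choice is purely illustrative:

```r
library(nlme)
# same treatment effect, random intercepts for plots within blocks,
# and an AR(1) residual correlation over the measurement dates within each plot
M2 <- lme(y ~ Trt1,
          random = ~ 1 | Block/Plot,
          correlation = corAR1(form = ~ as.integer(Date)),
          data = data)
plot(ACF(M2, resType = "normalized"), alpha = 0.05)  # ACF of normalized residuals
```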
Let's suppose I perform two separate logistic regressions in two different subgroups of my dataset:

    glm(death ~ age + ..... , data = female, family = "binomial") # female population
    glm(death ~ age + ..... , data = male, family = "binomial")   # male population

From these, I obtain an OR for age in the female group and one for the male group (numbers are just examples):

OR for age in the female group: 1.88 (0.41-2.89); p > 0.05
OR for age in the male group: 1.45 (1.20-1.78); p < 0.05
P for interaction: 0.3

So the male OR is significant while the female one is not. However, when I perform the interaction test, the p-value is > 0.05. How do I interpret this? It means that there is no difference between the two ORs, so why is one significant and the other not? If there is no difference, then is the "true" OR < or > 1?
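For reference, the interaction p-value reported above is presumably obtained from a single pooled model along these lines (a sketch only; `pooled` and `sex` are assumed names, other covariates from the separate models would be added in the same way, and this is not necessarily the exact model used):

```r
# pooled model: the age:sex coefficient tests whether the age OR differs between sexes
fit <- glm(death ~ age * sex, data = pooled, family = "binomial")
summary(fit)                 # the p-value of the age:sex term is the interaction test
exp(coef(fit)[["age"]])      # age OR in the reference sex category
```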
I am confused about the derivation of importance scores for an xgboost model. My understanding is that xgboost (and in fact, any gradient boosting model) examines all possible features in the data before deciding on an optimal split (I am aware that one can modify this behavior by introducing some randomness to avoid overfitting, such as by using the colsample_bytree option, but I'm ignoring this for now). Thus, for two correlated features where one is more strongly associated with an outcome of interest, my expectation was that the one more strongly associated with the outcome would be selected first. Or in other words, that once this feature is selected, no additional useful information should be found in the other, correlated feature. This, however, does not seem to always be the case. To put this concretely, I simulated the data below, where x1 and x2 are correlated (r = 0.8), and where Y (the outcome) depends only on x1. A conventional GLM with all the features included correctly identifies x1 as the culprit factor and correctly yields an OR of ~1 for x2. However, examination of the importance scores using gain and SHAP values from a (naively) trained xgboost model on the same data indicates that both x1 and x2 are important. Why is that? Presumably, x1 will be used as the primary split (i.e. the stump) since it has the strongest association with the outcome. Once this split happens (even if over multiple trees due to a low learning rate), x2 should have no additional information to contribute to the classification process. What am I getting wrong?

    pacman::p_load(dplyr, xgboost, data.table, Matrix, MASS, broom, SHAPforxgboost)

    expit <- function(x){ exp(x)/(1+exp(x)) }

    r = 0.8
    d = mvrnorm(n = 2000, mu = c(0,0), Sigma = matrix(c(1,r,r,1), nrow = 2), empirical = T)
    data = data.table(d, replicate(10, rbinom(n = 2000, size = 1, prob = runif(1, min = 0.01, max = 0.6))))
    colnames(data)[1:2] <- c("x1","x2")
    cor(data$x1, data$x2)
    data[, Y := rbinom(n = 2000, size = 1, prob = expit(-4 + 2*x1 + V2 + V4 + V6 + V8 + V3))]

    model <- glm(Y ~ ., data = data, family = "binomial")
    mod <- tidy(model)
    mod$or <- round(exp(mod$estimate), 2)

    sparse_matrix <- sparse.model.matrix(Y ~ . - 1, data = data)
    dtrain_xgb <- xgb.DMatrix(data = sparse_matrix, label = data$Y)
    xgb <- xgboost(tree_method = "hist", booster = "gbtree", data = dtrain_xgb, nrounds = 2000,
                   fold = 5, print_every_n = 10, objective = "binary:logistic",
                   eval_metric = "logloss", maximize = F)

    shap <- shap.values(xgb, dtrain_xgb)
    mean_shap <- data.frame(shap$mean_shap_score)
    gain <- xgb.importance(model = xgb)

    head(mod, 14)     # regression
    head(mean_shap)   # shap values
    head(gain)        # gain
I want to prove the following. For a given image distribution $P(X)$ with sample space $\mathcal{X} \subseteq \mathbf{R}^{n \times m}$, we have a masking model $\phi: X \rightarrow \{0,1\}^{n\times m}$. We also have the complement mask $\bar{\phi}$, which applies the complement of the mask produced by $\phi$. I want to find the entropy $H(X \circ \phi(X) \mid X \circ \bar{\phi}(X))$, where $\circ$ is the Hadamard product. Here is my attempt:
$$\begin{aligned}
H(X \circ \phi(X) \mid X \circ \bar{\phi}(X)) &= \sum_{x_1\in\mathcal{X}}P(x_1)\,H(X \circ \phi(X) \mid x_1 \circ \bar{\phi}(x_1)) \\
&= -\sum_{x_1\in\mathcal{X},\ x_2 \in\mathcal{X}}P(x_1)\,P(x_2 \circ \phi(x_2) \mid x_1 \circ \bar{\phi}(x_1)) \log P(x_2 \circ \phi(x_2) \mid x_1 \circ \bar{\phi}(x_1))
\end{aligned}$$
Based on this paper, page 12, equation 11, this quantity should equal $\mathbb{E}[\log P(X \circ \phi(X) \mid X \circ \bar{\phi}(X))]$, but I am not sure how to progress further to get their equation.
This is the target I want to integrate with Monte Carlo using the control variate method: $$\theta = \int_{1}^{\infty}\frac{x^2}{\sqrt{2\pi}}e^{-x^2/2}dx$$ I have checked with Wolfram that it is 0.400626, so the control variate estimator should converge to this value. I use the standard normal distribution and a gamma distribution (shape = 3, rate = 1) as two control variates, but it fails to converge! What is wrong with my code or my idea? Here is my R code and output.

    sample_size <- seq(from = 100, to = 10^4, by = 10)
    target <- function(x){
      x^2 * exp(-(x^2)/2) / sqrt(2*pi)
    }

    Sim4.1.theta <- numeric(length(sample_size))
    Sim4.1.se <- numeric(length(sample_size))
    Sim4.2.theta <- numeric(length(sample_size))
    Sim4.2.se <- numeric(length(sample_size))

    MC.sim.4 <- function(size){
      u1 <- rnorm(size)
      f2.1 <- u1 <- u1[u1 >= 1]
      T1.1 <- target(u1)
      u2 <- rgamma(size, shape = 3, rate = 1)
      f2.2 <- u2 <- u2[u2 >= 1]
      T1.2 <- target(u2)
      c.star.1 <- -lm(T1.1 ~ f2.1)$coeff[2]
      c.star.2 <- -lm(T1.2 ~ f2.2)$coeff[2]
      T2.1 <- T1.1 + c.star.1*(f2.1 - pnorm(1, lower.tail = FALSE))
      T2.2 <- T1.2 + c.star.2*(f2.2 - pgamma(1, shape = 3, rate = 1, lower.tail = FALSE))
      control1.estimate <- mean(T2.1[u1 >= 1])
      control2.estimate <- mean(T2.2[u2 >= 1])
      control1.se <- sd(T2.1)/sqrt(size)
      control2.se <- sd(T2.2)/sqrt(size)
      return(rbind(control1.estimate, control2.estimate, control1.se, control2.se))
    }

    for (i in 1:length(sample_size)) {
      tem <- MC.sim.4(sample_size)
      Sim4.1.theta[i] <- tem[1]
      Sim4.1.se[i] <- tem[2]
      Sim4.2.theta[i] <- tem[3]
      Sim4.2.se[i] <- tem[4]
    }

    plot(x = sample_size, y = Sim4.1.theta, type = 'l', col = '#2166AC',
         ylim = c(0, 0.5), xlab = '# of sampling size')
    lines(x = sample_size, y = Sim4.2.theta, col = '#B2182B')
    abline(a = 0.400626, b = 0, col = 'red')
I am doing a longitudinal study investigating how Y changes across time, with two time-invariant covariates at level two (between-person): one is Gender and the other is OnAge. So I tried to formulate a conditional growth model via lme4, and if I am correct, the equations should be:

Level 1: $Y_{ti} = b_{1i} + b_{2i}\,\text{Time} + u_{ti}$

Level 2: $b_{1i} = \beta_{01} + \beta_{11}\,\text{Gender}_i + \beta_{21}\,\text{OnAge}_i + d_{1i}$ and $b_{2i} = \beta_{02} + \beta_{12}\,\text{Gender}_i + \beta_{22}\,\text{OnAge}_i + d_{2i}$

Composite: $Y_{ti} = (\beta_{01} + \beta_{11}\,\text{Gender}_i + \beta_{21}\,\text{OnAge}_i + d_{1i}) + (\beta_{02} + \beta_{12}\,\text{Gender}_i + \beta_{22}\,\text{OnAge}_i + d_{2i})\,\text{Time} + u_{ti}$

Accordingly, the formula for lme4 should be

    lmer(Y ~ Gender*Time + OnAge*Time + (Time|id))

The question here is how I should interpret the obtained level-2 coefficients of Gender, Gender*Time, OnAge, and OnAge*Time. I suppose their interpretations should be very different from those in the unconditional growth model.
In my study I want to test whether certain cities were predominantly visited for holidays or for work (see boxplot below). That means I want to know if there is a difference between the grey box and the white box for each group (city). I have about 20 persons in my study. Some were travelling a lot; that is why, for the boxplot, I chose 'proportion of visits' as the y-axis. Can I use a paired Wilcoxon test for each city (paired for each individual and using the number of counts), or is that wrong? I am totally unsure. Help would be very much appreciated.
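For concreteness, a minimal sketch of the per-city paired test described above (the long-format data frame `d` and the column names `city`, `prop_holiday`, `prop_work` are assumptions for illustration):

```r
# one paired Wilcoxon test per city, pairing the two visit proportions within each person
for (ct in unique(d$city)) {
  sub <- d[d$city == ct, ]
  print(wilcox.test(sub$prop_holiday, sub$prop_work, paired = TRUE))
}
```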
I'm working with a data set from here: https://nij.ojp.gov/funding/recidivism-forecasting-challenge To put it simply, it is a binary classification problem. The gang-affiliation variable is pretty useful, but it is only recorded for men, making the data missing not at random, with the missingness completely correlated with the gender variable. I'd like to use both the gender and gang-affiliation variables along with other variables. Could someone point me to resources for handling this kind of missing data? I'm thinking of fitting a logistic regression model and a Bayesian model. Thank you.
I am interested in numerical data imputation problems: how to properly estimate missing values in a tabular data set (rows and columns) with missing numerical values? In 2018, Yoon et al. proposed the GAIN framework, a Generative Adversarial Network tailored for numerical data imputation. Here is the link to the original paper: https://arxiv.org/abs/1806.02920 Unfortunately, after much trial and error (and even after contacting the authors of the paper), it appears to me that traditional imputation methods -- like the kNN imputer or MissForest -- provide better missing-value estimates than GAIN. That said, the authors of GAIN claim state-of-the-art results. I attach their table showing the main results. The table shows the mean normalized Root Mean Square Error (RMSE) of 6 data imputation methods on 5 numerical data sets, averaged over 10 repetitions, with parameters optimized via a 5-fold cross-validation scheme. My questions are: Did anyone manage to obtain satisfactory imputation results with GAIN? If yes, are those results really better than MissForest (which is one of the best numerical imputation methods in my experience)? If not, then how should we interpret Table 2 from the paper GAIN: Missing Data Imputation using Generative Adversarial Nets (Yoon et al., 2018)?
Suppose that the DGP is an AR(p) with a unit root. When we fit, using OLS, an AR(1) model $x_t=\alpha x_{t-1}+u_t$ to the data, the estimate $\hat{\alpha}$ converges to 1, indicating, correctly, that this is a unit root process. I've been trying to prove this, but so far I've not been successful. Can someone give some hints?
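A minimal simulation sketch of the claim above (assuming, purely for illustration, an AR(2) DGP with a unit root, $\Delta x_t = 0.5\,\Delta x_{t-1} + e_t$):

```r
set.seed(1)
n  <- 10000
e  <- rnorm(n)
dx <- stats::filter(e, 0.5, method = "recursive")  # stationary AR(1) in first differences
x  <- cumsum(dx)                                   # integrate: unit-root AR(2) in levels
coef(lm(x[-1] ~ 0 + x[-n]))                        # OLS AR(1) slope, very close to 1
```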
So I am trying to validate my Stan model before using real data and am having some trouble estimating parameters separately. My data structure contains count data with people on the rows and test items in the columns. I am trying to use a covariate for item type (for example, multiple choice) and see how this affects my difficulty parameters. Normally, I would just have the following:

    for (i in 1:n_examinee) {
      for (j in 1:n_item) {
        lambdas[i,j] = exp(theta[i] + raw_difficulty[j]);
        target += poisson_lpmf(Y[i,j] | lambdas[i,j]);
      }
    }

With the addition of the covariate on the item difficulty we now have:

    for (i in 1:n_item) {
      item_difficulty[i] = raw_difficulty[i] + dot_product(X[i], beta_difficulty);
    }
    for (i in 1:n_examinee) {
      for (j in 1:n_item) {
        lambdas[i,j] = exp(theta[i] + item_difficulty[j]);
        target += poisson_lpmf(Y[i,j] | lambdas[i,j]);
      }
    }

In this case X[i] refers to the row of a dummy matrix containing item types, indicating which items are of which type. This matrix also had a column removed to use as a reference category. beta_difficulty contains the covariate estimates for the remaining covariates in X. So, for example, the first item may be item_difficulty[1] = .3 + [1,0] * [-.25,.63]. However, when comparing my estimates against my true parameters, it is clear there are identifiability problems. My estimates for raw_difficulty and beta_difficulty are not near their respective true values; however, their sum, item_difficulty, is. This logically makes sense, as there are infinitely many values that could satisfy the equation. What can I do to solve this problem? While item_difficulty matches the true values, I want to be able to tell how severe the effects of the item type are, so I would need reliable beta_difficulty estimates. Any suggestions on how to solve this problem?
Hi StackExchange community, I am performing a Principal Component Analysis (PCA). I would like to know how to relate some PCA components to other variables that were not included in the PCA. I have a nutritional survey with 60 questions that was applied to 420 people. The frequency of consumption was measured in servings and is standardized for each type of food. I have clearly identified components using the following criteria: (a) components selected by eigenvalue > 1.5; (b) varimax rotation loadings > 0.2 per variable. The results of PCA + varimax rotation: ... PC1: Orange, Apple, Watermelon; PC2: Homemade fries, Mayonnaise, Pizza; PC3: Eggs, Walnuts, Hazelnuts; PC4: Whitefish, small fatty fish, big fatty fish ... Then, I want to know if it is possible to carry out post-PCA statistical analysis with each subject's standardized varimax-rotated component scores and cross that information with other confounding variables such as sex, age, education level, etc. This table illustrates what I want to compute: https://ijbnpa.biomedcentral.com/articles/10.1186/s12966-016-0353-2/tables/4 Other studies where a similar approach was applied: https://www.mdpi.com/2072-6643/13/1/70#app1-nutrients-13-00070 https://www.cambridge.org/core/journals/british-journal-of-nutrition/article/comparison-of-cluster-and-principal-component-analysis-techniques-to-derive-dietary-patterns-in-irish-adults/2130E0404EA1C0AC9CF4382839DE3498 Can I recover the position of the subjects in the components? I tried to do something using the info in this link, but I'm not sure if it's correct. I think that with this step I could compute an ANOVA or chi-square test against confounding variables such as sex, education, dietary calories, etc.: How to compute varimax-rotated principal components in R?

    # Code for RStudio
    library(factoextra)

    # PCA
    prc <- prcomp(df, center = TRUE, scale = TRUE)
    prc$sdev^2  # choose components with eigenvalues > 1.5

    # Varimax and loadings
    varimax_df <- varimax(prc$rotation[, 1:4])
    varimax_df$loadings
    varimax_df$rotmat

    # Scaling components per row: standardized scores for each row
    newData <- scale(df) %*% varimax_df$loadings

Thanks!
To compare two treatments (independent variable = intervention A vs. intervention B), the dependent variables are lipid levels (i.e., LDL, HDL, TC, TG) measured on continuous scales. Blood samples were collected three times, at varying time points. What is the best test to calculate the mean change for each dependent variable while factoring in the different measurement time intervals?
A dataset has 2 or more time series; e.g., with two time series x and y, I need to estimate the slope and intercept of a linear regression model between x and y. But my data can have outliers. My approach:

1. Calculate the Mahalanobis distance using MCD.
2. Flag a point as an outlier if its distance exceeds some threshold value. For the threshold I tried quantile and chi-square cutoffs, but they do not identify the outliers for all datasets; by playing with the threshold I can find them, but I want to automate this.
3. After finding the outliers, remove them from the dataset.
4. Then build the model using linear regression.

My problem is how to choose the threshold value for removing outliers so that it works in all cases, or at least for the slope and intercept calculation. I cannot check a plot of the data every time, because I am writing a Python script that will build the best model between the time series and output the slope and intercept. I have also explored robust regression, but I want to keep this simple.
While building an ML model I need to detect covariate shift, so I have to compare old labelled data and new unlabelled data. I'm planning to use KL divergence to quantify the distance between these two distributions. However, KL divergence is asymmetric, so which distribution should be $P(x)$ and which $Q(x)$? My intuition is that $P(x)$ should be the new, unlabelled (inference) data and $Q(x)$ the old, historic (training) data. KL divergence is defined like this: $$ \DeclareMathOperator {\KL}{KL} \KL(P \| Q) = \int_{-\infty}^\infty P(x) \log \frac{P(x)}{Q(x)} \; dx $$
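To make the asymmetry concrete, a minimal sketch with two made-up discrete distributions (illustrative numbers only):

```r
kl <- function(p, q) sum(p * log(p / q))
P <- c(0.7, 0.2, 0.1)   # e.g. new (inference) data
Q <- c(0.4, 0.4, 0.2)   # e.g. old (training) data
kl(P, Q)  # KL(P || Q)
kl(Q, P)  # KL(Q || P) -- generally a different number
```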
I'm trying to implement the (R-)ALoKDE algorithm for the density estimation of the data streams. The algorithm has been published and presented in [1, 2]. Although the algorithm seems simple, I'm struggling to implement it, even in the 1D case. My current (1D) implementation can be found on GitHub [3]. I believe that my problems may be due to my lack of statistical knowledge -- hence I'm looking for some guidance here. Imagine a Kernel Density Estimator (KDE) that's a weighted sum of standard KDEs -- let's name it WS-KDE. I've read that -- to sample from a simple KDE -- one can randomly select one of its kernels and draw a sample from it. Is that also correct for the WS-KDE? Can I select the simple KDE (according to weights) first and then a kernel from the selected KDE? If not, what's the proper way to draw a sample from WS-KDE? ALoKDE algorithm -- as far as I understand -- works in the following way. First, it detects if the concept drift (stream non-stationarity) occurred. If so, then a local estimator $$ \hat{f}^{kde}_t = \frac{1}{m_t +1} \left( \sum^{m_t}_{i=1} \frac{1}{h^d_{D_t}} K \left( \frac{||\textbf{x} - \textbf{x}_t^{(i)}||}{h_{D_t}} \right) + \frac{1}{h^d_{\textbf{x}_t}} K \left(\frac{||\textbf{x} - \textbf{x}_t||}{h_{\textbf{x}_t}} \right) \right) $$ is created. The $h$ parameters are easy to compute, $\textbf{x}_t$ is the new sample from the stream, and $\textbf{x}^{(i)}_t$ are local samples, (see pt. 3). I believe the rest of the symbols are self-explanatory, but I'll edit the post again if needed. At this moment, I should have 2 KDEs -- the KDE from the previous step $\hat{f}_t$ and the local one $\hat{f}_t^{kde}$. I then make a weighted sum of them $$\hat{f}_{t+1}(\textbf{x}) = \lambda_t \hat{f}^{kde}_t(\textbf{x}) + (1 - \lambda_t) \hat{f}_t(\textbf{x})$$ where $\lambda_t$ is the weight computed via the formula $$ \lambda_t = max \left(0, min \left(1, \frac{B_t - C_t}{A_t + B_t - 2C_t} \right) \right) $$ The second problem concerns finding the $\lambda_t$. Here, one has to compute $$ A_t = \int [(E(\hat{f}^{kde}_{t}(\textbf{x}; h_{D_t}, h_{\textbf{x}_t}) - f(\textbf{x}))^2 + Var(\hat{f}^{kde}_{t}(\textbf{x}; h_{D_t}, h_{\textbf{x}_t}))] dx $$ $$ B_t = \int [(E(\hat{f}_{t}(\textbf{x}) - f(\textbf{x}))^2 + Var(\hat{f}_{t}(\textbf{x}))] dx $$ $$ C_t = \int [(E(\hat{f}^{kde}_{t}(\textbf{x}; h_{D_t}, h_{\textbf{x}_t}) - f(\textbf{x}))) \cdot (E(\hat{f}_{t}(\textbf{x})) - f(\textbf{x})) + Cov(\hat{f}^{kde}_{t}(\textbf{x}; h_{D_t}, h_{\textbf{x}_t}), \hat{f}_{t}(\textbf{x})] dx $$ I can see that $A_t$ and $B_t$ are MISE of $\hat{f}^{kde}_t$ and $\hat{f}_t$ respectively, and I know how to compute them. The authors claim that $C_t$ is the covariance between $\hat{f}^{kde}_t$ and $\hat{f}_t$, and this I have no idea how to compute. I've also tried to bypass this problem by finding $\lambda_t$, which minimizes MISE of $\hat{f}^{kde}_{t+1}(\textbf{x})$, but it doesn't work correctly -- it tends to either $\lambda=1$ or $\lambda=0$ depending on MISE of which estimator ($\hat{f}_t$ or $\hat{f}_{t}^{kde}$) is smaller. I believe that one computes the MISE of $\hat{f}_{t+1}$ differently than a weighted sum of simple KDEs MISEs. My additional concern is the local sampling described in the paper (mentioned in point 2). During the update step of the algorithm, one has to draw $m_t$ local samples $\{\textbf{x}_t^{(i)}$ from the current KDE $\hat{f}_t$ that are $\tau$-close (according to some distance measure) to the sample $\textbf{x}_t$ drawn from the stream prior to the update step. 
Just for the sake of argument, I'll mention that the default is $\tau=1$. Imagine that $\hat{f}_t$ is a good estimator of $N(0, 1)$. Due to concept drift, the stream now draws data from $N(100, 1)$, so $\textbf{x}_t$ would be a value close to 100. How can I efficiently and numerically draw samples that are so deep in the tail of the distribution? Currently, I test my implementation on the stationary standard normal distribution $N(0, 1)$. Solving these two issues will allow me to implement what I need. Any help is greatly appreciated. [1] https://link.springer.com/article/10.1007/s13042-021-01275-y [1, free and public] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8210923/ [2] https://ieeexplore.ieee.org/abstract/document/8621923 [3] https://github.com/Tomev/ALoKDE
I would like to do a DIF analysis on a scale. The problem is that I have missing data for 10 items in the focal group. The questionnaire in question exists in two forms: a 15-item version and a 25-item version. When we collected the data from different researchers, we realized that they had not all used the same version: about 100 participants filled out the 15-item questionnaire and 100 others the 25-item one (which includes the 15 items of the first one). That makes about 20% of the data missing. I don't know if I can still do something with it, and I can't tell whether I am in a MAR (missing at random), MCAR (missing completely at random), or MNAR (missing not at random) situation. What do you think? Thanks for your help.
Is there a way to demonstrate the strength of a hierarchical Bayesian model versus a non-hierarchical Bayesian model on simulated data? I'm ideally looking for a plot showing that the hierarchical Bayesian model succeeds better at detecting a difference/effect than the non-hierarchical model as we keep decreasing the amount of data. An effect is detected if the pairwise difference between estimated quantities is $> 0$ with probability $0.95$, so this can be calculated from both models. Is there a way to demonstrate this using simulated data?
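A minimal data-generating sketch for such a comparison (all names and numbers here are assumptions for illustration: J groups with true effects drawn from a common distribution; the two models would then be fit, e.g. with brms or rstanarm, at shrinking per-group sample sizes):

```r
set.seed(42)
J <- 8; n_per_group <- 5                 # shrink n_per_group to study the low-data regime
mu <- 0; tau <- 1; sigma <- 2
theta <- rnorm(J, mu, tau)               # true group-level effects
d <- data.frame(group = factor(rep(1:J, each = n_per_group)),
                y     = rnorm(J * n_per_group,
                              mean = rep(theta, each = n_per_group), sd = sigma))
# Fit (1) a hierarchical model y ~ (1 | group) and (2) a no-pooling model y ~ 0 + group,
# then compare Pr(pairwise difference > 0) across decreasing n_per_group.
```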
Let the DGP be given as: $$X_t\sim t^2\chi_1$$ with all $X_t$ independent. Based on simulations, an ADF test fails to detect non-stationarity (i.e. it does not find a unit root). This makes sense, since ADF is a parametric test that assumes a completely different model than the DGP above. I'm looking for some other test that will reliably detect the non-stationarity of the DGP above, based on, for example, a sample of size 100 drawn at $t=1,\dots,100$.
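A sketch of the kind of simulation referred to above (assumptions: $\chi_1$ read as a $\chi^2_1$ draw, adjust if a chi distribution was meant, and the tseries package used for the ADF test):

```r
library(tseries)
set.seed(1)
t <- 1:100
x <- t^2 * rchisq(length(t), df = 1)  # X_t = t^2 * chi^2_1, independent across t
adf.test(x)                           # per the simulations described in the question,
                                      # this tends not to find a unit root
```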
I already have a SARIMA model working at my company (an e-commerce) to predict sales, and right now I am trying to improve it. The current model uses only the endogenous variable (i.e. the sales series). The commercial teams claim that stock information should be really useful for predicting sales, as it is very correlated with our goal metric (they have a study on it). That being said, I am trying to include this exogenous variable in the model, but I am not getting any improvement (actually it is getting a little worse). I have already shown them the result, but they are not convinced, and I also don't know how to explain why this happens. Can someone help me understand it?
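For reference, a minimal sketch of the kind of comparison described above (the objects `sales_ts` and `stock_ts` and the use of the forecast package are assumptions; the actual SARIMA specification would come from your existing pipeline):

```r
library(forecast)
fit_endog <- auto.arima(sales_ts)                   # endogenous-only seasonal ARIMA
fit_exog  <- auto.arima(sales_ts, xreg = stock_ts)  # same search with stock as a regressor
AIC(fit_endog); AIC(fit_exog)                       # or, better, compare out-of-sample accuracy()
```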
Per the regression model: $\mathbf{y} = f(\mathbf{x},\mathbf{\beta}) + \mathbf{\epsilon}$ Where the $\beta$ estimate of LAD regression is given by: $ \hat{\beta}_{LAD} = \text{argmin}_{ b} \sum_{i=1}^n |y_i - f(\mathbf{b},x_i)|$ Now, some background, per Wikipedia on the topic of Iteratively Reweighted Least Squares, to quote: Iteratively reweighted least squares (IRLS) is used to solve certain optimization problems with objective functions of the form of a p-norm. And further: IRLS is used to find the maximum likelihood estimates of a generalized linear model, and in robust regression to find an M-estimator, as a way of mitigating the influence of outliers in an otherwise normally-distributed data set, for example, by minimizing the least absolute errors rather than the least square errors. One of the advantages of IRLS over linear programming and convex programming is that it can be used with Gauss–Newton and Levenberg–Marquardt numerical algorithms. I am particularly interested in applying the least absolute deviation (or LAD and also referenced as ℓ1 norm) per an iterative least-square solution approach with weights equal to the respective reciprocal of the absolute value of the observed residues (or, more prudently, the max of some small value, like 0.0001, and the absolute value of the residue as created from the difference of actual observed and currently fitted per the centered data set). Interestingly, this recommended weighting scheme is noted in the Wikipedia cited reference, as equivalent to employing a Huber loss function in the context of robust estimation (albeit, its use in the attached spreadsheet was not deemed apparently currently necessary). Now, an issue I have with normally applied LAD regression arises, to quote from Wikipedia on the topic of Least Absolute Deviations: Checking all combinations of lines traversing any two (x,y) data points is another method of finding the least absolute deviations line. Since it is known that at least one least absolute deviations line traverses at least two data points….. Or, I would restate per above, any possible arbitrary two data points in the data set, which may not be an intuitively best-suited alternative (as an intercept related to measures of a central tendency). In particular, my suggested improvement to the art of applying LAD is, relating to, say, a two-parameter robust regression model, especially with a limited number of data points and especially perhaps some evident outlier presence, is, in a manner paralleling Least-Squares (that forces the intercept parameter through the mean of Y and X) is to reduce the dimensionality of the respective LAD regression, while also promoting robustness, along with being consistent with both an intuitive interpretation and possible prior expectation, by proscribing the respective LAD intercept passes through the median of Y and X, namely here Y_bar and X_bar. That is, substitute Y_bar - Beta*X_bar for the intercept term in the LAD regression model, which interestingly and more simply equates to transforming said Y and X by centering them around their respective medians, producing new centered variables Y’ and X’ in a no-intercept model. Further, dividing both Y’ and X’ by the square root of the first Least Squares’ one parameter regression model of Y’ versus X’ with starting weights of 1, results in two new variables Y’’ and X’’ whose simple one parameter regression produces a new Beta estimate. 
Using this Beta on X’ and subtracting Y’ forms a pointwise residual; taking the square root of its absolute value gives a quantity whose reciprocal is used to again transform X’ and Y’ into X" and Y" for the next step of simple one-parameter Least Squares, resulting in a new Beta. Repeat until sufficient convergence is achieved. Note: One can construct (see the first cited Wikipedia reference for the iterative matrix approach) or employ your existing software package for higher-dimensional robust LAD regressions with several explanatory variables by simply using the median-centered data in a no-intercept standard LS regression model, repeatedly producing new coefficients for all X-variables and their associated residuals for weight determination and a new Beta (for a further iteration on transformed data per standard LS analysis, or as an IRLS model, if available). The above-described process is freely available for all as an illustrative two-parameter least absolute deviation regression model in a spreadsheet format at this link. Just change the data to observe the resulting robustness of the suggested median data-centered process, which removes the intercept term. One can self-augment the analysis by also adding a normally (or otherwise) distributed error term. From a theoretical perspective, if one defines a variable Z = Y - Beta*X, then the ℓ1-norm of Z minus its median gives, by definition, the MAD (median absolute deviation) of Z, which is a known robust statistic (see, for example, Wikipedia on "Median absolute deviation"). The only argument I can envision against my median data-centered approach for more robust LAD regression is the very special case where one has particular knowledge of the regression's intercept term (i.e., the value of Y when X equals zero) that deviates from the assumed function of the associated medians.
To illustrate the problem, imagine I'm drawing labelled spheres from a box. I may or may not know the number of spheres in the box (does it make a difference?). If I draw 10 spheres from the box and they are all different, what did I learn about the distribution of the probability of drawing each label? They may be just 10 out of a zillion other labels, or maybe there exist exactly 10 labels and by luck we sampled exactly one of each. A follow-up question is when we have two boxes. We don't know if the draws from those boxes are independent or not. If I keep sampling pairs of labels that haven't shown up before, we don't know whether these labels were sampled by chance or not. In case it matters: my final goal is to draw conclusions about entropy and mutual information.
I'm learning to build a book recommendation system but I am facing some difficulties evaluating the model. I chose the item-based collaborative filtering strategy. The dataset is a matrix (book, user) filled with the ratings of the books. The dataset is something like:

            user_a  user_b  user_c  ...  user_x
    book_1       0       3       5  ...       4
    book_2       2       1       0  ...       0
    book_3       0       0       0  ...       2
    ...

Below is the code to train the model.

    from sklearn.neighbors import NearestNeighbors

    model = NearestNeighbors(metric='cosine', algorithm='brute')
    model.fit(dataset)

    # get the index of a book that contains 'harry potter' in its name
    title = 'Harry Potter'
    mask = books['title'].str.contains(title)
    book_isbn = books[mask]['isbn']
    mask = X.index.isin(book_isbn)
    book_reference = X[mask].head(1)

    # find the 5 nearest books from 'harry potter'
    k = 5
    distances, indices = model.kneighbors(book_reference.values, n_neighbors=k+1)

Well, the thing is that kneighbors() returns the distances of the 5 nearest vectors (books) from book_reference and their indices. I don't know how to evaluate the performance of this model since it's not making predictions. How can I do this?
I am trying to create a dataset where columns 2, 3, 4 are correlated (0.98, 0.97, 0.96, respectively) with column 1. Right now I have this code:

    library(MASS)
    X <- mvrnorm(20, mu = c(5,6), Sigma = matrix(c(1,0.98,0.98,1), ncol = 2), empirical = TRUE)
    cor(X)
    Y <- mvrnorm(20, mu = c(5,6), Sigma = matrix(c(1,0.97,0.97,1), ncol = 2), empirical = TRUE)
    cor(Y)
    Y <- Y[,2]
    Z <- mvrnorm(20, mu = c(5,6), Sigma = matrix(c(1,0.97,0.97,1), ncol = 2), empirical = TRUE)
    cor(Z)
    Z <- Z[,2]
    data <- cbind(X, Y, Z)
    cor(data)

and it produces this correlation matrix:

                                    Y             Z
       1.00000000  0.9800000000  0.0826655886  -0.4293286
       0.98000000  1.0000000000  0.0009559618  -0.5221029
    Y  0.08266559  0.0009559618  1.0000000000   0.1847887
    Z -0.42932859 -0.5221029358  0.1847886713   1.0000000

I would like the final outcome to look like the matrix below. It doesn't matter how columns 2, 3, 4 are correlated with each other (i.e. the X entries), as long as they have the right correlation with the first column.

    1     0.98  0.97  0.96
    0.98  1     X     X
    0.97  X     1     X
    0.96  X     X     1

Thanks for your help!
Here I generate a dataset where measurements of the response variable y and covariates x1 and x2 are collected on 30 individuals through time. Each individual is denoted by a unique ID. The observations are collected in hourly increments, but are only available for given individuals (IDs) when they are present during the respective hour (thus creating irregularities in each time series).

    library(tidyverse)
    library(lubridate)
    library(data.table)

    set.seed(123)
    TimeSeries <- data.table(tm = rep(seq(as.POSIXct("2021-01-23 01:00"),
                                          as.POSIXct("2021-10-27 17:00"), by = "hour"), 30),
                             ID = factor(rep(paste("ID_", c(1:30), sep = ""), each = 6664)),
                             Obs = sample(c(NA, 1), 199920, prob = c(0.7, 0.3), replace = TRUE))
    TimeSeries <- TimeSeries[Obs == 1]

    # explicitly making large gaps at the beginning of some time series to illustrate that
    # some individuals do not appear for the first time until later in the time series
    TimeSeries <- TimeSeries %>%
      dplyr::filter(!(ID == "ID_2" & tm < as.POSIXct("2021-5-27 10:00")),
                    !(ID == "ID_5" & tm < as.POSIXct("2021-3-10 15:00")),
                    !(ID == "ID_6" & tm > as.POSIXct("2021-3-10 15:00")),
                    !(ID == "ID_5" & tm > as.POSIXct("2021-6-10 23:00")))

    # response variable
    TimeSeries[, y := rnorm(nrow(TimeSeries))]
    # predictors x1 and x2
    TimeSeries[, x1 := rnorm(nrow(TimeSeries))]
    TimeSeries[, x2 := rnorm(nrow(TimeSeries))]
    # now irrelevant so remove:
    TimeSeries[, Obs := NULL]

We wish to fit a linear mixed-effects model to determine whether changes in x1 and x2 have an effect on the response y, while allowing for variation across individuals with a random intercept. I demonstrate this with nlme:

    mod <- lme(y ~ x1 + x2, random = ~1|ID, data = TimeSeries, method = "ML")

However, we suspect that when IDs are present in consecutive (or close) hours, the residuals will be heavily autocorrelated (within IDs). Thus we would like to check this assumption with ACF/PACF plots, and explore different correlation structures if it is a problem. (Note: obviously this will not be true for the simulated data above; it is just to illustrate the structure of my data.) I am unsure of the appropriate method to look at autocorrelation in this case, and to calculate confidence bands. My understanding is that the nlme::ACF function will respect the grouping structure of random effects, but does not calculate the correct autocorrelation function with irregular or missing values (the latter of which is the apparent issue here). Is this true even with the inclusion of na.action = na.omit in the ACF call? Or is there a more appropriate method? Moreover, assuming autocorrelation is an issue, does the structure of this data require the use of continuous correlation structures (e.g., corCAR1), or is it reasonable to use corAR1 and corARMA if we do something like specify the time variable as the number of hours since the first observation? For example:

    TimeSeries[, TimeSinceFirst := as.numeric(difftime(tm, min(TimeSeries$tm), units = "hours"))]
    lme(y ~ x1 + x2, random = ~1|ID,
        correlation = corAR1(form = ~TimeSinceFirst | ID),
        data = TimeSeries, method = "ML")

I have worked through several examples (including those available here, here, and [here](https://bbolker.github.io/mixedmodels-misc/ecostats_chap.html)), but I can't seem to find any that deal with gaps and irregularities within each time series like those I have presented above.
As precipitation prediction models can only predict positive values, they won't be able to undershoot small values by much. When it comes to overshooting, there is no boundary. High precipitation values can essentially be overshot and undershot equally, unless a model predicts ridiculously large amounts. Furthermore, if previous weather has been dry, simple models such as the moving average can easily predict zero values. This is the issue I'd like to address. I've come up with a custom variant of the RMSE (cRMSE). Would this address the issue?

    np.sqrt(np.mean((y_true - y_pred)**2 + w * np.exp(-np.abs(y_true))))

The cRMSE is a custom variant of the Root Mean Squared Error (RMSE) metric. This could be a potentially useful approach for precipitation forecasting, as it incorporates an additional weighting factor $w \in (0, 1)$ applied to values of y_true close to zero. The cRMSE metric could be useful in cases where you want to give less weight to values close to zero, for example, in situations where predicting zero values accurately is considered less important than predicting non-zero values. The weighting factor w allows adjusting the impact of the additional term in the error metric, and you can experiment with different values of w to find the best balance between accuracy for non-zero values and tolerance for zero values.
In a regression setting with input-output pairs $(x_n, y_n)$ for $n =1, . . . , N$, where the inputs $x_n = (x_{n,1}, . . . , x_{n,D})$ are generated by: $$x_{n,d} \sim N(0, s_d/N),$$ for dimension $d = 1, . . . , D$. $X$ denotes the input matrix and $X^TX$ is a diagonal matrix with diagonal elements $(s_1, . . . , s_D)$. How can you show that the estimated ridge weights simplify to: $$\hat{w}_d^{Ridge} = \frac{s_d}{s_d + \lambda} \hat{w}_d^{LS}$$ for $d = 1, . . . , D$, where $\lambda$ denotes the ridge penalty parameter and $\hat{\mathbf{w}}^{LS}$ denotes the least squares estimates of the regression weights?
Let $X$ and $Y$ be real-valued random variables, and define a truncation operator as
$$X(\tau) = (|X| \wedge \tau)\,\operatorname{sign}(X), \quad \tau > 0.$$
Now, I am not sure how to show the inequality
$$\mathbb{E}\left[X Y\right]-\mathbb{E}\left[X(\tau) Y(\tau)\right] \leq \mathbb{E}\left[\left|XY\right|\left(\mathbb{I}\left\{\left|X\right| \geq \tau\right\}+\mathbb{I}\left\{\left|Y\right| \geq \tau\right\}\right)\right].$$
Each time that I wanted to build mediation models a-b-c symbols were very confusing, especially in more complex SEM relations. Therefore, I made some pre-built models (for lavaan in r) that I propose for better name-representation. Please, feel free to comment on that or to propose changes. Note that the pre-built example may contain some errors but they are easily corrected. The advantage is that all indirect, direct, total effect are "ready" calculated, and you dont need to change anything each time that you need such mediation models in basic form. You only need to replace variables only once in Y and M models. Also they are very easily understood what they mean (at my side). subnote: do you think that some function can automate the generation of these relations to n "X"s, to n"Med"s and to n"Y"s by simply provide as input xs, meds, & ys? M=Mediator i= item y = y variable x = x variable d = direct pathX (x to mediator ) m = mediator pathY (mediator to y ) de = direct effect ind = indirect path Example: ind1M1y1 = indirect path of Mediator1 on y1 dx2M2 = direct path of x2 to Mediator2 total2M2y1 = total effect of Mediator2 on model y1 de_toty1 = direct effect - total on model y1 etc. #3x - 3Meds - 4Ys multipleX_Y_MEDs <- " #visual textual speed as multiple x vars visual =~ i1a + i2a + i3a textual =~ i1b + i2b + i3b speed =~ i1c + i2c + i3c # y vars ~ x vars + mediation vars Y1 ~ d1y1*visual + d2y1*textual + d3y1*speed + m1y1*M1 + m2y1*M2 + m3y1*M3 Y2 ~ d1y2*visual + d2y2*textual + d3y2*speed + m1y2*M1 + m2y2*M2 + m3y2*M3 Y3 ~ d1y3*visual + d2y3*textual + d3y3*speed + m1y3*M1 + m2y3*M2 + m3y3*M3 Y4 ~ d1y4*visual + d2y4*textual + d3y4*speed + m1y4*M1 + m2y4*M2 + m3y4*M3 M1 ~ dx1M1*visual + dx2M1*textual + dx3M1*speed M2 ~ dx1M2*visual + dx2M2*textual + dx3M2*speed M3 ~ dx1M3*visual + dx2M3*textual + dx3M3*speed # indirect effect (dx()*m) - y1 ind1M1y1 := dx1M1*m1y1 ind2M1y1 := dx2M1*m1y1 ind3M1y1 := dx3M1*m1y1 ind1M2y1 := dx1M2*m2y1 ind2M2y1 := dx2M2*m2y1 ind3M2y1 := dx3M2*m2y1 ind1M3y1 := dx1M3*m3y1 ind2M3y1 := dx2M3*m3y1 ind3M3y1 := dx3M3*m3y1 # total mediation of each Mediator - y1 ind1M1y1tot := ind1M1y1 + ind2M1y1 + ind3M1y1 ind1M2y1tot := ind1M2y1 + ind2M2y1 + ind3M2y1 ind1M3y1tot := ind1M3y1 + ind2M3y1 + ind3M3y1 # total mediation for each X - y1 ind1y1x1 := ind1M1y1 + ind1M2y1 + ind1M3y1 ind2y1x2 := ind2M1y1 + ind2M2y1 + ind2M3y1 ind3y1x3 := ind3M1y1 + ind3M2y1 + ind3M3y1 # Total mediation on y1 indtoty1 := ind1M1y1tot + ind1M2y1tot + ind1M3y1tot # indirect effect (dx()*m) - y2 ind1M1y2 := dx1M1*m1y2 ind2M1y2 := dx2M1*m1y2 ind3M1y2 := dx3M1*m1y2 ind1M2y2 := dx1M2*m2y2 ind2M2y2 := dx2M2*m2y2 ind3M2y2 := dx3M2*m2y2 ind1M3y2 := dx1M3*m3y2 ind2M3y2 := dx2M3*m3y2 ind3M3y2 := dx3M3*m3y2 # total mediation of each Mediator - y2 ind1M1y2tot := ind1M1y2 + ind2M1y2 + ind3M1y2 ind1M2y2tot := ind1M2y2 + ind2M2y2 + ind3M2y2 ind1M3y2tot := ind1M3y2 + ind2M3y2 + ind3M3y2 # total mediation for each X - y2 ind1y2x1 := ind1M1y2 + ind1M2y2 + ind1M3y2 ind2y2x2 := ind2M1y2 + ind2M2y2 + ind2M3y2 ind3y2x3 := ind3M1y2 + ind3M2y2 + ind3M3y2 # Total mediation on y2 indtoty2 := ind1M1y2tot + ind1M2y2tot+ ind1M3y2tot # indirect effect (dx()*m) - y3 ind1M1y3 := dx1M1*m1y3 ind2M1y3 := dx2M1*m1y3 ind3M1y3 := dx3M1*m1y3 ind1M2y3 := dx1M2*m2y3 ind2M2y3 := dx2M2*m2y3 ind3M2y3 := dx3M2*m2y3 ind1M3y3 := dx1M3*m3y3 ind2M3y3 := dx2M3*m3y3 ind3M3y3 := dx3M3*m3y3 # total mediation of each Mediator - y3 ind1M1y3tot := ind1M1y3 + ind2M1y3 + ind3M1y3 ind1M2y3tot := ind1M2y3 + ind2M2y3 + ind3M2y3 ind1M3y3tot := ind1M3y3 + 
ind2M3y3 + ind3M3y3 # total mediation for each X - y3 ind1y3x1 := ind1M1y3 + ind1M2y3 + ind1M3y3 ind2y3x2 := ind2M1y3 + ind2M2y3 + ind2M3y3 ind3y3x3 := ind3M1y3 + ind3M2y3 + ind3M3y3 # Total mediation on y3 indtoty3 := ind1M1y3tot + ind1M2y3tot + ind1M3y3tot # indirect effect (dx()*m) - y4 ind1M1y4 := dx1M1*m1y4 ind2M1y4 := dx2M1*m1y4 ind3M1y4 := dx3M1*m1y4 ind1M2y4 := dx1M2*m2y4 ind2M2y4 := dx2M2*m2y4 ind3M2y4 := dx3M2*m2y4 ind1M3y4 := dx1M3*m3y4 ind2M3y4 := dx2M3*m3y4 ind3M3y4 := dx3M3*m3y4 # total mediation of each Mediator - y4 ind1M1y4tot := ind1M1y4 + ind2M1y4 + ind3M1y4 ind1M2y4tot := ind1M2y4 + ind2M2y4 + ind3M2y4 ind1M3y4tot := ind1M3y4 + ind2M3y4 + ind3M3y4 # total mediation for each X - y4 ind1y4x1 := ind1M1y4 + ind1M2y4 + ind1M3y4 ind2y4x2 := ind2M1y4 + ind2M2y4 + ind2M3y4 ind3y4x3 := ind3M1y4 + ind3M2y4 + ind3M3y4 # Total mediation on y4 indtoty4 := ind1M1y4tot + ind1M2y4tot + ind1M3y4tot # total effect on y1 total1M1y1 := m1y1 + ind1M1y1 total2M1y1 := m1y1 + ind2M1y1 total3M1y1 := m1y1 + ind3M1y1 total1M2y1 := m2y1 + ind1M2y1 total2M2y1 := m2y1 + ind2M2y1 total3M2y1 := m2y1 + ind3M2y1 total1M3y1 := m3y1 + ind1M3y1 total2M3y1 := m3y1 + ind2M3y1 total3M3y1 := m3y1 + ind3M3y1 totalM1y1 := total1M1y1 + total2M1y1 + total3M1y1 totalM2y1 := total1M2y1 + total2M2y1 + total3M2y1 totalM3y1 := total1M3y1 + total2M3y1 + total3M3y1 totaly1 := totalM1y1 + totalM2y1 + totalM3y1 # total effect on y2 total1M1y2 := m1y2 + ind1M1y2 total2M1y2 := m1y2 + ind2M1y2 total3M1y2 := m1y2 + ind3M1y2 total1M2y2 := m2y2 + ind1M2y2 total2M2y2 := m2y2 + ind2M2y2 total3M2y2 := m2y2 + ind3M2y2 total1M3y2 := m3y2 + ind1M3y2 total2M3y2 := m3y2 + ind2M3y2 total3M3y2 := m3y2 + ind3M3y2 totalM1y2 := total1M1y2 + total2M1y2 + total3M1y2 totalM2y2 := total1M2y2 + total2M2y2 + total3M2y2 totalM3y2 := total1M3y2 + total2M3y2 + total3M3y2 totaly2 := totalM1y2 + totalM2y2 + totalM3y2 # total effect on y3 total1M1y3 := m1y3 + ind1M1y3 total2M1y3 := m1y3 + ind2M1y3 total3M1y3 := m1y3 + ind3M1y3 total1M2y3 := m2y3 + ind1M2y3 total2M2y3 := m2y3 + ind2M2y3 total3M2y3 := m2y3 + ind3M2y3 total1M3y3 := m3y3 + ind1M3y3 total2M3y3 := m3y3 + ind2M3y3 total3M3y3 := m3y3 + ind3M3y3 totalM1y3 := total1M1y3 + total2M1y3 + total3M1y3 totalM2y3 := total1M2y3 + total2M2y3 + total3M2y3 totalM3y3 := total1M3y3 + total2M3y3 + total3M3y3 totaly3 := totalM1y3 + totalM2y3 + totalM3y3 # total effect on y4 total1M1y4 := m1y4 + ind1M1y4 total2M1y4 := m1y4 + ind2M1y4 total3M1y4 := m1y4 + ind3M1y4 total1M2y4 := m2y4 + ind1M2y4 total2M2y4 := m2y4 + ind2M2y4 total3M2y4 := m2y4 + ind3M2y4 total1M3y4 := m3y4 + ind1M3y4 total2M3y4 := m3y4 + ind2M3y4 total3M3y4 := m3y4 + ind3M3y4 totalM1y4 := total1M1y4 + total2M1y4 + total3M1y4 totalM2y4 := total1M2y4 + total2M2y4 + total3M2y4 totalM3y4 := total1M3y4 + total2M3y4 + total3M3y4 totaly4 := totalM1y4 + totalM2y4 + totalM3y4 # direct effects de_toty1 := d1y1 + d2y1 + d3y1 de_toty2 := d1y2 + d2y2 + d3y2 de_toty3 := d1y3 + d2y3 + d3y3 de_toty4 := d1y4 + d2y4 + d3y4 # general total of whole model tot_whole := totaly1 + totaly2 + totaly3 + totaly4 ind_whole := indtoty1 + indtoty2 + indtoty3 + indtoty4 de_whole := de_toty1 + de_toty2 + de_toty3 + de_toty4 " and #2X - 3Meds - 4Ys multipleX_Y_MEDs <-" #visual textual speed as multiple x vars visual =~ x1 + x2 + x3 textual =~ x4 + x5 + x6 Y1 ~ d1y1*visual + d2y1*textual + m1y1*M1 + m2y1*M2 + m3y1*M3 Y2 ~ d1y2*visual + d2y2*textual + m1y2*M1 + m2y2*M2 + m3y2*M3 Y3 ~ d1y3*visual + d2y3*textual + m1y3*M1 + m2y3*M2 + m3y3*M3 Y4 ~ d1y4*visual + d2y4*textual + m1y4*M1 + 
m2y4*M2 + m3y4*M3 M1 ~ dx1M1*visual + dx2M1*textual M2 ~ dx1M2*visual + dx2M2*textual M3 ~ dx1M3*visual + dx2M3*textual # indirect effect (dx()*m) - y1 ind1M1y1 := dx1M1*m1y1 ind2M1y1 := dx2M1*m1y1 ind3M1y1 := dx3M1*m1y1 ind1M2y1 := dx1M2*m2y1 ind2M2y1 := dx2M2*m2y1 ind3M2y1 := dx3M2*m2y1 ind1M3y1 := dx1M3*m3y1 ind2M3y1 := dx2M3*m3y1 ind3M3y1 := dx3M3*m3y1 # total mediation of each Mediator - y1 ind1M1y1tot := ind1M1y1 + ind2M1y1 + ind3M1y1 ind1M2y1tot := ind1M2y1 + ind2M2y1 + ind3M2y1 ind1M3y1tot := ind1M3y1 + ind2M3y1 + ind3M3y1 # total mediation for each X - y1 ind1y1x1 := ind1M1y1 + ind1M2y1 + ind1M3y1 ind2y1x2 := ind2M1y1 + ind2M2y1 + ind2M3y1 ind3y1x3 := ind3M1y1 + ind3M2y1 + ind3M3y1 # Total mediation on y1 indtoty1 := ind1M1y1tot + ind1M2y1tot + ind1M3y1tot # indirect effect (dx()*m) - y2 ind1M1y2 := dx1M1*m1y2 ind2M1y2 := dx2M1*m1y2 ind1M2y2 := dx1M2*m2y2 ind2M2y2 := dx2M2*m2y2 ind1M3y2 := dx1M3*m3y2 ind2M3y2 := dx2M3*m3y2 # total mediation of each Mediator - y2 ind1M1y2tot := ind1M1y2 + ind2M1y2 ind1M2y2tot := ind1M2y2 + ind2M2y2 ind1M3y2tot := ind1M3y2 + ind2M3y2 # total mediation for each X - y2 ind1y2x1 := ind1M1y2 + ind1M2y2 + ind1M3y2 ind2y2x2 := ind2M1y2 + ind2M2y2 + ind2M3y2 # Total mediation on y2 indtoty2 := ind1M1y2tot + ind1M2y2tot + ind1M3y2tot # indirect effect (dx()*m) - y3 ind1M1y3 := dx1M1*m1y3 ind2M1y3 := dx2M1*m1y3 ind1M2y3 := dx1M2*m2y3 ind2M2y3 := dx2M2*m2y3 ind1M3y3 := dx1M3*m3y3 ind2M3y3 := dx2M3*m3y3 # total mediation of each Mediator - y3 ind1M1y3tot := ind1M1y3 + ind2M1y3 ind1M2y3tot := ind1M2y3 + ind2M2y3 ind1M3y3tot := ind1M3y3 + ind2M3y3 # total mediation for each X - y3 ind1y3x1 := ind1M1y3 + ind1M2y3 + ind1M3y3 ind2y3x2 := ind2M1y3 + ind2M2y3 + ind2M3y3 # Total mediation on y3 indtoty3 := ind1M1y3tot + ind1M2y3tot + ind1M3y3tot # indirect effect (dx()*m) - y4 ind1M1y4 := dx1M1*m1y4 ind2M1y4 := dx2M1*m1y4 ind1M2y4 := dx1M2*m2y4 ind2M2y4 := dx2M2*m2y4 ind1M3y4 := dx1M3*m3y4 ind2M3y4 := dx2M3*m3y4 # total mediation of each Mediator - y4 ind1M1y4tot := ind1M1y4 + ind2M1y4 ind1M2y4tot := ind1M2y4 + ind2M2y4 ind1M3y4tot := ind1M3y4 + ind2M3y4 # total mediation for each X - y4 ind1y4x1 := ind1M1y4 + ind1M2y4 + ind1M3y4 ind2y4x2 := ind2M1y4 + ind2M2y4 + ind2M3y4 # Total mediation on y4 indtoty4 := ind1M1y4tot + ind1M2y4tot + ind1M3y4tot # total effect on y1 total1M1y1 := m1y1 + ind1M1y1 total2M1y1 := m1y1 + ind2M1y1 total1M2y1 := m2y1 + ind1M2y1 total2M2y1 := m2y1 + ind2M2y1 total1M3y1 := m3y1 + ind1M3y1 total2M3y1 := m3y1 + ind2M3y1 totalM1y1 := total1M1y1 + total2M1y1 totalM2y1 := total1M2y1 + total2M2y1 totalM3y1 := total1M3y1 + total2M3y1 totaly1 := totalM1y1 + totalM2y1 + totalM3y1 # total effect on y2 total1M1y2 := m1y2 + ind1M1y2 total2M1y2 := m1y2 + ind2M1y2 total1M2y2 := m2y2 + ind1M2y2 total2M2y2 := m2y2 + ind2M2y2 total1M3y2 := m3y2 + ind1M3y2 total2M3y2 := m3y2 + ind2M3y2 totalM1y2 := total1M1y2 + total2M1y2 totalM2y2 := total1M2y2 + total2M2y2 totalM3y2 := total1M3y2 + total2M3y2 totaly2 := totalM1y2 + totalM2y2 + totalM3y2 # total effect on y3 total1M1y3 := m1y3 + ind1M1y3 total2M1y3 := m1y3 + ind2M1y3 total1M2y3 := m2y3 + ind1M2y3 total2M2y3 := m2y3 + ind2M2y3 total1M3y3 := m3y3 + ind1M3y3 total2M3y3 := m3y3 + ind2M3y3 totalM1y3 := total1M1y3 + total2M1y3 totalM2y3 := total1M2y3 + total2M2y3 totalM3y3 := total1M3y3 + total2M3y3 totaly3 := totalM1y3 + totalM2y3 + totalM3y3 # total effect on y4 total1M1y4 := m1y4 + ind1M1y4 total2M1y4 := m1y4 + ind2M1y4 total1M2y4 := m2y4 + ind1M2y4 total2M2y4 := m2y4 + ind2M2y4 total1M3y4 := m3y4 + ind1M3y4 
total2M3y4 := m3y4 + ind2M3y4 totalM1y4 := total1M1y4 + total2M1y4 totalM2y4 := total1M2y4 + total2M2y4 totalM3y4 := total1M3y4 + total2M3y4 totaly4 := totalM1y4 + totalM2y4 + totalM3y4 # direct effects de_toty1 := d1y1 + d2y1 de_toty2 := d1y2 + d2y2 de_toty3 := d1y3 + d2y3 de_toty4 := d1y4 + d2y4 # general total of whole model tot_whole := totaly1 + totaly2 + totaly3 + totaly4 ind_whole := indtoty1 + indtoty2 + indtoty3 + indtoty4 de_whole := de_toty1 + de_toty2 + de_toty3 + de_toty4 "
I am trying to write/understand a conditionally-conjugate Gibbs sampler for what is essentially a linear, mixed effects model. I more or less get the conditionally-conjugate posterior for the hierarchical parts of the model (nested/random effects), but I am having trouble understanding what the conjugate posterior looks like when conditioning on one of the crossed (not nested) effects. Borrowing Gelman's notation (equations 13.7 and 13.8), a random effects model without fixed or crossed effects looks like: $$ y_i \sim \mathbf{N}\left(X_iB_{j[i]},\ \sigma^2_y\right) \\ B_j \sim \mathbf{N}\left(U_jG,\ \Sigma_B\right) \\ G \sim \mathbf{N}\left(G_0, \Sigma_0\right) $$ Where $X_i$ are the individual level predictors, $B_j$ are the effects within group $j$, $U_j$ are the group-level predictors, $G$ are the group-level regression coefficients, and the subscript $j[i]$ is "the group $j$ containing observation $i$." $G_0$ and $\Sigma_0$ are fixed hyperparameters. Building off the work of Sosa et al, I think the conditionally conjugate posterior for $B_j$ is: $$ B_j|... \sim \mathbf{N}\left[\left(\Sigma_B^{-1}+\sigma^{-2}_yX_j^\intercal X_j\right)^{-1}\left(\Sigma_B^{-1}U_jG+\sigma^{-2}_yX_j^\intercal y_j\right),\ \left(\Sigma_B^{-1}+\sigma^{-2}_yX_j^\intercal X_j\right)^{-1}\right] \\ G|... \sim \mathbf{N}\left[\left(\Sigma_0^{-1}+\Sigma_B^{-1}U^\intercal U\right)^{-1}\left(\Sigma_0^{-1}G_0 + \Sigma_B^{-1}U^\intercal B_j\right),\ \left(\Sigma_0^{-1}+\Sigma_B^{-1}U^\intercal U\right)^{-1}\right] $$ If we add fixed effects $\beta^0$ then: $$ y_i \sim \mathbf{N}\left(X_i^0\beta^0 + X_iB_{j[i]},\ \sigma^2_y\right) \\ B_j \sim \mathbf{N}\left(U_jG,\ \Sigma_B\right) $$ Gelman suggests we can express this in the form of the first model by "rolling" the fixed effects $X^0$ into $X$, $\beta^0$ into $B_j$, and setting appropriate terms in $\Sigma_B$ to zero. I can sort of see how to make this work through matrix algebra, although sampling from $\Sigma_B$ using an inverse-Wishart posterior becomes trickier. However, I'm really having trouble seeing how to sample from a conditionally conjugate posterior for non-nested, crossed effects $\gamma_k$ where the levels of $k$ do not nest within the levels of $j$. Note that Gelman suggests giving the non-nested effects an intercept of zero since any non-zero means could be folded into an intercept term within $\beta^0$. $$ y_i \sim \mathbf{N}\left(X_i^0\beta^0 + X_iB_{j[i]} + Z_i\gamma_{k[i]},\ \sigma^2_y\right) \\ \beta_j \sim \mathbf{N}\left(U_jG_B,\ \Sigma_B\right) \\ \gamma_j \sim \mathbf{N}\left(V_kG_\gamma,\ \Sigma_\gamma\right) $$ Is it possible to obtain a conjugate posterior for $B_j|\beta_0,\gamma_k,...$ and equivelanetly for $\gamma_k|\beta_0,B_j,...$? Can one simply subtract the expected values of the other parameters we are conditioning on from $y$? $$ B_j|... \sim \mathbf{N}\left[\left(\Sigma_B^{-1}+\sigma^{-2}_yX_j^\intercal X_j\right)^{-1}\left(\Sigma_B^{-1}U_jG+\sigma^{-2}_yX_j^\intercal \left(y_j-X^0\beta^0-Z_k\gamma_k\right)\right),\ \left(\Sigma_B^{-1}+\sigma^{-2}_yX_j^\intercal X_j\right)^{-1}\right] $$ EDIT To answer my own question, subtracting the conditioned-upon parameters from $y$ as above appears to be correct. See Zanella & Roberts, in particular the expressions for the posterior distributions in Section 7 of the supplementary material. I would still appreciate comments/confirmation from anyone with experience using these sorts of models!
I have the following issue: I am trying to cluster similar countries with respect to different temporal features. I have twelve different datasets, each representing a different country. Each dataset contains several features as time series. The features can differ from dataset to dataset (e.g., dataset1 = feature A, feature C, feature E, ...; dataset2 = feature B, feature H, feature L, ...; and so on). The same features can also be found in different datasets. The length of each time series is the same. I am struggling to find an appropriate ML clustering algorithm to deal with all three dimensions (i.e., time, country, feature). I tried this approach (https://www.pythonforfinance.net/2018/02/08/stock-clusters-using-k-means-algorithm-in-python/), but with it I can only cluster the feature importances against one another. Also, breaking the issue down to 2x2 dimensions (either country & feature over time, or weighted features & country) instead of 1x3 dimensions results in information loss. Is there any ML clustering algorithm in Python that can deal with all three dimensions at a time?
Before training a machine learning algorithm, it is advisable to perform feature scaling. Suppose we have a "toy" dataset where each image is composed of two pixels $x_0$ and $x_1$. Let's assume that $x_0 \approx 0$ and $x_1 \approx 1$ for all the training samples. Initially, all images will be $(0, 1)$, but after e.g. min-max normalization they become $(1, 1)$, that is, we have lost the "contrast". Are there cases where feature scaling should be performed with caution? Would a per-image normalization make more sense in the aforementioned example? How should normalization be performed? In the case of min-max normalization, do we use the min and max values across all pixels of all images? I am asking this because when we have a dataset where columns are features (e.g. height, weight, etc.) we normalize per column. Does this per-column normalization make sense for images (if we flatten an $n\times n$ image into a vector with $n^2$ entries)?
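To make the question concrete, here is a tiny sketch (in R, with made-up toy values) contrasting the two choices I am asking about, per-column (per pixel position, across images) min-max scaling versus per-image (across the pixels of each image) scaling:

# rows = images, columns = pixel positions x0 and x1 (toy values close to 0 and 1)
X <- matrix(c(0.01, 0.99,
              0.02, 0.98,
              0.00, 1.00), ncol = 2, byrow = TRUE)
# per-column min-max: each pixel position is scaled using the min/max over all images
per_col <- apply(X, 2, function(v) (v - min(v)) / (max(v) - min(v)))
# per-image min-max: each image is scaled using the min/max over its own pixels
per_img <- t(apply(X, 1, function(v) (v - min(v)) / (max(v) - min(v))))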
I want to prove the consistency of the sample correlations from canonical correlation analysis (CCA). Here is an informal statement of the theorem: Let $\textbf{X}$ and $\textbf{Y}$ be p-dimensional and q-dimensional random vectors, respectively, with joint distribution P. Let $\left( x_{1},y_{1} \right), \ldots,(x_{n},y_{n})$ be a random sample from P. Let $\textbf{r}$ be the set of population correlations between the canonical variates produced when doing CCA between $\textbf{X}$ and $\textbf{Y}$. Note that the length of $\textbf{r}$ is $|\textbf{r}|=\min(p,q)$. We use the population covariance matrices to calculate $\textbf{r}$. In data analysis, we replace the population covariance matrix with the sample covariance matrix, which results in the sample correlations $\hat{\textbf{r}}$. Then, $\hat{\textbf{r}}_{n}$ converges to $\textbf{r}$ in probability as $n$ goes to infinity. I would really appreciate any help.
In the 2015 paper "Deep Unsupervised Learning using Nonequilibrium Thermodynamics" by Sohl-Dickstein et al. on diffusion for generative models, Figure 1 shows the forward trajectory for a 2-d swiss-roll image using Gaussian diffusion. The thin lines are gradually blurred into wider and fuzzier lines, and eventually into an identity-covariance Gaussian. Table App.1 gives the diffusion kernel as: $$ q(\mathbf{x}^{(t)} \mid \mathbf{x}^{(t-1)}) = \mathcal{N}(\mathbf{x}^{(t)} ; \mathbf{x}^{(t-1)} \sqrt{1 - \beta_t}, \mathbf{I} \beta_t ) $$ The covariance of the diffusion kernel is diagonal, so each component $x_i^{(t)}$ (i.e., each pixel in the image at time step $t$) is independently sampled from a 1-d Gaussian based on the prior time step's pixel value at the same x-y location in the image. So a given pixel should NOT diffuse into neighboring pixels; instead, the action of the diffusion step is a linear Gaussian 1-d transformation of the number held in the pixel, with the mean slightly reduced and some noise added. Question: This seems inconsistent with Figure 1? Instead of the blurred line (wider and fuzzier line), we should have a line that has the same width, but exhibits more noise? In order to have a pixel diffuse into neighboring pixels, we would need a diffusion kernel with a non-diagonal covariance, so that there is nonzero covariance between components?
The experiment is designed as follows: 9 different pH treatments (controlled), with 40 marked individuals in each treatment (randomly chosen from a larger population). Measurements (length, weight, etc.) were performed on the same day on every marked individual from each pH treatment over several time points. The research question is: does the response variable change with pH? How do I approach this? What statistical test would be appropriate? A pointer in any direction would be appreciated.
I'm trying to compute the se.fit by hand. I have 30 different values of the real Y and the estimated Y. How can I calculate it? Thank you!
I have a data set that looks like this toy data library(tidyverse) data <- tibble(ID = rep(c("Billie", "Elizabeth", "Louis"), times = 1, each = 6), Group = c(rep("control", 12), rep("patient", 6)), Time = rep(c("T1", "T2"), times = 3, each = 3), Item = rep(c("a", "b", "c"), times = 6, each = 1), answer = sample(1:7, size = 18, replace = TRUE)) There are some individual participants (ID), who can be either patient or control participants (Group). The participants take part in an experiment two times (Time). At each time, they answer three items (Item), which all measure the same construct. The answer column shows their answer on a 7-point Likert scale (if you are not from the psychology world, the patients can have biopsies two times, and each time, three samples (items a, b, c) are taken). The research question is: does the group membership alter the change in answers between time points / is the change in answers between the time points different for the two groups? (are the changes in biopsied tissues different for the two groups). To analyze the data, I use the brms-package. If I wanted an easy life, I would just calculate the average answer per person and time point and continue from there. easy <- data %>% group_by(ID, Time) %>% summarize(Group = unique(Group), mean_ans = mean(answer)) To analyze with brms, my formula would then be bf(mean_ans ~ 1 + Group * Time + (1|ID)) (At least I hope so...) But life is nicer when it's complicated, so my question is: how can I specify a brms-formula that allows me to include the item-level information that is present in my data? I think what I would like to write is something like this bf(answer ~ 1 + Group * Time + (1 | Item|Time|ID)) Reading into crossed and nested random effects here and here, I was under the impression that my data are crossed, leading to the following formula: bf(answer ~ 1 + Group * Time + (1|ID) + (1|Time) + (1|Item)) But does this formula take into account the correlation structure of my data? Moving on, following this paper, I was under the impression that my data are the "crossed and nested" part of the figure. Following this track, at the end of this site is a guide as to how to specify this case in lme4, but I have a hard time translating this into brms formulas. Finally, I found this great site on country-year panel data, which I am currently exploring, but I am having a hard time translating the scenarios there to my case. I would greatly appreciate any help with this. Thank you already in advance!
I'm working on a project developing a predictive model for whether or not an individual has a (rare) disease based on some non-invasive test results. The idea is that this could help patients avoid lengthy and invasive tests. One of the non-invasive techniques is very predictive for this particular disease; if a patient has received this test and has a certain combination of results (the test returns several different results), it can be very predictive for this disease. However, only a small subset (10-20%) of patients in the dataset have received the test, as it is not offered at all facilities. What would be an appropriate way of modeling these data? I could be wrong but my intuition is that imputation doesn't make sense here. Would a random forest or gradient-boosted model work best, since they can handle "missing" data well? A colleague suggested building two models, one with and one without the data in question, but I'm not sure how that would work. Thank you!
I have a .csv data set (N = 140) that I want to perform analysis on. Specifically, I want to find a relationship between the .csv's "P" (Predictor) cells and the "T" (Target) cells (pictured below). T_Summed is simply the sum of the T cells and P_Avg is simply the average of the P cells. An observation containing these values is pictured below. Using R, nearly every combination of running the P's against the T's (one variable against another) leads to a scatter plot that looks like this: Some of them have slightly more randomness/spread, but all of them had at least some data points with the 'vertical' shape shown in the plot above. My issue lies in the fact that I have extremely minimal knowledge of statistics or data analysis, so I do not know how to run an appropriate model that will fit a plot of this shape, how to validate it, and, if using a model with multiple independent variables, how to visualize it. The only things I know how to do are run a simple linear model, a multiple linear model (don't know how to visualize this), and a potentially misused k-folds cross validation. Here's an example of how I created one of the linear models in question with repeated k-folds cross validation: ex <- trainControl(method = "repeatedcv", number = 5) simple_reg_model <- train(`T_Summed` ~ `P_Avg`, data = data, method = "lm", trControl = ex) I am afraid that my lack of knowledge in statistics will lead me to search around the internet and almost arbitrarily implement solutions that I have little understanding of, thus making my analysis faulty, inaccurate, or misleading. Due to this, I decided to consult the statistics StackExchange with a few questions: Is the code block above, at least for a linear model, an acceptable way of using repeated CV? What alternative models should I possibly begin to explore for this data? How might I go about learning about how to implement them properly? Models with multiple IVs appear to me to be difficult to plot. If anyone has a suggestion for question number 2. that involves multiple IVs, how might I go about plotting them so that they are interpretable? Another question regarding multiple IV models: I do not know how to use feature selection. If anyone has a suggestion for number 2., how would you go about selecting features for your suggestion? I very much appreciate any support. Thank you
I found a similar question here but unanswered. Given a binary treatment generated from a Bernoulli distribution with probability $p$ of success ($P(T_i=1)=p$, $P(T_i=0)=1-p$), how do I know how the error of the difference-in-means estimator scales with $p$ and $n$? I know that, writing the difference-in-means estimator as $\hat{\tau}=\frac{1}{n_1}\sum_i T_i Y_i-\frac{1}{n_0}\sum_i (1-T_i) Y_i$, I can derive its variance from the law of total variance and marginalizing as $$Var(\hat{\tau}) = \frac{1}{n} \left( p \sigma_1 + (1 - p) \sigma_{0} +p(1-p) (\mu_1+ \mu_{0})^2\right),$$ where $\sigma_1, \sigma_{0}, \mu_1, \mu_{0}$ are the variance and mean of $Y | T = 1$ and $Y | T = 0$, respectively. Is this correct? If so, then I get that the variance of the estimator scales with $p/n$, is this right? I am not sure whether this is the right way of modelling things. I found this derivation of the difference-in-means estimator variance but for complete random assignment (not Bernoulli trials) and I am not sure how to compare.
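In case it helps, here is a small simulation sketch (in R, with arbitrary illustrative means and standard deviations) of the kind of check I have been doing on how the estimator's variance scales with $p$ and $n$:

# simulate the difference-in-means estimator under Bernoulli(p) treatment assignment
sim_var <- function(p, n, R = 5000, mu1 = 1, mu0 = 0, s1 = 1, s0 = 1) {
  est <- replicate(R, {
    t <- rbinom(n, 1, p)
    y <- ifelse(t == 1, rnorm(n, mu1, s1), rnorm(n, mu0, s0))
    mean(y[t == 1]) - mean(y[t == 0])
  })
  var(est)
}
sim_var(p = 0.5, n = 200)
sim_var(p = 0.1, n = 200)  # rarer treatment: does the variance grow the way the formula predicts?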
I read the textbook https://web.stanford.edu/class/bios221/book/06-chap.html about the false discovery proportion and the p-value histogram. library("DESeq2") library("airway") data("airway") aw = DESeqDataSet(se = airway, design = ~ cell + dex) aw = DESeq(aw) awde = as.data.frame(results(aw)) |> dplyr::filter(!is.na(pvalue)) alpha = binw = 0.025 pi0 = 2 * mean(awde$pvalue > 0.5) ggplot(awde, aes(x = pvalue)) + geom_histogram(binwidth = binw, boundary = 0) + geom_hline(yintercept = pi0 * binw * nrow(awde), col = "blue") + geom_vline(xintercept = alpha, col = "red") I do not understand why the false discovery proportion is calculated with pi0 = 2 * mean(awde$pvalue > 0.5) pi0 * binw * nrow(awde)
I understand that in creating prediction intervals for point prediction we typically use the root mean-squared error (RMSE) if we meet our linearity, homoscedasticity, and normality conditions. Our normality and homoscedasticity conditions allow us to use the empirical rule to construct intervals. That is, about $95\%$ of our data at, say, $x = x_0$ is within two RMSEs of our prediction. However, in creating confidence intervals we use the standard error (the sample standard deviation divided by $\sqrt{n}$). I understand that we typically do this in parameter estimation. For example, when estimating the $\beta 's$ in linear regression, we might make a confidence interval for specific betas. Here we're essentially admitting that we don't actually know the true standard deviation, but we have a sample from which we'll estimate the standard deviation. This estimate is called the standard error. We can use this approximation even if we don't have the conditions I stated above, correct? My question, I guess, is: if we are already admitting to not knowing the standard deviation because we don't have the entire population's dataset, shouldn't we also use the standard error in our prediction intervals? It also has the benefit of not needing to meet any conditions, unlike our RMSE interval.
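To make the contrast concrete, here is a short R sketch (simulated data, arbitrary coefficients) of the two intervals I am comparing; predict() labels them "confidence" and "prediction":

set.seed(1)
x <- runif(100)
y <- 2 + 3 * x + rnorm(100)
fit <- lm(y ~ x)
new <- data.frame(x = 0.5)
predict(fit, new, interval = "confidence")  # uses the standard error of the fitted mean at x0
predict(fit, new, interval = "prediction")  # also includes the residual (RMSE-type) variability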
Does anyone know how to do model selection for a function-on-function linear model, i.e., how to find the best subset of functional covariates, in R?
I am looking at the incremental validity of the MMPI-A-RF (a psychological instrument) and MACI (another psychological instrument) in the prediction of the DSM code for depression disorders found in the clinical record prior to testing. Both psychological instruments are continuous variables, and the outcome variable (DSM code) is dichotomous (DSM depression diagnosis present: (yes/no)). How do you apply an odds ratio to determine the effect size in this example? Should I apply the odds ratio in this case? I have eight, continuous predictors in total.
Here's the sample data: Link to a .csv file To briefly explain this: grandparent is 1 if the individual is a grandparent and 0 otherwise. m_age is the individual's age. m_work is the individual's working status and m_workhour is the individual's weekly working hours. child1_female indicates whether the individual's first child is female. child_number is the number of children that the individual has. I am trying to use the fixest package to run an instrumental-variable fixed-effects regression. The variable grandparent is endogenous and its instrument is child1_female. The outcome variable is m_work. However, if I add m_age as an exogenous regressor and use the following code: ivgrandma <- feols(m_work ~ m_age | respondent_id + year | grandparent ~ child1_female, grandma) it says that "The endogenous regressor 'fit_grandparent' have been removed because of collinearity (see $collin.var)." I'm very confused about where the collinearity comes from.
I have a set of data points. The first coordinate is time and the second coordinate is energy. I am trying to figure out how the energy is decaying over time. In particular, I have to find out whether it is decaying exponentially or as a power law. I used Mathematica's FindFit to model my points as both an exponential decay and a power-law decay. It turned out that the exponential decay describes my data points better. But I am not sure if I am doing the right thing. I also plotted my data points with ListLogPlot and ListLogLogPlot. In both cases, I got a straight line. So I am a little confused about the actual behavior of my data points. Could anyone help me with this issue? I am copying my data points here. Note that I am only interested in the late-time behavior of the function, not the entire time axis. Thank you! Data1={{5,0.0210796},{7,0.0293022},{9,0.0302858},{11,0.0257149},{13,0.0182589},{15,0.0106745},{17,0.00473577},{19,0.00101295},{21,-0.000754187},{23,-0.00117344},{25,-0.000860244},{27,-0.000278088},{29,0.000293337},{31,0.00072545},{33,0.000988823},{35,0.00110603},{37,0.00111822},{39,0.00106582},{41,0.000980234},{43,0.000882181},{45,0.000783367},{47,0.000689278},{49,0.0006018},{51,0.000521108},{53,0.000446822},{55,0.000378596},{57,0.000316303}, {59, 0.000259989190761133}}
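For what it is worth, here is the kind of comparison I have in mind, written as an R sketch (rather than Mathematica) and restricted to the monotonically decaying part of the tail ($t \geq 37$), where the values are positive:

t <- seq(37, 59, by = 2)
E <- c(0.00111822, 0.00106582, 0.000980234, 0.000882181, 0.000783367, 0.000689278,
       0.0006018, 0.000521108, 0.000446822, 0.000378596, 0.000316303, 0.000259989190761133)
exp_fit <- lm(log(E) ~ t)        # straight line on a log plot     => exponential decay
pow_fit <- lm(log(E) ~ log(t))   # straight line on a log-log plot => power-law decay
AIC(exp_fit, pow_fit)            # same response (log E), so the AICs are comparable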
Note that a distribution function (cadlag etc) $F$ is said to be stochastically dominated by a distribution function $G$ if $F(x)\geq G(x)$ for all $x \in \mathbb{R}$. The following result characterizes stochastic dominance equivalently: Theorem: $F$ is stochastically dominated by $G$ if and only if for every increasing function $u$ $$\mathbb{E}_F[u(x)] \leq \mathbb{E}_G[u(x)].$$ I have seen proofs for this result when $F$ and $G$ are absolutely continuous (and thus admit densities) using integration by parts. Is there a more general proof that holds for arbitrary distribution functions/measures on the real line? To be clear, the integral definition implies the CDF one by using $u(x) = \mathbb{1}\{x \in (z,\infty)\}$ for all $z \in \mathbb{R}$. The converse direction doesn't seem immediately obvious.
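One direction I have been considering, and would like checked for arbitrary distribution functions: take $V \sim \mathrm{Unif}(0,1)$ and define $X = F^{-1}(V)$ and $Y = G^{-1}(V)$ using the generalized inverse $F^{-1}(v) = \inf\{x : F(x) \geq v\}$, so that $X \sim F$ and $Y \sim G$. Pointwise domination $F \geq G$ gives $\{x : F(x) \geq v\} \supseteq \{x : G(x) \geq v\}$ and hence $F^{-1}(v) \leq G^{-1}(v)$ for every $v$, so $X \leq Y$ almost surely. Then for any increasing $u$, $$\mathbb{E}_F[u] = \mathbb{E}[u(X)] \leq \mathbb{E}[u(Y)] = \mathbb{E}_G[u]$$ whenever the expectations exist. Is this coupling argument a valid general proof, or does it need extra care at discontinuities of $F$ and $G$?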
I'm working on a negative binomial model for count data. Unfortunately I can't provide a more detailed description because I wasn't explicitly allowed to. All I can say now is that the data is about people (social sciences). Here is my model: model = glmmTMB(Y ~ offset(log(offset.var)) + X1 + X2 + X2.squared + X3 + X2:X3 + scale(X4a) + scale(X4b) + scale(X4c) + scale(X4d) + (1|factor), dispformula = ~ offset.var, family = "nbinom2", data = data) # I included a dispersion formula because of the "residuals against predictor" plots # it also seems to help the uniformity (KS) test I used an excellent DHARMa library for diagnostics. Here are the main results (the best I've come to): The testOutliers(model_res, type = "bootstrap") comes out significantly. When I inspected the outliers, I found that there are 12 of them, they occur only on the right side (scaled residual = 1) and their problem is that they have non-zero values in dependent variable while having very low (almost zero) values in the offset variable. (But the Y values are not excessively high – those observations were excluded prior to the analysis.) Since I consider these observations valid, I decided to keep them in my analysis but I don't know if that's correct because they seem to highly confuse the overdispersion tests: > performance::check_overdispersion(model) dispersion ratio = 64208237559.269 Pearson's Chi-Squared = 64079821084150.227 p-value = < 0.001 # if I remove all 12 observations whose scaled residual == 1, I get dispersion ratio = 1.183 > DHARMa::testDispersion(model_res) DHARMa nonparametric dispersion test via sd of residuals fitted vs. simulated data: simulationOutput dispersion = 0.35968 p-value = 0.254 alternative hypothesis: two.sided # {DHARMa} test doesn't detect any problem even with the 12 observations included, unlike {performance} test > DHARMa::testZeroInflation(model_res) DHARMa zero-inflation test via comparison to expected zeros with simulation under H0 = fitted model data: simulationOutput ratioObsSim = 1.0023 p-value = 0.99 alternative hypothesis: two.sided > performance::check_collinearity(update(model, . ~ . - X2.squared - X2:X3)) Term VIF VIF 95% CI Increased SE Tolerance Tolerance 95% CI X1 1.09 [1.04, 1.20] 1.04 0.92 [0.83, 0.96] X2 1.12 [1.06, 1.22] 1.06 0.89 [0.82, 0.94] X3 1.02 [1.00, 1.37] 1.01 0.98 [0.73, 1.00] scale(X4a) 1.25 [1.17, 1.36] 1.12 0.80 [0.73, 0.85] scale(X4b) 1.12 [1.06, 1.22] 1.06 0.89 [0.82, 0.94] scale(X4c) 1.09 [1.04, 1.20] 1.04 0.92 [0.83, 0.96] scale(X4d) 1.04 [1.01, 1.22] 1.02 0.96 [0.82, 0.99] > performance::check_autocorrelation(model) OK: Residuals appear to be independent and not autocorrelated (p = 0.770). # observed values: 0 1 2 3 4 5 6 7 8 10 11 12 14 15 16 20 563 170 119 56 25 40 5 1 6 19 1 1 1 3 1 2 # predicted values: 0 1 2 3 4 5 6 7 8 9 12 13 14 15 17 18 20 21 23 41 46 220 441 176 75 46 9 11 6 6 8 2 2 3 1 1 1 1 1 1 1 1 # conditional R^2 = 0.3949, marginal R^2 = 0.3380 (if it helps) My question is: Is there anything I should do with the outliers or model performance? I would appreciate any answers or suggestions. You can download the anonymised dataset and some basic R script from here. EDIT: I've clarified the statement about outliers and narrowed the focus of the question.
I am not clear on the difference between the two concepts, but I am interested in air pollution exposure over a given period of time, and I know from the literature that lag models are used. I have also seen moving averages used for air pollution; is this the same as a lag model? If not, what is the difference between them?
Definition (Consistency) Let $T_1,T_2,\cdots,T_{n},\cdots$ be a sequence of estimators for the parameter $g(\theta)$ where $T_{n}=T_{n}(X_1,X_2,\cdots,X_{n})$ is a function of $X_{1},X_{2},\cdots,X_{n}.$ The sequence $T_{n}$ is a weakly consistent sequence of estimators for $g(\theta)$ if for every $\varepsilon>0,$ $$\lim_{n\rightarrow\infty}P_{\theta}(|T_{n}-g(\theta)|<\varepsilon)=1.$$ If $T_{n}$ converges with probability one or almost surely (a.s.) to $g(\theta)$, that is, for every $\theta\in\Theta$ $$P_{\theta}\left(\lim_{n\rightarrow\infty}T_{n}=g(\theta)\right)=1,$$ then it is strongly consistent. Strong consistency implies weak consistency. This definition says that as the sample size $n$ increases, the probability that $T_{n}$ is close to $g(\theta)$ approaches $1$. I am confused: what is the probability space ${\color{Red}{\left(\Omega,\mathcal{F},P_{\theta}\right)}}$ that those $T_{n},\,n=1,2,\ldots$, are defined on? What is the specific probability measure ${\color{Red}{P_{\theta}}}$?
I am running a regression to find the association between hospital admission (yes/no) and disease severity (mild, moderate, severe) using multinomial logistic regression. I noticed that the regression estimates differ when I fit severe vs mild (mild as the reference category) and mild vs severe (severe as the reference category). I think that's probably related to the degrees of freedom and other statistics that moderate vs mild (in model 1) or moderate vs severe (in model 2) have. My questions: In this case, is it possible to obtain consistent estimates such that, when using a variable with >2 categories as a covariate (say: A, B, C), A vs C is the inverse (1/) of C vs A? Alternatively, is it also reasonable to swap the places, so that age becomes the dependent variable and severity becomes the independent variable (note that the variables are dummy and, hypothetically in my real dataset, this is plausible, although the ideal way is to use severity as the dependent variable)? That way I can get consistent estimates where mild vs severe is the inverse of severe vs mild, but I'm confused as I am combining the severity (in this case the dependent variable becomes an independent variable) with other covariates (which are definitely independent variables). Thanks in advance.
I like to organize my studies always with the weakest hypotheses possible. In this case, I want to understand well what assumptions I should add to be able to study linear regression models in time series. I want to analyze how far I can go in the OLS context assuming that my sample is not independent. At the end of the day, in the world of time series this is one of the first assumptions that fails. So let's start with the classical assumptions of linear regression. First, suppose the true model is given by \begin{equation}\label{I}\tag{I} y= \beta_0 + x_1 \beta_1 + ...+x_K \beta_K + u= x'\beta + u, \quad E(u|x)=0 \end{equation} Now consider a sample $(y_i,x_i)_{i=1}^n$ (identically distributed but not independent) such that: $(y_i,x_i)_{i=1}^n$ satisfies (\ref{I}): $$y_i= \beta_0 + x_{i1} \beta_1 + ...+x_{iK} \beta_{K} + u_i= x_i'\beta + u_i,\quad i=1,...,n$$ in matrix notation, we have $$y= X\beta + U$$ Suppose that $X'X$ is non-singular (almost surely). With these two assumptions I can show the existence of $\hat \beta= (X'X)^{-1}X' y$. Now, suppose that $$E(U|X)=E\left( \begin{bmatrix} u_{1} \\ \vdots \\ u_{n} \\ \end{bmatrix}\Bigg| \, \begin{bmatrix} x_{11} & \cdots & x_{1K} \\ x_{21} & \cdots & x_{2K} \\ \vdots & & \vdots \\ x_{n1} & \cdots & x_{nK} \end{bmatrix} \right)=0$$ with this additional assumption, the traditional books show that $E[\hat \beta]=\beta$. And I think that an $AR(1)$ process does not satisfy this assumption. The fourth assumption in the classical linear regression model is normality and no correlation of the errors: $$U \sim N(0,\ \sigma^2 I), \quad I \,\, \hbox{identity matrix }$$ In this case, we have $\hat \beta \sim N(\beta, \sigma^2 (X'X)^{-1})$. But I think (I'm not sure) that this hypothesis fails when we do not assume independence in the sample, because: $$u_i = y_i - x_i' \beta$$ Thus, it is likely that the covariance matrix of $U$ is not $\sigma^2 I$. But this is not a problem; we can assume that $$U \sim N(0,\ \sigma^2 \Omega)$$ In this case, we have $\hat \beta \sim N(\beta, \sigma^2(X'X)^{-1} X' \Omega X (X'X)^{-1})$, assuming that $\Omega$ is known. I would still need to mention a hypothesis of efficiency, but in order not to go on too long, I stopped here. As far as I've read, many of the classic books don't treat the case where the sample is not independent. They start from the hypothesis of an iid sample. I think this is really bad, as it doesn't allow you to make a natural transition into the world of time series. So my concrete question is: are all the conclusions I mentioned true under the hypotheses of linear regression, but with a sample that is not independent? Is there something I'm getting wrong?
Suppose we want to solve $$\max_{\theta} \sum_{i=1}^N \log f(y_i|x_i; \theta, \gamma).$$ Here, $\theta$ and $\gamma$ are two parameter vectors. The problem above derives an estimate of $\theta$, taking the parameter vector $\gamma$ as given. Suppose we know that $\gamma\sim N(\mu, \Sigma)$. Then, we can plug in $\mu$ as a consistent estimate of $\gamma$ in the problem above and derive a consistent estimate of $\theta$. In this case, I believe that we should correct the standard errors for $\theta$ for the fact that the elements of $\gamma$ are random variables. However, I am not entirely sure how to correctly adjust the standard errors. Wooldridge, in "Econometric Analysis of Cross Section and Panel Data", writes that if the first-stage estimator of $\gamma$ is of the form $$ \sqrt{N}(\hat{\gamma} - \gamma) = \frac{1}{\sqrt{N}} \Sigma_{i} r_i(\gamma) + o_p(1), $$ then $$\hat{Avar}(\hat{\theta}) = (\Sigma_i \hat{H}_i)^{-1}(\Sigma_i \hat{g}_i \hat{g}_i')(\Sigma_i \hat{H}_i)^{-1},$$ where $$\hat{g}_i = \hat{s}_i + \hat{F}\hat{r}_i,$$ $s_i = \frac{\partial \log f_i}{\partial\theta}$ is the score of the likelihood and $F$ is the gradient of the score with respect to $\gamma$. Now I am wondering what an estimator for $r_i$ would possibly be in this case. In particular, one would need to calculate $r_i r_i'$, which could be equal to the covariance of $\gamma$ ($\Sigma$), but I am not entirely sure.
When I tried a two-way ANOVA analysis in SPSS, the result showed that the df is zero, and there were no values for F, Sig., or Mean Square. Does anyone know why this happened? Does it mean I need to input more data into the analysis?
My LSTM model is configured as follows: Input: 60 measurements of the same float feature over time (cell dim = 60); Output: 3 classes; Training and validation loss: cross entropy with softmax; Optimizer: Adam; Learning rate: 0.0005; Batch size: 180 (3 sequences). The classes have the following distribution in the training and test datasets: A = 49.5%, B = 49.5%, C = 1%. Class C rarely occurs and I'm only interested in classes A and B. During training, the validation loss has an opposite trend to the training loss, and at some points they seem perfectly mirrored. Also, I've noticed that the precision and recall improve when the training loss falls below a certain threshold, but that's generally where the validation loss is high. What could be causing this behavior? I started with 25 layers and saw this behavior; suspecting overfitting, I progressively decreased the number of layers: 25 LSTM layers, 1500 training samples, 1500 test samples, best F1 score = 0.564; 6 LSTM layers, 1500 training samples, 1500 test samples, best F1 score = 0.569; 1 LSTM layer, 1500 training samples, 1500 test samples, best F1 score = 0.639. Therefore, since the confusion matrix and consequent F1 remained very similar to the previous training (if not better), I kept only 1 layer to reduce the complexity of the model. Since the F1 score was very poor when testing this on a larger number of test samples (about a million), I thought of adding progressively more training and test data, also hoping that this would improve any overfitting issues: 1 LSTM layer, 3000 training samples, 3000 test samples, best F1 score = 0.568. Since with 3000 training samples the F1 score was similar to the previous model, but still poor when tested on a larger dataset, I tried to add even more data: 1 LSTM layer, 6000 training samples, 6000 test samples, best F1 score = 0.320. And here is the real problem: even though the training and validation losses are similar to previous models, the F1 is very low and does not seem to improve, even by increasing the learning rate. On this dataset I also tried to add L1 and L2 regularization, but their only effect seems to be reaching the same F1 score in fewer epochs. Adding dropout, I didn't notice any improvement. I also tried varying the batch size and learning rate, but I always got very similar results. In short: Is the divergence between the training loss and the validation loss normal? Why doesn't the model seem to work anymore on a 6000-sample dataset? What can I do to maintain the previously obtained F1 score even on the dataset of 6000 samples?
So for an assignment, I am looking at how plant foliage health (ranked on a scale of 0-5, 0 meaning the foliage is all dead, 5 meaning the foliage is all alive) has changed over time (I have data from 2008 and 2023) as an indicator of population health. It has been a while since I have done statistics, so any help on which analysis is best would really be appreciated.
When you do LASSO or ridge regression and pick the hyperparameter using cross-validation, the 1SE rule suggests selecting not the best CV result but the most penalized model that is still within 1SE of the best value. That's meant to be a good approximation to accounting for the overfitting to the validation set that occurs by picking the hyperparameters on the validation set itself. Once I move to elastic net regression (with both an L1- and an L2-penalty), it is less clear what an equivalent rule would be. That's because there will be a whole curve in 2D space forming the boundary of where you stay within 1SE of the best CV result, and it's not really clear what "more penalization" means (more L1, more L2, or some combination thereof). Is there any work that has looked into this? Are there some clever averaging approaches? For example, take solutions all along the curve and model-average them (I guess for linear regression that's as easy as averaging coefficients, taking 0 for coefficients not selected in a given model, but it involves more formal model averaging in the case of GLMs with non-linear link functions?). I'm also interested in whether we know if, with two hyperparameters, any such way of picking hyperparameters enjoys a similar "approximate optimality" to the 1SE rule.
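To illustrate what I mean by the 2D search, here is a sketch with glmnet (x and y are placeholders for my design matrix and response, and the alpha grid is arbitrary):

library(glmnet)
n <- length(y)
foldid <- sample(rep(1:10, length.out = n))   # common folds so results are comparable across alphas
alphas <- seq(0.1, 1, by = 0.1)
cvs <- lapply(alphas, function(a) cv.glmnet(x, y, alpha = a, foldid = foldid))
# within each alpha there is a well-defined 1SE lambda...
sapply(cvs, function(cv) cv$lambda.1se)
# ...but across alphas there is a whole frontier of (alpha, lambda) pairs whose CV error
# is within 1SE of the overall minimum, and it is unclear which point on it to pick
best_err <- min(sapply(cvs, function(cv) min(cv$cvm)))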
I am studying survival analysis and am trying to see if there's a way to probabilistically forecast future outcomes, using simulation or other means. In the first example below, I fit a Cox model to the complete "lung" data from the survival package, showing 1000 months of outcomes. In the second example, I adjust the "lung" data as-if I only had 500 months of survival data, creating object "lung1". Using survival analysis, how could I probabilistically forecast events for months 501-1000 for lung1, assuming I only had data for months 1-500? I've used time-series forecasting models (ETS, ARIMA, etc.) but I wonder if there's a better solution using survival analysis? A problem with these time-series models is generating negative survival outcomes which obviously is impossible. Nevertheless, I post an image below of an ETS forecast model I've used before with log adjustments to eliminate negative-value outcomes. I post simple code for the Cox survival models at the bottom. Images for "lung" and truncated "lung1" data: Example of ETS time-series model forecast (using other data): Code: # Example from http://www.sthda.com/english/wiki/cox-proportional-hazards-model library(survival) library(survminer) # status 1 = censored # status 2 = dead ### Full data set ### # Cox regression of time to death on the time-constant covariates cox <- coxph(Surv(time, status) ~ age + sex + ph.ecog, data = lung) # Plot the baseline survival function ggsurvplot(survfit(cox, data = lung), palette = "#2E9FDF", ggtheme = theme_minimal()) ### Truncate the full data set "as if" we only had the first half of the time series available # lung1 reduces study time to 500 months (from 1000) and adjusts status (via status1) at month 500 cut-off lung1 <- lung %>% mutate(time1 = pmin(time,500)) %>% mutate(status1 = if_else(time > time1,as.integer(1),as.integer(status))) # Cox regression of time to death on the time-constant covariates cox1 <- coxph(Surv(time1, status1) ~ age + sex + ph.ecog, data = lung1) # Plot the truncated survival data myplot <- ggsurvplot(survfit(cox1, data = lung1), palette = "#2E9FDF", ggtheme = theme_minimal(),xlim = c(0, 1000)) myplot$plot <- myplot$plot + scale_x_continuous(breaks = sort(c(seq(0, 1000, 250)))) myplot
My answer still unaddressed for some unknown reasons. Given the constants $\{a,b,c,d,e,f$}, I want to compute the conditional mean $\text{E}[Z|S_1,S_2]$ and the conditional variance $\text{Var}[Z|S_1,S_2]$, with: $Z=a+bX_1+cX_2+dY_1+eY_2+fY_3$ Is the following true? $\text{E}[Z|S_1,S_2]=a+b\text{E}[X_1|S_1,S_2]+c\text{E}[X_2|S_1,S_2]$ and $\text{Var}[Z|S_1,S_2]=b^2\text{Var}[X_1|S_1,S_2]+c^2\text{Var}[X_2|S_1,S_2]+d^2\sigma_{Y_1}^2+e^2\sigma_{Y_2}^2+f^2\sigma_{Y_3}^2+bc\text{Cov}[X_1,X_2|S_1,S_2]+2de\text{Cov}[Y_1,Y_2]+2df\text{Cov}[Y_1,Y_3]+2ef\text{Cov}[Y_2,Y_3]$ where $\text{Cov}[X_1,X_2|S_1,S_2]=\text{E}[X_1X_2|S_1,S_2]-\text{E}[X_1|S_1,S_2]\text{E}[X_2|S_1,S_2]$ Assume $S_1=X_1+\epsilon_{X_1}, S_2=X_2+\epsilon_{X_2}$ (where $\epsilon_{X_1}\sim \mathcal N(0,\sigma_{\epsilon_{X_1}}^2)$ and $\epsilon_{X_2}\sim \mathcal N(0,\sigma_{\epsilon_{X_2}}^2)$) and the following joint distributions: $\begin{pmatrix} X_1 \\ X_2 \end{pmatrix}$ $\sim \mathcal N$ $\bigg(\begin{pmatrix} \mu_{X_1} \\ \mu_{X_2} \end{pmatrix}, \begin{pmatrix} \sigma_{X_1}^2 & \rho_{X_1X_2}\sigma_{X_1}\sigma_{X_2}\\ * & \sigma_{X_2}^2 \end{pmatrix}\bigg)$ $\begin{pmatrix} Y_1 \\ Y_2 \\ Y_3 \end{pmatrix}$ $\sim \mathcal N$ $\Bigg(\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} \sigma_{Y_1}^2 & \rho_{Y_1Y_2}\sigma_{Y_1}\sigma_{Y_2} & \rho_{Y_1Y_3}\sigma_{Y_1}\sigma_{Y_3}\\ * & \sigma_{Y_2}^2 & \rho_{Y_2Y_3}\sigma_{Y_2}\sigma_{Y_3}\\ * & * & \sigma_{Y_3}^2 \end{pmatrix}\Bigg)$ Assume also that $(X_1,X_2)$ and $(S_1,S_2)$ are independent from $(Y_1,Y_2,Y_3)$.
I am confused about one aspect of the use of Gaussian processes for Bayesian inference. I understand that it relies on the assumption that your training and test data points follow a multivariate normal distribution for which you define a prior mean and covariance. What I don't understand is this: I believed covariance had a strict statistical definition, $\operatorname{cov}(X, Y) = \mathbb{E}\left[(X-\mu_X)(Y-\mu_Y)^\top\right]$. How is it justified statistically to just use what seems like any old function we like? I am pretty new to this, so I would appreciate it if anyone could direct me to good resources on the topic too.
I have a question on how to account theoretically for the risk of a competing event in a specific setting. Suppose we have a cohort of patients at high risk of both infection-related mortality and non-infection-related mortality. We randomize these patients to receive treatment A, which has an effect in reducing the risk of infection-related mortality, but not the non-infection-related one. The follow-up length is long, let's say 10 years. When we draw survival curves for infection-related mortality, we observe an initial reduction in the risk in patients randomised to A; however, this effect dilutes after 4 years, and the curves converge after that time. My guess is that patients receiving A are protected from infection-related mortality, but start to die of non-infection-related causes, get censored, and this influences the survival curves (as well as the incidence rate of events). Indeed, if we observe patients long enough, we will end up with all patients either having died of infection-related causes or having been censored (i.e., died for other reasons). Is this an actual problem when looking at the cause-specific risk of death, in the context of competing events? How can I account for such bias? Would a Cox regression analysis be influenced by this potential bias if we have a long enough follow-up?
I'm looking for a way to compare 2 (or more) igraph objects in R. Each is a trajectory in 3-D represented as a network of nodes and edges; the graphs do not necessarily have the same number of either, but every node has a corresponding coordinate in 3-D. I think something like Procrustes could work nicely, but it requires an equal number of points, so I'd need a pre-step like iterative closest point (ICP) to find correspondences between the networks. I fear I have gone down the rabbit hole trying to fit a solution to Procrustes (and have yet to get a sensible answer from ICP so far), and thought I'd reach out to the smart people in this community and see if there's a technique I'm missing. This is a new area for me, so I'm grateful for any advice. Thanks in advance!
This is my first post. I am not sure how to approach this problem; I would like direction or even an algebraic solution, please. I have to sort 70 different-sized items into 12 groups. Each group has the same total allowed size, and that total has a minimum and a maximum. I want to know how to work out the total number of possible permutations there are. How would I go about working this out? Thanks
I need some clarification: Is it correct that when applying a VAR or VARMA model, there are only dependent variables? For example, you will have a dependent variable $X_t$ and a dependent variable $Y_t$. Thus, there are no independent variables included in the data set.
I am using lmfit to fit a function in Python. This is my fit (shown on a log scale and a linear scale), where the histogram is my data and the dashed line the fit. The error bars are simply the sqrt of the number of counts. I would like to decide automatically whether it is a good fit; for this I am using the chi-squared test. Although the fit looks decent to me, I always have to reject it when performing the chi-squared test. So either I am doing the test wrong or my 'eye calibration for what a good fit looks like' is wrong. I am obtaining the value of chi squared from the fit by doing result.chisqr as stated in the documentation of lmfit. For this fit I get a value of 162.8 = result.chisqr. Now, to select the critical value for chi squared, I am using scipy.stats.chi2.ppf(.95, degrees_of_freedom) where degrees_of_freedom = result.nfree (for this example it is 22 = degrees_of_freedom). This gives me a critical chi-squared value of 33.9, and since result.chisqr is larger, I have to consider that this is a bad fit. Is this actually a bad fit or am I doing the test wrong? I noticed that if I use result.redchi, i.e. the reduced chi-squared statistic, then this seems to work. I am not sure if I have to use this or whether it is just a coincidence.
I asked a question originally here, but my notation was confusing and I couldn't convey it properly in terms of statistics. Statistical notation is a bit new to me, because my background is mostly related to deterministic sciences. I have i.i.d. random variables $$ Y_1, Y_2, Y_3, ..., Y_M \sim \mathcal{N}(0, \sigma^2) $$ I have a function of these random variables, as follows: $$ X_i = f(Y_i) = \left| \frac{\sin\left( N \frac{Y_i}{2} \right)}{\sin\left(\frac{Y_i}{2}\right)} \right|^2 $$ So, in principle, I can say that $X$ is also a random variable. Then, I have another variable $Z$ that is also, in principle, a random variable: $$ Z_l = \sum_{i = 1}^{M} X_i $$ Now, I have $L$ realizations of $Z$. From prior experience, I know that the distribution of $Z$ is an exponential one: $$ Z_1, Z_2, Z_3, ..., Z_L \sim \text{Exp}\left[ \frac{1}{F} \right] $$ where $F$ is the expectation of the random variable $Z$. When I try to find an expression for $F$, I use the following integral; however, I use the assumption that $M$ is a large number: $$ F = M \int_{-\infty}^{+\infty} f(y) p(y) dy $$ However, for a finite number of samples of $Z$ (that is, $L$), and probably also for finite $M$, the estimated expectation of $Z$ would itself have some variance that should go to $0$ as $L \to \infty$. Basically, I want to know the distribution of the estimated expectation of $Z$, instead of just having the expression above; something like $$ \hat{\mathbb{E}}[Z] \sim \Pi(F, ...) $$ I wrote the first entry as $F$ to emphasize that it is the mean of the distribution. Additional information: I know what $M \int_{-\infty}^{+\infty} f(y) p(y) dy$ looks like in closed form.
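To make the question more concrete, here is my current guess, which I would like checked: if the $Z_l$ really were i.i.d. exponential with mean $F$, then their sample mean should be Gamma distributed, $$ \hat{\mathbb{E}}[Z] = \frac{1}{L}\sum_{l=1}^{L} Z_l \sim \mathrm{Gamma}\!\left(L,\ \frac{F}{L}\right) $$ in the shape–scale parametrization, with mean $F$ and variance $F^2/L$, which indeed vanishes as $L \to \infty$. What I do not see is how finite $M$ enters, beyond its effect on $F$ itself.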
Suppose $Y_i=X_i'\beta+\epsilon_i$ with $E(\epsilon_i|X_i)=0$. Consider the usual OLS estimator for $\beta$ using a random sample $\{X_i,Y_i\}_{i=1}^n$: $\widehat{\beta}=(\frac{1}{n}\sum_{i=1}^nX_iX_i')^{-1}\frac{1}{n}\sum_{i=1}^n X_iY_i$. Substitute $Y_i=X_i'\beta+\epsilon_i$ into the expression gives $\widehat{\beta}=\beta+(\frac{1}{n}\sum_{i=1}^nX_iX_i')^{-1}\frac{1}{n}\sum_{i=1}^n X_i\epsilon_i$. The way to prove consistency is to show that $\frac{1}{n}\sum_{i=1}^nX_iX_i'\overset{p}{\rightarrow} E(X_iX_i')$, and $\frac{1}{n}\sum_{i=1}^n X_i\epsilon_i\overset{p}{\rightarrow} E(X_i\epsilon_i)=0$ by weak law of large numbers and then by continuous mapping theorem. Note that the weak law of large numbers only requires the existence of expected values: $E(X_iX_i')$ and $E(X_i\epsilon_i)$, where $E(X_i\epsilon_i)=E(X_iE(\epsilon_i|X_i))=0$ always hold under our model. Thus it seems that all I need to assume is that $E(X_iX_i')<\infty$ and $E(X_iX_i')$ being invertible and I only need these two assumptions. Am I right?
I'm comparing the performance of the restricted ML (REML) estimator and the Bayesian method for estimating a multilevel model by Monte Carlo simulation. As I'm a beginner in Bayesian analysis, I don't know how to specify the prior distribution. In general, it is preferable to use an uninformative prior for a fair comparison. But in the current case, since the ML estimates are available, I'm wondering if I can use them in the prior distribution. For example, suppose I have an estimate of 1 with an estimated standard error of 2 for some parameter beta. Can I then use it in the prior such that beta ~ Normal(1, 2^2) (used in both the mean and the variance) or beta ~ Normal(1, 100^2) (used in the mean only)?
We regress $Y$ on categorical data $X_i,\ i=1,\ldots,\ p$. Suppose this is a large dataset and many of the rows of the design matrix are duplicated. We reduce the dataset as follows: we average $Y$ for each unique row of the design matrix, and we perform a weighted regression with each unique row of the design matrix weighted by its number of duplicates. How do the regression coefficients and their standard errors in this reduced dataset compare to a regression on the raw data?
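A self-contained sketch (in R, with simulated categorical predictors) of the reduction I am describing, in case it helps make the comparison concrete:

set.seed(1)
raw <- data.frame(x1 = factor(sample(letters[1:3], 1000, replace = TRUE)),
                  x2 = factor(sample(LETTERS[1:2], 1000, replace = TRUE)))
raw$y <- rnorm(1000, mean = as.numeric(raw$x1) + as.numeric(raw$x2))
reduced <- aggregate(y ~ x1 + x2, data = raw, FUN = mean)         # average Y per unique row
reduced$w <- aggregate(y ~ x1 + x2, data = raw, FUN = length)$y   # number of duplicates
fit_raw     <- lm(y ~ x1 + x2, data = raw)
fit_reduced <- lm(y ~ x1 + x2, data = reduced, weights = w)
cbind(coef(fit_raw), coef(fit_reduced))                              # point estimates
cbind(summary(fit_raw)$coef[, 2], summary(fit_reduced)$coef[, 2])    # standard errors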
I'm running linear mixed models on a dataset. The assumption of homoscedasticity is not being met; however, when I remove one independent variable, it is met. So all the other variables except this one are homoscedastic. To fix this and make the data fit the model better, could I just transform the problematic independent variable by taking its cube root, and leave the others as they are? Or do I have to transform all of them if I'm transforming one variable? Also, is transforming them going to increase any kind of error rate and make inference difficult? I would really appreciate help with this, and I apologize if this is a silly question, as I'm a beginner.
I try to compare the logistic regression with XBGoost on the simulated data. What I found is that XBGoost AUC is better than that of logistic regression, even when logistic regression predict perfect probability (the probability used to generate binary outcome). Please see details below: Simulate X: generate 4 random variables (x1,x2,x3 and x4).See code section A. Similate Y: Let log_odds = x1+x2+x3+x4 (setting all coefficient to 1 and intercept to 0). Then convert log_odds to probability, and use probability to generate binary outcome. See code section B. Fit logistic regression. The estimated coefficients are very close to ones used for similation.The AUC is 0.834. coef: [[0.92180079 1.07390035 0.97258221 0.80164048]] Intercept [-0.00462648]. See code section C. Fit XGBoost. The AUC is 0.908.See code section D. Simulate testing set with different random seed. Logistic regression AUC is 0.836, and XGBoost AUC is 0.907. See code section E. As I understand, when I use simulated probability to generate binary outcomes, I was introducing randomness to the data, which could not be modelled/predicted. However, if the logistic regression already predict probabilities that are so close to the simulated ones, how could XGBoost generate better performance. Is this a problem of AUC, my test design, or my code? Thank you very much in advance! import random import numpy as np import pandas as pd import xgboost as xgb from xgboost import XGBClassifier import sklearn from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler, LabelEncoder, MinMaxScaler from sklearn.metrics import classification_report from numpy.random import seed from numpy.random import rand # Section A: Simulate X random.seed(1) seed(1) n=10000 x1=np.array(rand(n))*4-2 x2=x1+np.array(rand(n)) x3=-np.array(rand(n))*1.9 x4=np.array(rand(n))*1 print(sum(x1<=x2)==n) df=pd.DataFrame({"x1":x1,"x2":x2,"x3":x3,"x4":x4}) # Section B: Simulate Y def logistic(z): return 1 / (1 + np.exp(-z)) lp=x1+x2+x3+x4 prob = logistic(lp) y = np.random.binomial(1, prob.flatten()) # Section C: Fit logistic regression and check AUC from sklearn.linear_model import LogisticRegression LR = LogisticRegression() LR.fit(df.values,y) print("coef: ",LR.coef_, LR.intercept_) print("AUC: ",LR.score(df, y)) # Section D: Fit XGBoost and check AUC from sklearn.tree import DecisionTreeRegressor, plot_tree from xgboost import XGBClassifier from sklearn.metrics import classification_report from sklearn.metrics import roc_auc_score, auc, log_loss fit_xgb= XGBClassifier(booster='gbtree', numWorkers=1, n_estimators=10, minChildWeight=15.0, seed=1, objective='binary:logistic', maxDepth=3,eta=0.08, reg_lambda=10.0, alpha=10.0, gamma=0.0, colsampleBytree=0.7, subsample=0.8) fit_xgb.fit(df, y) xgb_prob = fit_xgb.predict_proba(df)[:,1] print('XGB AUC is', roc_auc_score(y, xgb_prob)) # Section E: Simulate testing set with different random seed, and check AUC random.seed(10) seed(10) n=10000 x1_1=np.array(rand(n))*4-2 x2_1=x1_1+np.array(rand(n)) x3_1=-np.array(rand(n))*1.9 x4_1=np.array(rand(n))*1 print(sum(x1_1<=x2_1)==n) df_1=pd.DataFrame({"x1":x1_1,"x2":x2_1,"x3":x3_1,"x4":x4_1}) lp_1=x1_1+x2_1+x3_1+x4_1 prob_1 = logistic(lp_1) y_1 = np.random.binomial(1, prob_1.flatten()) xgb_prob_1 = fit_xgb.predict_proba(df_1)[:,1] print('XGB AUC on testing set is: ', roc_auc_score(y_1, xgb_prob_1)) print("Logistic regression AUC is: ",LR.score(df_1, y_1))
I have recently read some work that features hypothesis testing of individual regression coefficients when the overall regressions featuring those coefficients have $R^2_{adj}<0$. One example is Schmidt & Fahlenbrach (2017), granted, in regressions where the primary variables of interest (the ones whose tests I am skeptical to believe) are instrumental variables. The hypothesis tests of the individual regression coefficients turn out significant with $p<0.05$, for what it is worth. However, the $R^2_{adj}<0$ is troubling. If we take $$R^2_{adj} = 1 - \left[\left(\dfrac{\overset{n}{\underset{i=1}{\sum}}\left( y_i - \hat y_i \right)^2}{n - p - 1}\right) \middle/ \left(\dfrac{\overset{n}{\underset{i=1}{\sum}}\left( y_i - \bar y \right)^2}{n-1}\right) \right]\text{,}$$ then $R^2_{adj}<0$ means that the fraction numerator exceeds the fraction denominator. That is, our (unbiased) estimate of the error variance is worse than our (unbiased) estimate of total variance. From this, I conclude that the model exhibits "anti"-performance, and we are worse-off for having done the modeling. How could I possibly believe any individual regression coefficient hypothesis test when the model performs so poorly that we not only lack much predictive ability (rather typical) but do a worse job of predicting than we would do if we did no modeling? How believable are the hypothesis tests of individual regression coefficients when the overall regressions have $R^2_{adj}<0?$ (This seems related but not quite the same and containing a mixed-bag of responses, anyway.) REFERENCE Schmidt, Cornelius, and Rüdiger Fahlenbrach. "Do exogenous changes in passive institutional ownership affect corporate governance and firm value?." Journal of Financial Economics 124.2 (2017): 285-306.
I have a dataset from a class taught in two different ways, group work or lectures, and the students chose which one they preferred after the first lesson and then again after the second lesson. I'm not sure of the best way to analyse this potential change in preference; the data are all anonymous and not paired from the 1st questionnaire to the 2nd one. I'm not great with stats, so any help is greatly appreciated :)
I don't understand how we obtain the model that we test for a unit root in the ADF test. Let me explain better. The ADF test is used to test whether a time series has a unit root. We assume that the data generating process of the time series is an AR(p): $$Y_t = \rho_1\cdot Y_{t-1} + \rho_2\cdot Y_{t-2} + \rho_3\cdot Y_{t-3} +...+ \rho_p\cdot Y_{t-p} + \upsilon_t$$ $$ \upsilon_t \sim i.i.d.\text{N} \Big( 0, \sigma^{2}\Big)$$ In order to do so, the ADF test tests the null hypothesis that $\phi = 0$ in the following model: $$\Delta{Y_t} = \phi\cdot Y_{t-1}+ \gamma_1\cdot \Delta{Y_{t-1}} + \gamma_2\cdot \Delta{Y_{t-2}} + \gamma_3\cdot \Delta{Y_{t-3}} +...+ \gamma_{p-1}\cdot \Delta{Y_{t-p+1}} + \upsilon_t$$ My question is: how did we derive the second model from the first one? Which math operations are required? I tried subtracting $ Y_{t-1} $ from both sides and some other manipulations and rearrangements, but I didn't succeed; lagged error terms always came up. The book "Applied Time Series Econometrics" by Lütkepohl states this as well (the quoted passage is not reproduced here). Any suggestions? Am I misunderstanding or forgetting something?
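For what it is worth, here is the $p=2$ case written out, which I could verify by direct expansion; I am adding it only to make the question concrete, and I would like the general pattern confirmed. Starting from $Y_t = \rho_1 Y_{t-1} + \rho_2 Y_{t-2} + \upsilon_t$ and subtracting $Y_{t-1}$ from both sides, $$\Delta Y_t = (\rho_1 + \rho_2 - 1)\, Y_{t-1} - \rho_2\, \Delta Y_{t-1} + \upsilon_t,$$ so that $\phi = \rho_1 + \rho_2 - 1$ and $\gamma_1 = -\rho_2$. If the same bookkeeping works in general, then presumably $\phi = \sum_{i=1}^{p}\rho_i - 1$ and $\gamma_j = -\sum_{i=j+1}^{p}\rho_i$, but I have not managed to show this cleanly.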
I have the below graphical causal model. I thought that when we apply the intervention, i.e. do-calculus, we get the graph on the right - that is, deleting the arrows going into the treatment (drug). To be clear, I want to see the effect of drug on cancer. However, when I used R's dagify() and ggdag_adjustment_set(), it only highlights age as something to control for: Why hasn't area been highlighted - is it because controlling for area may lead to bias? What kind of variable is 'area', and do I need to control for it?
I am interested in testing for equivalence. I know about the TOST procedure, but many people who I work with do not, so I wanted to apply a method that they are more familiar with. Specifically, I was about to use the two-sided $1-\alpha$ confidence interval of an estimate $\hat{\theta}$ to check whether this estimate lies within a predefined equivalence interval $(\theta_L,\theta_U)$. But I am hesitating after reading the following statement at the website of the software Minitab: [...] the confidence interval for equivalence also considers the additional information of the lower and upper limits of the equivalence interval. Because the confidence interval incorporates this additional information, a (1 – alpha) x 100% confidence interval for equivalence is in most cases tighter than a standard (1 – alpha) x 100% confidence interval that is calculated for a t-test. In another post on Stack Overflow, Horst Grünbusch made an interesting statement: The TOST-CI is simply the intersection of the one-sided CIs. Assuming this statement is true, my understanding is that for any test statistic $\hat{\theta}$ with a symmetrical distribution, the intersection of the one-sided $1-\alpha$ confidence intervals should be identical to the two-sided $1-2\alpha$ confidence interval. I would like to ask: Is the cited statement by Horst Grünbusch true? For which test statistics is the two-sided confidence interval not identical to the intersection of the one-sided confidence intervals (i.e. the TOST-CI)? Can one give an intuition for why the TOST-CI would "in most cases be tighter", as indicated by Minitab?
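A small numerical check of that intersection claim for the ordinary t-interval (an R sketch with simulated data; this is what I would expect if the statement is right):

set.seed(1)
x <- rnorm(30, 0.1)
y <- rnorm(30)
alpha <- 0.05
t.test(x, y, conf.level = 1 - 2 * alpha)$conf.int                    # two-sided 90% CI
c(t.test(x, y, alternative = "greater", conf.level = 1 - alpha)$conf.int[1],
  t.test(x, y, alternative = "less",    conf.level = 1 - alpha)$conf.int[2])
# the intersection of the two one-sided 95% CIs should reproduce the two-sided 90% CI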
I'm trying to find the variance of an ARMA(1,1) model of the following form: $$y_t=a_0+a_1y_{t-1}+\epsilon_t+b_1\epsilon_{t-1}$$ where $\epsilon_t$ is a white noise process. I have found it more convenient to write this model in terms of $\epsilon_t$'s: Writing using lag operators: $$(1-a_1L)y_t=a_0+(1+b_1L)\epsilon_t$$ Re-arranging: $$y_t=\frac{1}{1-a_1L}(a_0+(1+b_1L)\epsilon_t)$$ $$=\sum^\infty_{j=0}a_1^jL^j(a_0+(1+b_1L)\epsilon_t) $$ $$=\sum^\infty_{j=0}a_1^ja_0+\sum^\infty_{j=0}a_1^jL^j(\epsilon_t+b_1\epsilon_{t-1})$$ $$=\frac{a_0}{1-a_1}+\sum^\infty_{j=0}a_1^j(\epsilon_{t-j}+b_1\epsilon_{t-1-j})$$ Taking Variance, $$Var(y_t)=Var(\frac{a_0}{1-a_1}+\sum^\infty_{j=0}a_1^j(\epsilon_{t-j}+b_1\epsilon_{t-1-j}))$$ $$\sum^\infty_{j=0}a_1^jVar(\epsilon_{t-j})+b_1\sum^\infty_{j=0}a_1^jVar(\epsilon_{t-j-1})+2Cov(\sum^\infty_{j=0}a_1^j\epsilon_{t-j}, b_1\sum^\infty_{j=0}a_1^j\epsilon_{t-1-j})$$ $$=\sum^j_{j=0}a_1^j\sigma^2+b_1\sum^\infty_{j=0}a_1^j\sigma^2+\sum^\infty_{j=0}a_1^j.2Cov(\epsilon_{t-j}, b_1\epsilon_{t-1-j})$$ $$\frac{\sigma^2}{1-a_1}+\frac{b_1\sigma^2}{1-a_1}=\frac{\sigma^2(1+b_1)}{1-a_1}$$ This answer seems intuitive however it differs from ARMA (1,1) Variance Calculation @Neeraaj $$Var(y_t)=\frac{(1+2a_1 b_1 + b_1^2)\sigma^2}{1-a_1^2}$$
I am fairly comfortable with Bayesian hierarchical regression models, but I am new to panel data analysis. As someone from the social sciences, I have found that the majority of resources on panel data come from econometrics, which can be confusing. The terminology, particularly the distinction between 'Fixed' and 'Random' effects models, differs from what I am used to. And yes, I've read several threads on this website detailing the difference. In my view, panel data is well-suited for 'Hierarchical' regression models because each individual is measured at least twice, making it logical to introduce varying intercepts into the model. For example, in this question, I would assume the following model: $$y_{it} = \beta X_{it} + u_i + \epsilon_{it}$$ where $u_i$ represents the varying intercept across people (although I would probably express $u_i$ as $a_i$, but that's beside the point). However, I have come across materials that suggest that if $u_i$ is correlated with observed predictors, this model should not be used (although I still don't fully understand why). But I don't see how this assumption is any different from clustering on countries or any other hierarchical structure. It is an incredibly strong assumption that probably does not hold, so is everyone who uses hierarchical regression models simply wrong?
I am fitting a function to data. I want to tell whether the fit is good or not. Consider this example (which is actually my data): Despite the definition of 'fit is good' being totally ambiguous, most humans will agree that the fit in the plot is reasonable. On the other hand, the 'bad fit example' shows a case in which most humans will agree that the fit is not good. As a human, I am capable of performing such a 'statistical eye test' to tell whether the fit is good by looking at the plot. Now I want to automate this process, because I have tons of data sets and fits and simply cannot look at each of them individually. I am using a chi-squared test, but it seems to be very sensitive and always rejects all the fits, no matter what significance I choose, even though the fits are 'not that bad'. For example, a chi-squared test with a significance of 1e-10 rejected the fit from the plot above, which is not what I want as it looks 'reasonably good' to me. So my specific question is: What kind of test or procedure is usually done to filter between 'decent fits' and 'bad fits'? This question is a follow-up to this other question.
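For reference, here is the filter I am currently applying, written out as a small R sketch with simulated counts standing in for my histogram (n_par stands for the number of fitted parameters):

set.seed(1)
fit <- 100 * exp(-(1:25) / 6)        # a smooth "model" curve standing in for my fit
obs <- rpois(25, fit)                # counts scattered around it
err <- sqrt(pmax(obs, 1))            # error bars = sqrt of the counts, as in my data
n_par <- 3
chi2 <- sum(((obs - fit) / err)^2)
dof  <- length(obs) - n_par
chi2 > qchisq(1 - 1e-10, df = dof)   # the (very forgiving) rejection rule I am using
chi2 / dof                           # the reduced chi-squared, which "looks" more reasonable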
I have a measured signal with additive noise that is originally sampled from a Rician distribution. For processing, I divide the signal by the baseline measurement (the first few points), and then end up taking the log of this signal to yield my final vector, y. I then want to use maximum likelihood to estimate the most likely set of parameters from my data. I know the model that maps parameters to y (it's an integral equation). My question is: is it possible to calculate the maximum likelihood estimate without an analytical form of the likelihood (and hence log-likelihood)? The reason I ask is that the noise pdf is quite complex. Any help or ideas would be greatly appreciated.
Due to the inequality $2\sqrt{xy}\leq x+y$, the geometric mean is always closer to the smaller value than the arithmetic mean. In my situation, I need a "mean" that is closer to the larger value, so I thought of simply "flipping" the geometric mean $g(x,y)$ at the arithmetic mean $a(x,y)$: $$a(x,y) + \Big(a(x,y)-g(x,y)\Big) = x+y-\sqrt{xy}$$ The result is always between the arithmetic mean and the larger value. Is there some special name for this "mean"?
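For completeness, a quick check of that claim, assuming $0 < x \le y$: $$\frac{x+y}{2} \le x+y-\sqrt{xy} \iff \sqrt{xy} \le \frac{x+y}{2},$$ which is the AM–GM inequality, and $$x+y-\sqrt{xy} \le y \iff x \le \sqrt{xy} \iff \sqrt{x} \le \sqrt{y}.$$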
I have been looking at the notebooks provided by GPy for doing a Coregionalized Regression. The notebook is here. From what I see, you have two inputs and two outputs: $X_1$, which is related to $Y_1$, and $X_2$, which is used to model $Y_2$. Is there any way to make a coregionalized model that uses both inputs $X_1$ and $X_2$ to predict both $Y_1$ and $Y_2$ simultaneously? So I can, in a way, have $Y_{i} = f(X_{1}, X_{2})$. I've tried playing with kernels, but it doesn't work well. It seems that, in a way, they are assuming independence of the input space, since you have to add to which task the input relates. According to what I understand from the notebook, I would have $Y_{1} = f(X_{1})$ and $Y_{2} = f(X_{2})$, but with $cov(Y_1, Y_{2}) \neq 0$ since there is a task covariance. e.g.:

import numpy as np  # assumed import

newX = np.arange(100, 110)[:, None]
newX = np.hstack([newX, np.ones_like(newX)])
print(newX)
I have carried out a designed agricultural experiment with two treatments and recorded the effect on the abundance of a pest insect. The field experiment was divided into four blocks with two plots (replications) per block, resulting in 2 x 2 x 4 = 16 plots. Pest insects were counted per plant on the same 15 plants in a row in each of the replications. The pest insects originated from one or two nearby fields and are therefore not evenly distributed. The data look like this:

Treatment Block Plant Plot Insects
A         1     1     1    0
A         1     2     1    5
A         1     3     1    2
...
B         4     15    16   1

Since the counts of insects are Poisson distributed (or negative binomial, zero-inflated, generalized Poisson, zero-inflated Poisson ... whatever best fits the data), I was going to use a GLM (glmmTMB in R). The latest book by Zuur and Ieno (2021) gives good guidance on choosing the best distribution, so I have no questions about this. The treatments are what I'm interested in, so treatment is an essential covariate in the model. The nested structure is accounted for by +(1|block/plot). This leads to the model:

model <- glmmTMB(insect ~ treatment + (1 | block / plot), family = poisson, data = mydata)

But this still doesn't take into account the spatial correlation of the 15 adjacent plants. As suggested by A.F. Zuur et al. (2009) in 'Mixed Effects Models and Extensions in Ecology with R', p. 161 and following, I first made a bubble plot to get an idea of the spatial patterns. I used the standardized residuals from a simple model with no spatial/correlation structure

simple_model <- glm(insect ~ treatment, family = "poisson", data = mydata)
E <- rstandard(simple_model)

and the coordinates of the plants. In the bubble plot you can clearly see the 16 plots with the 15 plants in a row (the distance from plant 1 to plant 15 is about 5 m). As the positive and negative residuals are grouped together, it looks to me like there is high spatial correlation. I also produced a semi-variogram to see if it would be appropriate to use one of the suggested correlation structures such as correlation = corGaus() in gls (probably I would have to use + gau() in glmmTMB for my purpose). As my understanding is that the spatial correlation occurs mainly within the 5 m plant rows, I also created a semi-variogram with cutoff = 5. In none of the variograms do I see any of the proposed correlation patterns (Gaussian, exponential, linear, ...). Unfortunately, I have no idea how to proceed now. Does anyone have any idea how I can implement the correct correlation structure in my model?
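In case it helps, this is roughly what I mean by "+ gau() in glmmTMB" (a sketch only: the coordinate columns x and y are assumed to hold the plant positions, and I am not sure this structure is identifiable with just 15 plants per plot):

library(glmmTMB)
mydata$pos <- numFactor(mydata$x, mydata$y)   # plant coordinates packed into a numFactor
spatial_model <- glmmTMB(
  insect ~ treatment + (1 | block / plot) + gau(pos + 0 | plot),
  family = poisson,
  data = mydata
)
# exp(pos + 0 | plot) or mat(pos + 0 | plot) would be the exponential / Matern versions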
For classification problems, I wonder why using different kinds of loss functions makes sense. In particular, it feels like the model being learned, $p(y|X)$, can always be thought of as a binomial or multinomial distribution. Consequently, we can always minimize cross-entropy, as it is equivalent to maximum likelihood for a binomial or multinomial distribution. Yet I do see that other forms of loss functions appear to be more effective than cross-entropy. For example, PolyLoss claims to outperform cross-entropy and focal loss on a variety of classification tasks. My question, then, is: why not always use cross-entropy? Why can a different loss sometimes do better?
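To make the comparison concrete, here are the three losses evaluated at a single predicted probability $p_t$ of the true class, using the formulas given in the focal-loss and PolyLoss papers (the choices of $\gamma$ and $\epsilon$ below are my own, just for illustration):

ce    <- function(pt) -log(pt)
focal <- function(pt, gamma = 2) -(1 - pt)^gamma * log(pt)
poly1 <- function(pt, eps = 1) -log(pt) + eps * (1 - pt)
pt <- c(0.05, 0.5, 0.95)
rbind(ce = ce(pt), focal = focal(pt), poly1 = poly1(pt))  # the losses differ mainly in how hard examples are weighted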
I have multiple explanatory variables and one dependent variable. Data for all of them are collected on an annual basis, as time series, dependent on their t-1 value. Would an ARDL (autoregressive distributed lag) model be appropriate to apply? If so, some other questions regarding ARDL: Do all variables, both the Xs and Y, need to be I(1), or is it OK if some Xs are I(0)? I also read somewhere that ARDL can be used with non-stationary data? If I have non-stationary data and have to make it stationary, is it right that I can either difference once OR just include time as an independent variable? Thank you so much in advance!
I have data which has additive Rician noise. I found the likelihood function for the Rice distribution in equation 2.3 here: https://arxiv.org/pdf/1403.5065.pdf For my analysis, however, we take a signal (a vector of N points) that has Rician noise and then transform it in the following way (given in Python code, but let me know if someone would like me to write it out mathematically):

import numpy as np  # assumed import

def convert(S, C1, C2, C3, C4):
    A = S / np.mean(S[0:5])
    E0 = np.exp(-C1 * C2)
    E = (1.0 - A + A * E0 - E0 * np.cos(C3)) / (1.0 - A * np.cos(C3) + A * E0 * np.cos(C3) - E0 * np.cos(C3))
    R = (-1 / C2) * np.log(E)
    transformed_signal = (R - C1) / C4
    return transformed_signal

Where C1-C4 are known constants. The first 5 points used here are the baseline of the signal (before it changes). The total number of points in the signal is ~1000. I then fit this transformed_signal to a model to extract 2 parameters of interest. My question is: how do I calculate the likelihood function for this transformed data? Does anyone know the procedure to achieve that?
I am looking for some guidance (published or otherwise) on performing PCA on left-censored environmental data (where only values above the instrument's detection limit are reported). Any help is appreciated.
Overview

I want to perform a Bayesian model selection on many datasets and use these same datasets to determine the required parameter priors.

Example Scenario: Coins

Suppose I have a collection of a thousand coins produced by a machine that randomly produces fair and loaded coins. The loaded coins are not identical, but their heads ratio $θ$ follows an unknown distribution $p(θ|\mathcal{M}_\text{loaded})$ obeying some constraints (see below). For each coin, I want to decide whether it's fair using Bayesian model selection with two models $\mathcal{M}_\text{loaded}$ and $\mathcal{M}_\text{fair}$. I know:

For each coin: the number of heads from a hundred tosses (and thus an estimator $\hat{θ}$ for the heads ratio $θ$).

Model priors $p_\text{fair}$ and $p_\text{loaded}$ with $0.1≤p_\text{fair}≤0.9$.

The probability density $p(θ|\mathcal{M}_\text{loaded})$ of the heads ratio of the loaded coins obeys the following constraints: symmetric around ½, smooth, and not very far from a uniform distribution, say, $0.1 < p(θ|\mathcal{M}_\text{loaded}) < 10$ everywhere.

With all this given, the main information I am lacking is a prior for $p(θ|\mathcal{M}_\text{loaded})$. I estimate this by finding a suitable distribution and fitting it to my data for all coins, ignoring coins with $0.4<\hat{θ}<0.6$, since those have a decent chance of being fair. The rest of the Bayesian model selection is straightforward.

Questions

Is this procedure sound? I acknowledge that I use the same data twice. However, the data for a given coin has barely any impact on the parameter priors relevant to its model selection. (I could also exclude the data for the given coin when determining the priors for its analysis, doing a thousand fits instead of just one.)

If yes, is there a name or reference for this approach?

If no, is there a better way to determine parameter priors for $\mathcal{M}_\text{loaded}$? I am particularly interested in ways that can be extended to a more complex model space as well as higher-dimensional and unbounded parameter spaces.
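To make the procedure concrete, here is a minimal, self-contained sketch of what I have in mind (a symmetric $\mathrm{Beta}(a,a)$ is my assumed form for $p(θ|\mathcal{M}_\text{loaded})$, and simulated counts stand in for my real data):

set.seed(1)
n_tosses <- 100
true_theta <- c(rep(0.5, 700), runif(300, 0.2, 0.8))     # fake machine output
heads <- rbinom(1000, n_tosses, true_theta)
theta_hat <- heads / n_tosses

# empirical fit of Beta(a, a), ignoring coins with 0.4 < theta_hat < 0.6
nll <- function(log_a) {
  a <- exp(log_a)
  -sum(dbeta(theta_hat[abs(theta_hat - 0.5) > 0.1], a, a, log = TRUE))
}
a_hat <- exp(optimize(nll, c(-3, 3))$minimum)

# per-coin Bayes factor loaded vs fair, using the Beta-Binomial marginal likelihood
m_loaded <- choose(n_tosses, heads) * beta(heads + a_hat, n_tosses - heads + a_hat) / beta(a_hat, a_hat)
m_fair   <- dbinom(heads, n_tosses, 0.5)
BF <- m_loaded / m_fair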
I have eight independent variables and am also thinking of adding a ninth variable for time. Is this too much? Are there any consequences? I have annual data for nine years.
I want to perform LDA on my cohort, which consists of 140 individuals distributed across 3 groups. These individuals have undergone an analysis of 50 variables (gene expression). So my dataset is 137x51 (1 categorical variable + 50 numerical variables). I want to perform LDA and see how the individuals behave using the multiple predictors (in my case a set of genes). However, I am not sure how to deal with missing values in the dataset, and which method fits better with LDA. I lay out my doubts here to see if somebody has experience with this topic: The mice package has multiple imputation methods, such as "pmm" and "norm". From my point of view, the missing values due to the non-amplification molecular process are missing completely at random, but it seems that this assumption introduces a bias. So they could be treated as missing at random and handled with multiple imputation. The thing is, should I construct blocks with the mice package, according to the genes and the families they belong to, or should I leave the default option and let the imputation run? My data follows a normal distribution. Thanks in advance
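In case it clarifies the setup, this is roughly the pipeline I have in mind (a sketch only; "group" stands for my categorical variable and "mydata" for the 137x51 data frame):

library(mice)
library(MASS)
imp <- mice(mydata, m = 5, method = "pmm", seed = 123)  # default predictor matrix, no blocks
lda_fits <- lapply(1:5, function(i) {
  completed <- complete(imp, i)
  lda(group ~ ., data = completed)                      # one LDA per imputed dataset
})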
I have an ARMA(2,1) model of the following form: $$y_t=a_1y_{t-1}+a_2y_{t-2}+\epsilon_t+b_1\epsilon_{t-1}$$ Re-arranging and using lag operators: $$(1-a_1L-a_2L^2)y_t=(1+b_1L)\epsilon_t$$ Solving for $y_t$: $$y_t=\frac{(1+b_1L)}{1-(a_1L+a_2L^2)}\epsilon_t$$ Using the definition of an infinite geometric series: $$(1+b_1L)\sum^\infty_{j=0}(a_1L+a_2L^2)^j\epsilon_t$$ $$(1+b_1L)\sum^\infty_{j=0}(a_1\epsilon_{t-1}+a_2\epsilon_{t-2})^j$$ $$\sum^\infty_{j=0}(a_1\epsilon_{t-1}+a_2\epsilon_{t-2})^j+\sum^\infty_{j=0}(a_1b_1\epsilon_{t-1}+a_2b_1\epsilon_{t-2})^j$$ Using this solution to compute the variance: $$Var(y_t)=Var\left(\sum^\infty_{j=0}(a_1\epsilon_{t-1}+a_2\epsilon_{t-2})^j+\sum^\infty_{j=0}(a_1b_1\epsilon_{t-1}+a_2b_1\epsilon_{t-2})^j\right)$$ $$=\sum^\infty_{j=0}Var(a_1^j\epsilon_{t-1}^j+a_2^j\epsilon_{t-2}^j)+\sum^\infty_{j=0}Var(a_1^jb_1^j\epsilon_{t-1}^j+a_2^jb_1^j\epsilon_{t-2}^j)$$ $$=\sum^\infty_{j=0}(a_1^{2j}+a_2^{2j})Var(\epsilon_{t-1}^j+\epsilon_{t-2}^j)+\sum^\infty_{j=0}(a_1^{2j}b_1^{2j}+a_2^{2j}b_1^{2j})Var(\epsilon_{t-2}^j+\epsilon_{t-3}^j)$$ I think I must have made a mistake along the way; I don't know what to do with the term $Var(\epsilon_{t-1}^j+\epsilon_{t-2}^j)$. Any help would be much appreciated.
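As a cross-check on whatever closed form is correct, the variance can also be computed numerically from the MA($\infty$) weights; a minimal R sketch with arbitrary (stationary) parameter values:

set.seed(1)
a1 <- 0.5; a2 <- 0.2; b1 <- 0.3; sigma2 <- 1
psi <- c(1, ARMAtoMA(ar = c(a1, a2), ma = b1, lag.max = 500))  # psi_0 = 1 plus psi_1, psi_2, ...
sigma2 * sum(psi^2)                                            # Var(y_t) = sigma^2 * sum of psi_j^2
var(arima.sim(n = 1e6, model = list(ar = c(a1, a2), ma = b1), sd = sqrt(sigma2)))  # simulation check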
The form for MSE for $N$ data points with scalar values $Y=[Y_1,Y_2,...,Y_N]$ is given by the formula: $$ MSE = \frac{1}{N} \sum_{i=1}^N (Y_i - \hat{Y}_i)^2 $$ The way I see it, $ d_i = Y_i - \hat{Y}_i$, where $d_i$ is the Euclidean distance between the actual and predicted values for the $i^{th}$ data point. Thus, extending this to higher dimensions, say $D$ dimensions, $Z=[\vec{Z_1},\vec{Z_2},...,\vec{Z_N}]$, the MSE should be: $$ MSE = \frac{1}{N} \sum_{i=1}^N d_i^2 = \frac{1}{N} \sum_{i=1}^N \|Z_i - \hat{Z}_i\|^2 = \frac{1}{N} \sum_{i=1}^N \sum_{j=1}^D (Z_{ij} - \hat{Z}_{ij})^2 $$ However, although I did not see any direct result which mentions this, it seems most implementations of MSE use a different formula (not too different from what I wrote above): $$MSE' = \frac{1}{N} \sum_{i=1}^N \frac{1}{D} \sum_{j=1}^D (Z_{ij} - \hat{Z}_{ij})^2$$ Is this $MSE'$ the correct form? If the MSE should provide the Mean Squared Error, where the error is measured by the Euclidean distance between the points, then why is it averaging over $D$ too? I do know that it doesn't make too much of a difference (a constant factor) if we use one of the measures consistently, but which one is standard? Is there a unique definition of MSE in these cases?
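A tiny numeric illustration of the factor-$D$ difference between the two definitions (random matrices used purely for illustration):

set.seed(1)
N <- 4; D <- 3
Z    <- matrix(rnorm(N * D), N, D)
Zhat <- matrix(rnorm(N * D), N, D)
mse_euclidean   <- mean(rowSums((Z - Zhat)^2))  # average squared Euclidean distance per point
mse_elementwise <- mean((Z - Zhat)^2)           # element-wise average, as many libraries compute
mse_euclidean / mse_elementwise                 # exactly D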
I have a function that takes a few hundred parameters and returns a score I want to optimize for. It's a piece of software attempting to play a game against another player; the parameters partially determine the actions of the player and so have an effect on my final score. I would like to find a set of parameters that optimizes the likely outcome of the played game. I'm facing several difficulties:

The game is chaotic, so except for the most sensitive parameters, most of the hundreds of parameters have only a small individual effect.

The game is computationally heavy to run. I will likely only have around 10,000 data points I can gather with my limited computational resources, and the only way I can even get to 10k data points is by running it in parallel; single-threaded approaches may not work for me.

I don't have a derivative of my function.

Parameters can be floats, integers or booleans. Some of the ints/floats may not currently have the right sign. Booleans tend to be the most impactful parameters, but I think these are mostly set right now.

Some parameters can entirely shut down my player if brought outside of acceptable ranges. I do not always know these acceptable ranges.

While I am adjusting parameters, I am also adjusting the software, which subtly or not so subtly changes the meaning and ideal value of some parameters.

Due to the difficulty, I am not expecting to find even a local maximum, let alone a global one. I am happy if I can get some of the most important parameters in the right order of magnitude without messing the less important parameters up too much. So far the best approach I've found and am currently using is the following (a rough code sketch of the update step is included below):

Randomly vary a subset of parameters by picking a value from a normal distribution around the currently selected best value (booleans are randomly flipped).

Play a game (self-play), then store the used parameters and final score in a file.

Collect data points from my last n games; for floats and integers, calculate a Pearson correlation coefficient (p) for every parameter correlated with my score. Then adjust every parameter x by setting x = x + abs(p) * y * p, where y is a scaling factor. Booleans are flipped if p indicates I should.

Occasionally manually adjust parameters based on what seems nonsensical.

I've alternated optimizing for different rating values: not just the score, but also whether my bot has won, and other relevant game-specific values such as how many game pieces I own at the end. This (clearly flawed) approach at least seems to make my parameters drift closer to their ideal on average. But if I pick a low scaling factor y, my parameter convergence is way too slow; if I pick a high y, there's a lot of unintended drift. I often observe (regardless of y and n) that my performance score decreases after an optimization attempt. I've tried some other approaches such as machine learning (neural nets and random forest trees) for parameter optimization, but with little luck. There probably isn't enough data to prevent overfitting on my noisy data. Are there better approaches I can use here to optimize my parameters?
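For reference, here is the promised rough sketch of the update rule described above, with a made-up noisy objective standing in for the real self-play games and only float parameters:

set.seed(1)
n_par <- 20
x <- rnorm(n_par)                                            # current parameter vector
score_fn <- function(p) -sum((p - 1)^2) + rnorm(1, sd = 5)   # stand-in for one self-play game
y_scale <- 0.5
for (round in 1:50) {
  trials <- t(replicate(100, x + rnorm(n_par, sd = 0.2)))    # perturbed parameter sets
  scores <- apply(trials, 1, score_fn)
  p_cor  <- apply(trials, 2, function(col) cor(col, scores)) # Pearson r per parameter
  x <- x + abs(p_cor) * y_scale * p_cor                      # x = x + abs(p) * y * p
}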
I am conducting a study where I look at the interaction of 3 categorical variables and 1 continuous variable. However, I want to be able to see all the possible comparisons of these 4 variables. In the past, I have used emmeans, but I noticed that emmeans only takes the lowest and highest value of the continuous variable, which does not make sense in repeated measures since it basically compares the lowest participant to the highest participant.

Linear mixed model fit by REML. t-tests use Satterthwaite's method ['lmerModLmerTest']
Formula: RT ~ Domain * ShiftType * TrialType * VA_k + (1 | Probe) + (1 | Story_order) + (1 | Subject)
   Data: bsmu[bsmu$ACC == 1, ]

REML criterion at convergence: 79528.9

Scaled residuals:
    Min      1Q  Median      3Q     Max
-2.2523 -0.4384 -0.1779  0.1482 12.8338

Random effects:
 Groups      Name        Variance Std.Dev.
 Probe       (Intercept)  169631   411.9
 Subject     (Intercept)  545749   738.7
 Story_order (Intercept)  101769   319.0
 Residual                3042405  1744.2
Number of obs: 4472, groups: Probe, 380; Subject, 60; Story_order, 8

Fixed effects:
                                                 Estimate Std. Error       df t value Pr(>|t|)
(Intercept)                                       2254.31     228.51    80.13   9.865 1.74e-15 ***
DomainL2                                           268.94     188.33  4375.18   1.428   0.1534
ShiftTypeNo Shift                                  -30.19     200.03  1992.88  -0.151   0.8800
ShiftTypeUnchanged                                 244.29     201.03  1959.52   1.215   0.2244
TrialTypeSpace                                     482.29     206.74  2100.11   2.333   0.0198 *
VA_k                                               150.41     105.24   170.10   1.429   0.1548
DomainL2:ShiftTypeNo Shift                        -132.44     264.37  4376.51  -0.501   0.6164
DomainL2:ShiftTypeUnchanged                       -193.74     265.03  4372.25  -0.731   0.4648
DomainL2:TrialTypeSpace                           -300.61     276.90  4372.10  -1.086   0.2777
ShiftTypeNo Shift:TrialTypeSpace                   -78.89     289.34  2148.38  -0.273   0.7851
ShiftTypeUnchanged:TrialTypeSpace                 -444.28     289.46  2117.47  -1.535   0.1250
DomainL2:VA_k                                      -97.90     101.58  4222.47  -0.964   0.3352
ShiftTypeNo Shift:VA_k                              56.42     101.62  4242.86   0.555   0.5788
ShiftTypeUnchanged:VA_k                            -82.56     100.25  4236.84  -0.824   0.4103
TrialTypeSpace:VA_k                                 39.49     104.60  4268.38   0.377   0.7058
DomainL2:ShiftTypeNo Shift:TrialTypeSpace          198.60     385.45  4374.32   0.515   0.6064
DomainL2:ShiftTypeUnchanged:TrialTypeSpace         446.45     385.71  4379.42   1.157   0.2471
DomainL2:ShiftTypeNo Shift:VA_k                     53.32     142.70  4224.51   0.374   0.7087
DomainL2:ShiftTypeUnchanged:VA_k                   292.02     142.67  4209.58   2.047   0.0407 *
DomainL2:TrialTypeSpace:VA_k                       164.48     148.53  4217.14   1.107   0.2682
ShiftTypeNo Shift:TrialTypeSpace:VA_k             -101.93     148.10  4262.76  -0.688   0.4913
ShiftTypeUnchanged:TrialTypeSpace:VA_k              55.99     145.95  4256.12   0.384   0.7013
DomainL2:ShiftTypeNo Shift:TrialTypeSpace:VA_k    -152.50     208.23  4232.78  -0.732   0.4640
DomainL2:ShiftTypeUnchanged:TrialTypeSpace:VA_k   -422.48     206.97  4219.86  -2.041   0.0413 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Correlation matrix not shown by default, as p = 24 > 12.
Use print(x, correlation=TRUE) or vcov(x) if you need it
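What I would like is something along these lines (a sketch only, with covariate values I would choose myself rather than the observed extremes):

library(emmeans)
emm <- emmeans(model, ~ Domain * ShiftType * TrialType | VA_k,
               at = list(VA_k = c(-1, 0, 1)))   # e.g. -1 SD, mean, +1 SD on the VA_k scale
pairs(emm)
# or, for how the VA_k slope differs across factor combinations:
emtrends(model, ~ Domain * ShiftType * TrialType, var = "VA_k")

Is this a sensible way to get all the comparisons, or is there a better approach?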
I am working on a model to account for flood risk, and it is based on three variables:

Variable 1: drainage (float: 0-80)
Variable 2: estimated population (float: 0-2,000)
Variable 3: road network importance (float: 0-1)

All three variables are highly left-skewed, but they are not correlated. Supposing that the three variables are equally important for analyzing the flood risk, is there a way I can combine them to create a score?
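For concreteness, the simplest combination I can think of looks like this (a sketch under the assumption that percentile ranks are an acceptable common scale; the data frame and column names are made up):

# put each variable on a 0-1 percentile scale, then average with equal weights
vars <- c("drainage", "population", "road_importance")
ranked <- sapply(mydata[, vars], function(v) rank(v) / length(v))
mydata$flood_score <- rowMeans(ranked)

I am unsure whether something like this is defensible given how skewed the raw variables are.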
Tests exist to determine whether a distribution is normal, for example the Shapiro-Wilk test. I'm wondering how to determine whether I'm powered to detect that my distribution is non-normal (e.g., the null hypothesis is that skew is 0, the alternative is that it is different from 0). I could of course run a simulation, but I'm specifically interested in an analytic power analysis. Here is an example where my sample is too small (underpowered) to detect a significant effect, even though the population is skewed (the data are drawn from a gamma distribution with a skew of 2):

> set.seed(123)
> dat = rgamma(10, shape = 1)
> shapiro.test(dat)

        Shapiro-Wilk normality test

data:  dat
W = 0.94299, p-value = 0.5867

What sample size would I need to have 80% power to find a significant effect?
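For comparison with whatever analytic answer exists, the brute-force simulation I would like to avoid looks like this for the gamma example above (a sketch; the significance level and sample sizes are arbitrary choices):

power_sw <- function(n, R = 5000, alpha = 0.05) {
  mean(replicate(R, shapiro.test(rgamma(n, shape = 1))$p.value < alpha))
}
sapply(c(10, 20, 50, 100), power_sw)   # estimated power at a few sample sizes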
A literature search yielded no obvious answers, so I am asking here whether there are any feasible methods to estimate the following. Suppose I have data $Y_i, \vec X_i$ indexed by $i = 1, \cdots, N$. Note that $Y_i$ represents a scalar binary outcome, and $\vec X_i$ a vector of predictors. I assume that my data are generated by the following, where $\mathbf{1}\left\{ . \right\}$ is the indicator function, $\varepsilon_i$ an independent error, and $f(.)$ an unknown function: $$ Y_i = \mathbf{1}\left\{ f(\vec X_i) \geq 0 \right\} + \varepsilon_i $$ Are there any methods to estimate the function $f(.)$, or, more likely, moments of the function such as $E[f(X_i)]$, possibly non-parametrically?