Let's assume there is a variable $z$ that can be modeled as: $$z=f(x)+g(y)+\varepsilon$$ where $f$ and $g$ are unknown functions greater than zero, $x$ and $y$ are independent variables and $\varepsilon$ is random noise. $y$ is totally unknown, but I have data pairs $(x,z)$ and I'm trying to identify the $x$ value at which $z$ is "considerably" influenced by $f(x)$. "Considerably influenced" means something like finding $x_1$ at which the expected value of $z$ changes by a certain predefined amount. That is, finding $x_1$ such that: $$E[z|x = x_1] - E[z|x = x_0] > C$$ for some predefined reference point $x_0$ (which can be some low value of $x$) and threshold $C$. I can assume that $f(x)$ is monotonically increasing. An example of the relationship between $x$ and $z$ could be this: Qualitatively, this point might be somewhere above $x=20$, but I'd like to have a formal approach using well-defined and repeatable criteria. I was thinking of fitting a model $\hat{z}=h(x)$ and finding $x_1$ at which $\Delta h=h(x_1)-h(x_{0})=C$, so the problem can be easily solved once $x_{0}$ and $C$ are defined. My question is what type of model would be suitable and whether the heteroscedasticity of the data makes this approach invalid. Any suggestions and critiques of my approach are also appreciated.
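A minimal sketch of the thresholding idea described above, assuming only observed $(x,z)$ pairs and using isotonic regression as one possible monotone estimate of $h$ (the simulated data, variable names and the choice of isotonic regression are illustrative assumptions, not the only option):

# Illustrative data; replace with your (x, z) pairs
set.seed(1)
x <- runif(500, 0, 50)
z <- pmax(x - 20, 0)^1.2 + rnorm(500, sd = 2 + 0.1 * x)  # heteroscedastic noise for illustration

# Monotone fit h(x) via isotonic regression (base R)
ord <- order(x)
fit <- isoreg(x[ord], z[ord])

# User-defined reference point x0 and threshold C
x0 <- 5
C  <- 10
h0 <- fit$yf[which.min(abs(fit$x - x0))]

# Smallest x at which the fitted mean exceeds h(x0) by at least C
x1 <- min(fit$x[fit$yf - h0 >= C])
x1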
I have a question about bootstrapping correlated values from grouped data. The context is using Census data grouped by region, $R$ (tract or block group). Each region has a list of estimated values $V$: median income, population in poverty, age 18 to 64, etc. Each estimate has a margin of error, $MOE$. This $MOE$ is easily translated to a std error for a 90% confidence interval. When I use this data, I would like to sample from the regional values according to the distribution of the errors. If I create an error term $e(v,r)$ independently for each region $r$ and value $v$ then I am assuming independence of the $V$'s when they are very likely to be correlated. How should I set up a sampling system to include the correlation patterns within the regions? Is there a common name for this process? TIA.
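As a rough illustration of one way such joint sampling is often set up (this is an assumption on my part, not an established name for the procedure): convert each MOE to a standard error, choose or estimate a within-region correlation matrix for the values, and draw from a multivariate normal per region. The numbers and the correlation matrix below are purely illustrative.

library(MASS)

# For one region: ACS-style estimates and margins of error (illustrative numbers)
est <- c(median_income = 52000, pop_poverty = 1300, age_18_64 = 4100)
moe <- c(4000, 250, 380)
se  <- moe / 1.645                     # 90% MOE -> standard error

# Assumed within-region correlation of the errors (a modeling choice, not Census data)
R <- matrix(c(1.0, -0.4,  0.1,
             -0.4,  1.0,  0.2,
              0.1,  0.2,  1.0), nrow = 3, byrow = TRUE)
Sigma <- diag(se) %*% R %*% diag(se)

# One simulated draw of the region's values with correlated errors
draw <- MASS::mvrnorm(n = 1, mu = est, Sigma = Sigma)
draw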
I am currently doing the statistical analysis for my master thesis. In my experiment, I have 3 species that I expose to erosion treatments. After every treatment, I see what percentage of each species falls under one of 11 damage categories, as can be seen in the picture: I recorded how large the angle was compared to standing upright. If the number was 5, for example, the shoot was heavily leaning to the right. Angle_cat is just the damage category, sorry for not naming it properly. For the percentages, I looked at all shoots of the same plant species after an erosion treatment and calculated what percentage of the plant fell under each damage category. So in the picture, all the percentages of Phrag (which is reed) at an erosion of 0 cm sum to 1. For the Phrag 2.5 cm erosion, you can see that the percentage of shoots with an angle of -3 decreased. The problem I currently have is that for each Erosion;Species combination, I have multiple damage categories. So if I do glm(Percentage~Erosion*Species), all percentages would be treated like results of different experiments on the same treatment, instead of parts of one result. Is there a way I can look at the difference between the different Erosion;Species;damage-category combinations with a glm, or do I need a different method? Another way of representing this data is like this (if this helps): I wish you a nice Easter, Sam
Edvard, the evaluator in sample B, does not know Richard, the target subject in sample A. However, the two, independently, give the same answer/Likert value (1-5) to 30% of the questionnaire items. It results in an artificial inflation of the correlation between the evaluations. How can I reduce it?
I've been trying to learn how to do this analysis but I can't find any information that sheds light on my case and I can't figure out what to do from Hayes' book. I would really appreciate it if someone could help me out. For my undergraduate thesis, I'm examining how emotion regulation predicts resilience and whether age and gender moderate this relationship. The Emotion Regulation Questionnaire measures two strategies: Cognitive Reappraisal and Expressive Suppression. This means I have two predictors and two moderators (one of which, gender, is dichotomous). What model should I use in PROCESS? And considering that only one predictor can be entered, do I run the analysis twice? And would running multiple analyses lead to Type I and Type II errors, requiring a Bonferroni or Hochberg correction? Any advice would be greatly appreciated.
I've been told that the following ACF graphics basically show that both series are stationary, but I didn't really understand why. Is it that if both the autocorrelation and partial autocorrelation gradually decay to 0, then the series is stationary? Or is it a lot more complicated than this?
I am trying to implement the Schwartz-Smith (2000) commodity pricing model from the paper Short-term variations and long-term dynamics in commodity prices. The model is estimated using the Kalman filter, where the state space is described by the following two equations: $$x_t = c + G x_{t-1} + \omega_t$$ $$y_t = d_t + F_t x_t + v_t$$ where $x_t = [\chi_t, \xi_t]$ is a 2 x 1 vector of the state variables and $y_t = [\ln F_{T_1}, \ldots, \ln F_{T_n}]$ is an n x 1 vector of the log futures prices at time $t$ with maturities $T_i$ for $i = 1, \ldots, n$; $c$, $G$, $d$, and $F$ are the state space parameters with appropriate dimensions; $\omega_t$ is a 2 x 1 vector of disturbances with $E[\omega_t] = 0$ and covariance matrix $\mathrm{Var}[\omega_t] = W$; and $v_t$ is an n x 1 vector of disturbances with $E[v_t] = 0$ and covariance matrix $\mathrm{Var}[v_t] = V$. All of the state space parameters with the exception of $V$ (so $c$, $G$, $W$, $d$, and $F$) are functions of seven underlying constant parameters (as well as the time increments and the maturities): $\theta = (\kappa, \sigma_\chi, \mu_\xi, \sigma_\xi, \rho_{\xi\chi}, \lambda_\chi, \mu_\xi^*)$, the nature of which is described in the paper. $V$ is assumed to be a diagonal matrix whose elements are $s_i^2$ for $i = 1, \ldots, n$. I am trying to estimate these parameters with the Expectation-Maximization (EM) algorithm. For that I am using the pykalman Python package to calculate the loglikelihood of the observations, then maximize the loglikelihood with respect to $\theta$ and the elements of $V$ using scipy.optimize. I then re-estimate the Kalman filter with the new parameter estimates; this is repeated until convergence. The problem with this approach is that I only get the end result of the estimation and no descriptive statistics. In the original paper, next to the estimated parameter values, the standard errors are also provided, but the method with which they are calculated is not described. How can I go about obtaining the values of these standard errors?
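One standard way of obtaining standard errors in this maximum-likelihood setting (an assumption on my part; the paper does not say this is what Schwartz and Smith did) is to use the observed information, i.e. the negative Hessian of the log-likelihood at the optimum, typically evaluated numerically from the same log-likelihood function used during the optimization:

$$\widehat{\operatorname{Var}}(\hat{\theta}) \approx \left[ -\left. \frac{\partial^2 \ell(\theta)}{\partial \theta\, \partial \theta^\top} \right|_{\theta = \hat{\theta}} \right]^{-1}, \qquad \operatorname{se}(\hat{\theta}_j) = \sqrt{\left[ \widehat{\operatorname{Var}}(\hat{\theta}) \right]_{jj}},$$

where $\ell(\theta)$ is the Kalman-filter log-likelihood already being computed; a finite-difference Hessian at the converged estimates is usually sufficient for this purpose.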
I have a fairly elaborate Directed Acyclic Graph (DAG) for the analysis that I am running, but I am presenting a simplified example here to clarify a few things. Here is a DAG from dagitty.net: According to the graph, I only need to adjust for A in order to close the back-door path and to identify the total causal effect of Treatment on Outcome. In other words, the minimal adjustment set for this diagram is just A. Conversely, if I were to condition on C, the pathway Treatment -> C -> Outcome would be biased because C is on the front-door path between the Treatment and the Outcome, so C should be left out of a regression model OR else B would also need to be conditioned on to close the back-door path that this opens. My question is about variables like B, adjustment for which is not strictly necessary (assuming C stays unadjusted for). Conditioning on B or leaving it out completely is seemingly inconsequential for the total causal effect of Treatment on Outcome. In this case, what are the implications, benefits or drawbacks of including B-type variables in my regression models? Would I not gain any precision or explanatory power in the model by including it as a control, rather than optionally leaving it out?
I am iteratively solving a stochastic equation by generating a random field and using the resulting generation to move toward an equilibrium. I know that the system converges, but I want to use an appropriate stopping criterion. At equilibrium, I know that if I continue to iterate, the mean change over many iterations will be 0 at every node $x$ in the field. My null hypothesis $H_0$ at equilibrium is that the change in value at each point $\Delta X_{i}$ follows a normal distribution with mean 0 and unknown variance. Let's say I have $N$ nodes. Therefore, if I keep track of the changes over the previous $T$ time steps, I can compute a Student's t test statistic for each node, $K_i = \frac{\langle \Delta X_i \rangle_T}{\sigma_{i,T} /\sqrt{T}}$, for $i = 1,2,\ldots,N$, where $\langle \Delta X_i \rangle_T = \frac{1}{T}\sum_{t=1}^T\Delta x_{i,t}$ and $\sigma_{i,T}^2 = \frac{1}{T-1}\sum_{t=1}^{T}(\Delta x_{i,t}-\langle \Delta X_i \rangle_T)^2$. This is where I am a little uncertain. I can check each node individually for acceptance (say rejecting the null hypothesis at the $\alpha = 0.05$ significance level). My plan for deciding whether my whole system is at equilibrium is to treat each node as a Bernoulli trial. I can compute $R$ such that $P(R_o<R) = 0.95$ (0.95, or any probability I want to prescribe), where $R_o$ is the observed number of rejections, by inverting the cumulative binomial distribution with rejection probability $\alpha = 0.05$ and $N$ trials at my desired acceptance probability. Therefore I can say that if my actual number of observed rejections $R_{o,actual} > R$, I am not yet at equilibrium. Does this make sense? One area where I'm still confused is that this scheme seems to be independent of the value of $\alpha$. Is that correct? Is there a different test that would be more appropriate?
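A minimal sketch of the scheme described above, assuming the per-node changes over the last T iterations are stored in a T x N matrix called deltas (the matrix name and the simulated values are illustrative assumptions):

set.seed(42)
T_win <- 50      # number of stored iterations
N     <- 1000    # number of nodes
alpha <- 0.05

# Illustrative stand-in for the stored changes at equilibrium (mean 0)
deltas <- matrix(rnorm(T_win * N), nrow = T_win, ncol = N)

# One-sample t statistic per node: mean / (sd / sqrt(T))
K <- colMeans(deltas) / (apply(deltas, 2, sd) / sqrt(T_win))

# Reject H0 for a node when |K| exceeds the two-sided t critical value
rejected <- abs(K) > qt(1 - alpha / 2, df = T_win - 1)
R_obs    <- sum(rejected)

# Binomial threshold: largest rejection count still consistent with
# N independent tests each rejecting with probability alpha
R_max <- qbinom(0.95, size = N, prob = alpha)

at_equilibrium <- R_obs <= R_max
c(R_obs = R_obs, R_max = R_max, at_equilibrium = at_equilibrium)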
I am using bootstrapping to calculate confidence intervals for a risk ratio. In some of the bootstrapped samples, there are no observations of one of the outcomes, leading to the risk ratio being a division by zero. Thus, when I try to calculate the SE, I get a NaN value in R, and therefore an Inf upper CI. How should I calculate the standard error and confidence intervals for a risk ratio in this situation? I have a very small sample within strata (and need to calculate the risk ratio within the strata).
I'm trying to fit a Generalized Additive Model to a daily time series. My goal is to do a short-term forecast of the gas demand of my city. I have data since 2015, including information about the weather (minimum and maximum temperatures). The data I'm working with is daily, and since I have information from many years back I have double seasonality: yearly and weekly. Here is an image of the historical data throughout the years. We can see that every year the demand rises during the winter months (May, June, July, August, September): And here is an image of the data weekly. We can see that towards the end of the week the demand decreases: The real question is the following: what is the right syntax for taking into account these two seasonalities when fitting a gam() model from the mgcv package in RStudio? I know that it should be something like this:

library(mgcv)
gam_1 <- gam(gas_demand ~ s(x1, bs = "cr", k = 7) + s(x2, bs = "ps", k = 365),
             data = df, family = gaussian)
# This is just an example; x1 and x2 have not been defined, they
# are supposed to be the covariates of the model
# The daily data should be stored in the data frame 'df'
# The response variable that I'm trying to forecast is 'gas_demand'

Should I create a column in my data frame that goes from 1 to 7, depending on the day of the week of the observation, to take into account the weekly seasonality? And for the yearly seasonality, should I create a column with values from 1 to 365 (depending on the day of the year)? Or a column with values from 1 to 12, depending on the month of the year? I'm not sure what would be the right way to do it. And my final question: which type of basis function is recommended for each type of seasonality? I'm really desperate for a response since I'm struggling to find examples that work with this exact type of data. Thanks in advance! :)
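As one possible illustration of the day-of-week / day-of-year encoding asked about above (a sketch under the assumptions that df has a date column and that temp_min and temp_max stand in for the temperature covariates; cyclic splines are one common choice here, not the only one):

library(mgcv)

# Derive the two seasonal covariates from the date column (assumed to exist)
df$dow <- as.POSIXlt(df$date)$wday + 1   # day of week, 1..7
df$doy <- as.POSIXlt(df$date)$yday + 1   # day of year, 1..365/366

# Cyclic cubic splines ("cc") so the fit joins up at the week and year boundaries
gam_seas <- gam(gas_demand ~ s(dow, bs = "cc", k = 7) +
                             s(doy, bs = "cc", k = 20) +
                             s(temp_min) + s(temp_max),
                data = df, family = gaussian)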
The leaps library's regsubsets function gives an object that contains the BIC drop of each subset model relative to the intercept-only model. However, it is different from what is calculated manually. For example, using the mtcars dataset, reproducible code:

library(leaps)

# Stepwise selection
stepwise = regsubsets(mpg~., data=mtcars, method="seqrep", nvmax=10)

# Plot results to see best subset from stepwise selection is of size 3
plot(stepwise, scale="bic")

# Optimal subset size is 3
which.min(summary(stepwise)$bic)

# The variables from subset size 3 are cyl, hp and wt
coef(stepwise, 3)

# Minimum (optimal) BIC value drop from intercept model is -45.41594
min(summary(stepwise)$bic)

# Manual calculation, i.e. BIC of model of 3 variables - BIC of intercept
# Gives -48.88168
BIC(lm(mpg~cyl+hp+wt, data=mtcars)) - BIC(lm(mpg~1, data=mtcars))

Why is there such a difference (-48.9 vs -45.4)? Roughly an absolute difference of 3.5. Refer to this post for the BIC display values of the regsubsets summary; notice the use of the word "about", and the discrepancies between the manual and regsubsets-summary values can be seen there too (also around a difference of 3.5).
I want to investigate whether there is a significant difference in eigenvector centrality between two groups. The sample sizes of the two groups are over 9000 and 300, respectively, which is a large imbalance. I used the Mann-Whitney U non-parametric test, and I want to know whether the result of the non-parametric test is reliable.
The setting is $A\in \mathbb{R}^{n\times n}$ with each entry being an i.i.d. bounded r.v. in $[a,b]$. The question is to prove $\Vert A\Vert_2$ is sub-Gaussian. Intuitively I thought that since $\{A_{ij}\}_{i,j=1,\dots,n}$ are bounded, then $$\Vert A \Vert_2 = \sup_{\Vert v \Vert = 1} \vert v^TA^TAv\vert = \sup_{\Vert v \Vert = 1}\vert\sum_{i,j}v_iv_j(\sum_k A_{ki}A_{kj})\vert\leq \max(a^2,b^2)$$ Then $\Vert A\Vert_2$ is bounded, so it is sub-Gaussian. Is there any problem in the above process?
Is it possible to use the intention-to-treat principle in randomized studies with a crossover design? Has it ever been used in such studies?
A previous question indicates that after conducting a permutation ANOVA and finding a significant interaction, permutation tests would be suitable for pairwise comparisons. However, I also saw online tutorials (e.g. this blog post, or an older one from the same author) presenting how to use bootstrapping to obtain contrasts, confidence intervals and p-values after conducting a permutation ANOVA. How should I decide which is the suitable approach?
I'm confused about the two different recalculated 95% confidence interval (CI) results from two meta-analyses on the same article. The original article reported a geometric mean of 2.3 and a geometric standard deviation (SD) of 1.60. However, one meta-analysis calculated a 95% CI of 0.76-3.84 using these values, while another meta-analysis reported a 95% CI of 2.03-2.57. How is it possible for two different meta-analyses on the same original article to produce two different 95% confidence intervals? Thank you for reading this :)
I read the paper about transformers and fully understood every single piece of it, so now I'm implementing it from scratch in TensorFlow, without using any layer shipped with the library. The only missing part is how they intend to take a single tensor (batch size, time steps, embeddings) and give it to the multi-head module. From what I can tell, it seems that they take the embeddings of size $d_{model}$, then split them into $n$ heads, so each head gets a piece of the embedding of size $d_{model}/n$; however, I don't quite see the intuition for why this should work. Am I missing something? Are they just duplicating the input for each head instead?
Let $X_i$ ($i=1,\dots, n$) be a random sample from $X\sim \exp(\lambda_1)$ and $Y_j$ ($j=1,\dots, m$) be a random sample from $Y\sim \exp(\lambda_2)$, and let $X$ and $Y$ be independent. I am trying to find the generalized test of $H_0: \lambda_1=\lambda_2$ vs. $H_1: \lambda_1\neq \lambda_2$: find the distribution of the statistic and the critical region of the generalized test at level $\alpha$. My work: The likelihood function, for $\theta=(\lambda_1, \lambda_2)$, is $$ L(\theta)=\lambda_1^n\lambda_2^m \exp(-n\lambda_1\bar{X}-m\lambda_2\bar{Y}) $$ where $\bar{X}$ and $\bar{Y}$ are the sample means. Then I want to get the likelihood ratio statistic: \begin{align} \Lambda(x) &= \frac{\sup_{\theta=\theta_0}L(\theta\mid X)}{\sup_{\theta\neq\theta_0}L(\theta\mid X)} \end{align} The global MLEs are $\hat{\lambda}_1=\frac{1}{\bar{X}}$ and $\hat{\lambda}_2=\frac{1}{\bar{Y}}$. The restricted MLE under $\lambda_1=\lambda_2$ is $$\hat{\lambda}_0=\frac{m+n}{n\bar{X}+m\bar{Y}}$$ So we have $$ \Lambda=\frac{(m+n)^{m+n}}{n^nm^m}[\frac{n\bar{X}}{n\bar{X}+m\bar{Y}}]^n[1-\frac{n\bar{X}}{n\bar{X}+m\bar{Y}}]^m $$ So we take the statistic $$T=\frac{n\bar{X}}{n\bar{X}+m\bar{Y}}$$ To find the critical region, we need $$\Lambda=CT^n(1-T)^m\le \lambda_0$$ From here, I am not sure how to solve that. It seems that $CT^n(1-T)^m$ is decreasing if $T\le n/(n+m)$ and increasing if $T\ge n/(m+n)$ (since $g'(T)=T^{n-1}(1-T)^{m-1}[n-(m+n)T]$). So $\Lambda\le \lambda_0$ (we will reject $H_0$) is equivalent to $$c_1\le T\le c_2$$ for some constants $0<c_1<c_2$, and they satisfy $$c_1^n(1-c_1)^m=c_2^n(1-c_2)^m$$ For the test to have level $\alpha$, we also need $$ P(c_1\le T\le c_2)=\alpha $$ (the probability that we reject $H_0$). The distribution of $T$: under $H_0$ we have $\sum X_i\sim Gamma(n,1/\lambda)$ and $\sum X_i+\sum Y_j\sim Gamma(n+m,1/\lambda)$. Then $$ \frac{m+n}{n}T\sim \frac{\chi^2(2n)/(2n)}{\chi^2(2(m+n))/(2(m+n))}\sim F(2n, 2(m+n)). $$ But what is the critical region?
I am reading this paper and need to replicate what they did in Table 4: However, I am having trouble understanding what are local and global polynomial regressions. Could someone please explain? If you need more context, you can see the link and see page 8, right above Figure 5: "We do this exercise for the six global and the two local polynomial regressions"
Consider random variables $V_1$, $V_2$, $V_3$, and $V_4$. Then, define $U_2=V_1+V_2$ and $U_3=V_1+V_3$. That is, $U_2$ and $U_3$ are correlated by construction. My first question is whether the following equality can hold: $$ \Pr\left[U_2<V_4\mid V_4,U_3\right] = \Pr\left[U_2<V_4 \mid V_4\right] \tag1 $$ In my personal opinion, because $U_2$ and $U_3$ are correlated, the equality does not hold in general. But I am not sure. Second, I am considering an assumption below: $$\Pr\left[U_2<V_4+U_3\mid V_4,U_3\right] = \Pr\left[U_2<V_4 \mid V_4 \right]. \tag 2$$ Here, again, it is not clear to me whether this assumption is plausible, since $U_2$ and $U_3$ are correlated. Lastly, if assumption $(2)$ is plausible, does assumption $(2)$ imply equation $(1)$? Thank you.
I want to predict the concentration of a biomarker (continuous) according to the interaction of white blood cells with time (both continuous), considering medical units as a random effect that may impact both the intercept and the smooth of this interaction. Is the following syntax appropriate?

bam0 <- bam(bmk ~ s(units, bs="re") +
                  te(time, wbc, k=c(6,9), bs=c("cr","cr")) +
                  ti(units, time, wbc, k=c(6,9), bs=c("re","cr","cr")),
            data = dat, method = 'fREML', family = "gaussian", discrete = TRUE)

or should I use

bam1 <- bam(bmk ~ s(units, bs="re") +
                  te(units, time, wbc, k=c(6,9), bs=c("re","cr","cr")),
            data = dat, method = 'fREML', family = "gaussian", discrete = TRUE)

The advantage of bam0 is that the second term can be viewed as a contour plot because it does not include the random effect (see below). Nevertheless, isn't there redundancy of the specific interaction (i.e., main effect excluded) of time with wbc between the 2nd and the 3rd term? bam1 would seem more appropriate to me, but unlike bam0, it does not allow viewing the contour plot below, a priori because it includes the random effect (?).
The question of modeling the zero-inflated part of a negative binomial mixed effects model is a thorn in my side. I've read a lot of articles and blogs and it seems to be an issue that is largely glossed over, perhaps due to the fact that the choice about the zero-inflated part of the model is very specific to each research question. However, some articles/blogs emphasize that the zero-inflated part intends to model structural zeros, and they select variables very narrowly. Others discuss how it is a mixture of sampling and structural zeros, and model both the count and zero-inflated parts the same way. However, with the zero inflation assumed to be a mixture of structural and sampling zeros, why not just generally model both sides the same way? Is it more about parsimony? How do you choose which variables to include?
I am seeking help on how to perform Monte Carlo simulations on (potentially) correlated time series. I have a single product (e.g., men's wallets) that is sold out of seven stores in the same city. I have last year's daily sales of wallets for each store. Most days, a store sold zero, some days they sold one, fewer days they sold two... the histogram resembles an exponential distribution. I want to be able to use last year's sales to simulate the potential combined sales of the seven stores. (Assume no sales growth year on year.) I am wary of just sampling each store's daily sales independently and combining them into a new RV, as there may be some correlation, and some seasonality, in the sales. Ideally I'd be able to simulate a range of the combined sales for January 1, a range of the combined sales for January 2, ..., a range of the combined sales for December 31, in a way that takes into account potential correlation and seasonality of the sales. Bonus points if you can point me to some R functions that do this. It's important to note that I'm not looking to forecast a time series, but rather to understand the potential range of the 7 stores' sales in sum, based upon variability in the existing 7 stores' time series of sales.
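One way to preserve both cross-store correlation and seasonality is to resample whole days (or blocks of days) jointly across the seven stores rather than resampling each store independently. A rough sketch, assuming a data frame sales with columns date, store, and qty (these names, and the choice of resampling within a window around each calendar day, are assumptions):

set.seed(1)
n_sim <- 1000

# Daily totals across the 7 stores, kept as whole days so that
# within-day correlation between stores is preserved
daily <- aggregate(qty ~ date, data = sales, sum)
daily$doy <- as.POSIXlt(daily$date)$yday + 1

# For a target day of the year, resample observed daily totals from a
# +/- 14 day window around it (a crude way to respect seasonality)
simulate_day <- function(target_doy, window = 14) {
  d <- pmin(abs(daily$doy - target_doy), 365 - abs(daily$doy - target_doy))
  pool <- daily$qty[d <= window]
  sample(pool, n_sim, replace = TRUE)
}

sims_jan1 <- simulate_day(1)
quantile(sims_jan1, c(0.05, 0.5, 0.95))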
When studying the relationship between multiple time series, often the first step is to determine stationarity of the individual time series. Given one of the time series, one can check for stationarity by assuming an AR(p) model for some p, and then applying an ADF test. However, from the outset one expects that the series is related to some other series. This means that (1) one would expect significant autocorrelation in the residuals of the univariate analysis, and (2) the univariate model is misspecified. I'm wondering how to make sense of this situation then. Is there a way to take into account that the time series is expected to be driven partially by external forces when doing an ADF test? If so, how? And which, if any, problems can arise from failing to take this into account? Edit: To make this more precise, suppose that the data is generated by a VAR(p) process with 2 variables. If we then do a univariate AR(p) model fit on one of these two variables, the residuals need not be white noise, and can be autocorrelated. How does this affect the applicability of the ADF test? And, more generally, the ADF test is really a 'unit root' test, not a 'stationarity' test. Meaning that when the data generating process is not an AR(p), simply fitting an AR model to it anyway and testing whether that has a unit root does not seem to make sense to me... If the process is not even an AR, then the concept of a unit root is not relevant to begin with, so why test for it?
I received a reviewer report on a meta-analysis that I submitted which stated that a multivariate regression that I performed should instead be "a backward stepwise multivariate logistic regression analysis since Bsum is insignificant". Ignoring the fact that every source I know says that stepwise procedures should be avoided, what is Bsum and how do you test for it? I suppose it is the sum of coefficients, but I don't seem to find anything about it. And how did the reviewer find out it was not significant since they don't have access to the dataset itself to compute the model themselves?
I'm having trouble understanding why I get radically different results when I try to find the parameter of a Zipf distribution using the methods proposed by Clauset et al. (2009) as opposed to using the log-transformed rank and frequency data and fitting a linear regression. This is the code I'm using:

import numpy as np

# Drawing 100 random samples from a Zipf distribution with parameter 1.5
frequencies = np.random.zipf(1.5, 100)
n = len(frequencies)

# Continuous MLE
alpha_hat_cont = 1 + n / sum(np.log(frequencies))

# Approximation of discrete MLE
alpha_hat_discr = 1 + n / sum(np.log(frequencies/0.5))

# Fitting linear regression rank vs frequency to log-transformed data
ranks = np.arange(1, n+1)
slope = -np.polyfit(np.log(ranks), np.log(np.array(sorted(frequencies, reverse=True))), 1)[0]

print(alpha_hat_cont, alpha_hat_discr, slope)

One can also use the powerlaw package in Python, as such:

import powerlaw

fit = powerlaw.Fit(frequencies, discrete=True, xmin=1)
print(fit.alpha)

which gives the exact same result as alpha_hat_discr above (if the argument xmin is specified to be equal to 1). I know the results don't have to be the same (Clauset et al. suggest using MLE because OLS on the log-transformed data is a bad approximation), but these are radically different. For context, I'm trying to find the Zipf exponent of the rank-frequency distribution of a corpus. Thank you very much for your help!
1. Context

I have a dataset structured like this:

> str(dataset)
'data.frame': 52135 obs. of 9 variables:
 $ lat          : num 59 59 55 59 59 63 59 59 59 59 ...
 $ long         : num 16 16 12 16 16 14 15 16 15 15 ...
 $ date         : chr "1951-03-22" "1951-04-08" "1952-02-03" "1952-03-08" ...
 $ julian_day   : num 81 98 34 68 53 71 16 37 73 87 ...
 $ year         : int 1951 1951 1952 1952 1953 1953 1954 1954 1954 1954 ...
 $ decade       : chr "1950-1959" "1950-1959" "1950-1959" "1950-1959" ...
 $ time         : int 1 1 1 1 1 1 1 1 1 1 ...
 $ lat_grouped  : num 1 1 1 1 1 2 1 1 1 1 ...
 $ year_centered: 'AsIs' num -35 -35 -34 -34 -33 -33 -32 -32 -32 -32 ...

I have performed two quantile regression methods on the 3 groups of latitudes (1, 2 and 3) in my data. The first method is very common, using the rq function from the quantreg package. The second is adapted from Solution to the Non-Monotonicity and Crossing Problems in Quantile Regression by Saleh & Saleh (https://arxiv.org/abs/2111.04805). From what I have understood (I am not a mathematician), the algorithm is based on a constrained optimization approach, where the quantile regression line is constrained to be non-crossing by imposing a set of linear constraints on the parameters of the regression line. These constraints are formulated as a set of linear inequalities, which are then solved using a linear programming algorithm. The functions given in the paper are these:

logcosh <- function(x) log(cosh(x))  # helper not shown in the question; assumed to be the smooth log-cosh approximation used in the check function

minimize.logcosh <- function(par, X, y, tau) {
  diff <- y - (X %*% par)
  check <- (tau - 0.5) * diff + (0.5 / 0.7) * logcosh(0.7 * diff) + 0.4
  return(sum(check))
}

smrq <- function(X, y, tau) {
  p <- ncol(X)
  op.result <- optim(rep(0, p), fn = minimize.logcosh, method = 'BFGS',
                     X = X, y = y, tau = tau)
  beta <- op.result$par
  return(beta)
}

The regression was performed for tau = 1:99 / 100.

2. Results

As you can see, visually, there is a clear difference between the 2 methods. The smrq function from Saleh & Saleh (red, right column) seems to outperform the traditional rq approach (blue, left column). I have also plotted the intercepts, and as shown in Saleh & Saleh, smrq gets rid of the non-monotonic behaviour observed with rq. However, I wanted confirmation that smrq is better, so I performed a k-fold cross-validation. But here, rq seems to be much better than smrq.

3. Issues

I have in mind that the evaluation of a model has to relate to your research question and that deciding which modelling approach is better is context-dependent. However, the approach used by Saleh & Saleh is supposed to help deal with crossing problems. As they state, their paper "describes a unique and elegant solution to the problem based on a flexible check function that is easy to understand and implement in R and Python, while greatly reducing or even eliminating the crossing problem entirely. It will be very important in all areas where quantile regression is routinely used and may also find application in robust regression, especially in the context of machine learning." I must admit that I do not know where to go from all this. Some questions that I have in mind are: was the k-fold cross-validation relevant (why/why not)? Any idea of a relevant way to test rq against smrq other than a visual approach? I have tried to be concise and yet complete, which is not always easy. I would be happy to provide more details if needed - please just ask.
I am training a model with LightGBM, and I am getting an output like this: [LightGBM] [Info] Total Bins 1981 [LightGBM] [Info] Number of data points in the train set: 28632, number of used features: 15 [LightGBM] [Info] Start training from score 2.534713 [20] training's rmse: 4.7065 valid_1's rmse: 4.79156 [40] training's rmse: 4.45158 valid_1's rmse: 4.61878 [60] training's rmse: 4.32291 valid_1's rmse: 4.55663 [80] training's rmse: 4.24446 valid_1's rmse: 4.53266 [100] training's rmse: 4.18674 valid_1's rmse: 4.52748 [120] training's rmse: 4.13661 valid_1's rmse: 4.52959 [140] training's rmse: 4.09082 valid_1's rmse: 4.53327 [160] training's rmse: 4.04819 valid_1's rmse: 4.53705 [180] training's rmse: 4.00448 valid_1's rmse: 4.53943 [200] training's rmse: 3.96052 valid_1's rmse: 4.54488 [220] training's rmse: 3.9187 valid_1's rmse: 4.5526 [240] training's rmse: 3.87888 valid_1's rmse: 4.55612 [260] training's rmse: 3.83932 valid_1's rmse: 4.56151 [280] training's rmse: 3.8001 valid_1's rmse: 4.56596 [300] training's rmse: 3.76323 valid_1's rmse: 4.56899 [320] training's rmse: 3.72648 valid_1's rmse: 4.57288 [340] training's rmse: 3.68954 valid_1's rmse: 4.57776 [360] training's rmse: 3.65472 valid_1's rmse: 4.58399 [380] training's rmse: 3.62083 valid_1's rmse: 4.58822 [400] training's rmse: 3.58848 valid_1's rmse: 4.59112 [420] training's rmse: 3.55622 valid_1's rmse: 4.5942 [440] training's rmse: 3.52427 valid_1's rmse: 4.59629 [460] training's rmse: 3.49288 valid_1's rmse: 4.5998 [480] training's rmse: 3.46305 valid_1's rmse: 4.60098 [500] training's rmse: 3.4332 valid_1's rmse: 4.604 [520] training's rmse: 3.40395 valid_1's rmse: 4.60809 [540] training's rmse: 3.37517 valid_1's rmse: 4.61122 [560] training's rmse: 3.34607 valid_1's rmse: 4.61451 [580] training's rmse: 3.31775 valid_1's rmse: 4.61881 [600] training's rmse: 3.28888 valid_1's rmse: 4.62112 [620] training's rmse: 3.26158 valid_1's rmse: 4.62205 [640] training's rmse: 3.23438 valid_1's rmse: 4.62703 [660] training's rmse: 3.2086 valid_1's rmse: 4.63075 [680] training's rmse: 3.18285 valid_1's rmse: 4.63385 [700] training's rmse: 3.15612 valid_1's rmse: 4.63661 [720] training's rmse: 3.12905 valid_1's rmse: 4.64054 [740] training's rmse: 3.10485 valid_1's rmse: 4.64305 [760] training's rmse: 3.07981 valid_1's rmse: 4.6468 [780] training's rmse: 3.05544 valid_1's rmse: 4.65087 [800] training's rmse: 3.03116 valid_1's rmse: 4.65354 [820] training's rmse: 3.00748 valid_1's rmse: 4.65625 [840] training's rmse: 2.98355 valid_1's rmse: 4.65951 [860] training's rmse: 2.96051 valid_1's rmse: 4.66279 [880] training's rmse: 2.93658 valid_1's rmse: 4.66475 [900] training's rmse: 2.91332 valid_1's rmse: 4.66645 [920] training's rmse: 2.89188 valid_1's rmse: 4.67055 [940] training's rmse: 2.87073 valid_1's rmse: 4.67544 [960] training's rmse: 2.84836 valid_1's rmse: 4.67783 [980] training's rmse: 2.82654 valid_1's rmse: 4.68047 [1000] training's rmse: 2.80509 valid_1's rmse: 4.68257 [1020] training's rmse: 2.78431 valid_1's rmse: 4.68481 [1040] training's rmse: 2.76303 valid_1's rmse: 4.68846 [1060] training's rmse: 2.74237 valid_1's rmse: 4.69077 [1080] training's rmse: 2.7221 valid_1's rmse: 4.69346 [1100] training's rmse: 2.70158 valid_1's rmse: 4.69541 [1120] training's rmse: 2.68161 valid_1's rmse: 4.69885 [1140] training's rmse: 2.66289 valid_1's rmse: 4.70274 [1160] training's rmse: 2.6443 valid_1's rmse: 4.70658 [1180] training's rmse: 2.62565 valid_1's rmse: 4.70889 [1200] training's rmse: 2.60685 valid_1's rmse: 4.71195 From 
iteration [20] to [1200], the RMSE of the validation set has barely changed, whereas the training set's RMSE has improved quite a bit. I know this is a clear sign of overfitting, but is the fact that the validation set isn't improving at all a sign of something? Is the model learning relations that only exist in the training set somehow? I don't really understand how the model can be improving on the training set so much while the validation set remains static.
Can I use the same training and validation data to perform MLM and train the weights of a classification head? Here is the background of my specific problem: the problem is a binary classification problem using text data. I am using 'bert-base-uncased' from Huggingface. The entire data set was created using various data augmentation methods. The test set will be the original data set that was used to augment the data. My question is whether I can use the same data to do MLM and later train the AutoModelForSequenceClassification head.
In this paper (PMID 23123231; paywalled), the authors develop a logistic regression prediction model for Alzheimer's disease. In Table 3 the authors then present disease prediction results after applying the model to a validation set (the Class column specifies probability intervals into which the patients were split): I'm trying to replicate a statistical analysis I've found in a spreadsheet, where the creator of the spreadsheet has used the data from Table 3 alone to calculate the Positive Likelihood Ratio for different probabilities of disease predicted by the model. While I'm able to blindly follow the calculations in the spreadsheet, I'm struggling with understanding the reasoning and validity of the steps. All similar worked examples I've found assume access to the complete validation set, making it possible to calculate the sensitivity and specificity of the model from the true/false positive/negative rates, which doesn't seem to be the case here. Is there a general procedure for this type of analysis? Any help in the form of explaining the reasoning or guiding me to some good reading material is much appreciated.
I have a sampling process like the following: randomly select PSUs in one stage with equal probability, then use all SSUs of each selected PSU for estimation. Each SSU is associated with a statistic from an aggregate of sub-units. So, for example, we want to estimate a statistic about the number of insects per tree in orchards: let's say we sample 10 counties as our PSUs. All the farms from those counties are our SSUs. We then have aggregate statistics from each farm about the number of insects per tree, where the farms have different numbers of trees. We don't have individual observations for each tree, just the aggregate statistics. Because our statistic relies on the number of trees, which we don't have observations on directly, in a sense the tree is our sampling unit. So we might think of our effective sample size as being based on the design effect for measuring the variance of insects per tree. However, that is tricky. We only have one observation: the statistic for each farm. But farms may have wildly different numbers of trees contributing to the aggregate total. Would it make sense to think of there being one observation for each tree, where each tree within each farm is assumed to have the same rate of insects per tree? Since we do not have any sub-information from each farm, I'm not sure how else to structure my data for standard error estimation and analysis.
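As one possible illustration of treating the farm-level aggregates as cluster totals, a ratio estimator of insects per tree with counties as the PSU stage can be set up with the survey package. The column names (county, insects_total, trees) and the assumption that each farm reports a total insect count and a tree count are mine, not from the question:

library(survey)

# farms: one row per farm (SSU), with the county (PSU) it belongs to,
# its total insect count and its number of trees (assumed columns)
des <- svydesign(ids = ~county, weights = ~1, data = farms)  # equal weights for the sketch

# Ratio estimator of insects per tree; the variance reflects the
# clustering of farms within the sampled counties
svyratio(~insects_total, ~trees, design = des)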
I need to test whether there is a significant difference in leakage between prototypes. I've tried Kruskal-Wallis followed by post hoc pairwise Wilcoxon and Dunn-Bonferroni tests, and the post hoc tests always say that there is no significant difference between any of them except A.4 and C.4, but that can't be right... For example, how can there not be a significant difference between A.4, with a mean of 27% leakage, and C.2, with a mean of 1.13% leakage?
I have two data columns, say $A$ and $B$. I form a new column $Y = A + B$, and $Y$ is the dependent variable. I have another independent data column $X$. I need to fit a regression of the form $Y = c \cdot X^b$, where $c$ and $b$ are constants to be estimated from the regression. I used a power fit to find $c$ and $b$, checking for a good $R^2$ and adjusted $R^2$, $P(>\mid t \mid) < 0.06$, and Cook's distance $< 0.16$ (I took these as the conditions for deciding whether the model is good). With $Y_{pred}$ as the predicted $Y$ values, and since I know the $A$ and $B$ values, I compute $A_{pred} = Y_{pred} - B$ and want $A_{pred}$ to be very close to the actual $A$; that is, if I regress $A$ on $A_{pred}$, I again look for a good $R^2$ and adjusted $R^2$, $P(>\mid t \mid) < 0.06$, and Cook's distance $< 0.16$. What I actually have is $A = c \cdot X^b + d \cdot B$, and I need to find suitable values of $c$, $b$ and $d$ using regression; I really don't know how to approach this. I assumed $d = -1$ and tried the above. Could someone guide me on the best way to approach this? Any help would be appreciated.
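A minimal sketch of fitting all three constants at once with nonlinear least squares, assuming A, B and X are numeric vectors in a data frame called dat (the data frame name and the starting values are assumptions):

# Nonlinear least squares for A = c * X^b + d * B
fit <- nls(A ~ c * X^b + d * B,
           data  = dat,
           start = list(c = 1, b = 1, d = -1))  # d = -1 used only as a starting guess

summary(fit)          # estimates and approximate standard errors for c, b, d
A_pred <- fitted(fit) # predicted A for comparison with the observed values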
I have hypothesised equivalence among 3 repeated measures. The data are such that a nonparametric approach would be needed. What I have considered, but not necessarily know how to implement correctly: Compare confidence intervals at 100 * (1-2 * alpha)% for a Friedman ANOVA. This may be represented in a graph. Attempt something like a two one-sided t-test procedure, but for a Friedman ANOVA. Could someone point me to a resource where I could figure out how to do this?
I know this is technically an assignment question. The training data contains 9000 observations and 900 features. I have to build a model to predict the testing data, which contains roughly 5,000 observations and the same number of features as the training data. I am wondering, since there are so many features, should we use PCA for feature selection? I tried random forest and lasso regression, but they are so slow. I am not hitting good accuracy using the features given by PCA. Should I use random samples for random forest or lasso regression? For modeling, I notice that SVM outperforms naive Bayes, neural networks, and linear discriminant analysis. Should I just use SVM over an ensemble method, because of the multiclass data? I am getting roughly 88 percent correct with leave-one-out on the training data, but 25 percent correct on the testing data. For binary data, I was able to get good accuracy (not perfect) by using lasso for feature selection and an ensemble of logistic regression, neural network, SVM, QDA, and LDA.
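For reference, a compact sketch of the PCA-then-SVM pipeline described above, with cross-validation on the training data only (the object names, the number of components, and the use of e1071 are assumptions):

library(e1071)

# x_train: 9000 x 900 numeric matrix, y_train: factor of class labels (assumed)
pca <- prcomp(x_train, center = TRUE, scale. = TRUE)
k   <- 50                                  # number of components, to be tuned
z_train <- pca$x[, 1:k]

# 5-fold cross-validated accuracy of an RBF SVM on the PCA scores
svm_cv <- svm(x = z_train, y = y_train, kernel = "radial", cross = 5)
svm_cv$tot.accuracy

# Project the test data with the *training* PCA before predicting
z_test <- predict(pca, newdata = x_test)[, 1:k]
pred   <- predict(svm_cv, z_test)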
Is the bias-variance tradeoff a thing for quantile regression? Can I assume the error of a quantile estimate follows a certain distribution (e.g., estimated quantile minus true quantile follows a normal distribution)?
Let $Z$ have a uniform distribution on $[-0.5, 0.5]$. Let $X$ be a continuous random variable which is independent of $Z$. Let $$Y = \lfloor X + Z \rfloor - Z.$$ I would like to ask how to compute the marginal density function and the joint density function of $Y$ and $X-Y$. I have zero idea on how to deal with the floor function. I only know that $$\lfloor x + n \rfloor = \lfloor x \rfloor + n$$ when $n$ is an integer. However, $Z$ is in the range $[-0.5, 0.5]$.
I'm looking at making unbiased predictions of baseline and treated survival from a parametric survival curve. I have a matched sample to try to control for observed confounding and wanted to use doubly robust estimates in my treatment effect model by controlling for covariates. I was wondering whether it is valid to estimate, say, an exponential survival curve with a treatment variable and some control variables, then take the treatment effect parameter and intercept from this model into a pexp function in R, make predictions with just these parameters, and plot a marginal exponential survival curve, leaving out the other predictors. I'm not sure if this breaks any statistical rules. Edit: I'm trying to do something like the below. I have estimated the conditional treatment effect and estimated parameters for treatment and covariates. Now I just want to plot survival curves for the baseline and treatment with these parameters only, to get the marginal distribution. The idea was to remove any bias caused by the covariates through the GLM, then predict what survival looks like for baseline and treatment only. I was wondering if this procedure is valid.

t <- seq(0, max(pbc$time), 50)
boot_fit_exp <- flexsurvreg(Surv(time, status) ~ trt + age + sex, data = pbc, dist = "exp")
coef_boot_fit_exp <- coef(boot_fit_exp)
hazard <- exp(coef_boot_fit_exp[1] + coef_boot_fit_exp[2])
# Calculate survival probability
surv_prob_baseline <- 1 - pexp(t, boot_fit_exp$res[1])
surv_prob_trt <- 1 - pexp(t, as.vector(hazard))

Any help on this would be really appreciated, thanks
For some reason my model keeps showing up as a poor model when checking its accuracy through the confusion matrix and the AUC/ROC. This is the model I am stuck with after doing backward elimination. This is the logistic output:

Call:
glm(formula = DEATH_EVENT ~ age + ejection_fraction + serum_sodium + time,
    family = binomial(link = "logit"), data = train, control = list(trace = TRUE))

Deviance Residuals:
    Min       1Q   Median       3Q      Max
-2.1760  -0.6161  -0.2273   0.4941   2.6827

Coefficients:
                   Estimate Std. Error z value Pr(>|z|)
(Intercept)       15.741338   7.534348   2.089  0.03668 *
age                0.063767   0.018533   3.441  0.00058 ***
ejection_fraction -0.080520   0.019690  -4.089 4.33e-05 ***
serum_sodium      -0.111499   0.053639  -2.079  0.03765 *
time              -0.020543   0.003331  -6.167 6.95e-10 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

This is the confusion matrix output:

glm.pred   Survived  Dead
       0         46    10
       1          5    14

The AUC is showing up as 0.178.

library(pROC)
library(ROCR)  # prediction() and performance() used below come from ROCR, not pROC

# Calculate predicted probabilities for test set
glm.probs <- predict(glm9, newdata = test, type = "response")

# Create prediction object for test set
pred <- prediction(glm.probs, test$DEATH_EVENT)

# Create ROC curve for test set
roc.perf <- performance(pred, measure = "tpr", x.measure = "fpr")

# Plot ROC curve for test set
plot(roc.perf, legacy.axes = TRUE, percent = TRUE,
     xlab = "False Positive Percentage", ylab = "True Positive Percentage",
     col = "#3182bd", lwd = 4, print.auc = TRUE)

# Add AUC to ROC curve
auc <- as.numeric(performance(pred, measure = "auc")@y.values)
text(x = 0.5, y = 0.3, labels = paste0("AUC = ", round(auc, 3)), col = "black", cex = 1.5)
abline(a = 0, b = 1)

How can I get past this problem? I checked the classes and it showed that there is a data imbalance, but I don't know what to do with this knowledge.
I am working on a binary classification problem: there is a website where a user can do 2 types of actions, non-target actions and target actions. I have a large dataset with columns: utm_source, utm_medium, utm_keyword, ..., target. There is one row per session. utm_source, utm_medium, utm_keyword and the others are parameters of the session. target = 1 if the user performed at least 1 target action during the session, and target = 0 otherwise. My task is, given the parameters of a session, to predict whether the user will perform at least 1 target action during this session. I have to achieve ROC-AUC > 0.65, if this makes sense. The dataset contains 1732218 rows total, of which 50314 (2.9%) rows have target = 1. But there are many sessions with identical parameters (and with different session_id in the raw data, but naturally I have dropped session_id). So if I remove duplicates from the dataset, it will contain 398240 rows total, of which 24205 (1.4%) rows have target = 1. The question is: should I remove these duplicates, and when? My current approach is: the original dataset with duplicate rows represents the natural distribution of the data, so I have to test my model on a part of the original dataset; I can train my model on an explicitly balanced dataset, and duplicate removal can be part of this balancing. But there are people (on the course where I am studying now) who removed all duplicates before the train-test split, and these people have successfully defended this task and achieved the certificate... So which approach is right? Links to ML literature are welcome.
Let's say I train a model and it has an RMSE of 2.5. Does this mean that, on average, my prediction will be 2.5 away from the true value? Or does some scaling need to be done in order to get this value's magnitude to be in line with the target variable's magnitude?
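A tiny numeric illustration of the distinction, using made-up residuals: RMSE is already in the units of the target, but it is not the plain average absolute distance (that is the MAE); RMSE weights large errors more heavily.

errors <- c(1, 1, 1, 5)          # hypothetical prediction errors

mae  <- mean(abs(errors))        # 2.0  -> "on average 2.0 away"
rmse <- sqrt(mean(errors^2))     # about 2.65 -> same units, but pulled up by the large error

c(MAE = mae, RMSE = rmse)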
Is there any mathematical result that states that the Wilcoxon-Mann-Whitney (WMW) test is optimal in some sense, for a specific testing problem that is a subproblem of the general problem the WMW test is testing, say against an alternative of two specific distributions where one is stochastically larger than the other, maybe a location shift model with specified distributions but maybe something else? I have in mind maximum power for given level, however I'd be interested in other types of optimality as well. Also I suspect that any result would be asymptotic, maybe of the type "locally asymptotically optimal". I had a look at the Hajek, Sidak, Sen book Theory of Rank Tests, but I don't think it has such a result. There is an exercise that states efficiency 1 of the Wilcoxon signed rank test for one sample in a specific situation, also mentioned here: https://www.jstor.org/stable/43686636 I am however not aware of anything like this for the two-sample test, and I'd like to know whether anything exists.
I am performing different Active Learning experiments using several query strategies. One of the query strategies is the "Ranked batch-mode sampling" from Python's modAL library, which is an implementation of Cardoso et al.'s ranked batch-mode sampling. The query strategy computes the scores for each data instance of the unlabelled pool $x \in \mathcal{D}_{\mathcal{U}}$ using the formula $$ score(x) = \alpha (1 - sim(x, \mathcal{D}_{\mathcal{L}})) + (1 - \alpha)\phi(x), $$ where $\alpha = \frac{n}{n + p}$, $n$ is the size of $\mathcal{D}_{\mathcal{U}}$, $p$ is the size of $\mathcal{D}_{\mathcal{L}}$, $\phi(x)$ is the uncertainty of predictions for $x$, and $sim$ is a similarity function, for instance cosine similarity. The similarity function measures how well the space is explored near $x$ and thus gives a higher score to those data instances that are less similar to the ones that have already been labelled. In each iteration, the scores are computed for all the instances in the unlabelled pool. The instance with the highest score is removed from the pool and the scores are recalculated until $B$ instances have been selected. My computation of the time complexity is as follows: The time complexity of computing cosine similarity between two vectors of size $d$ is $O(d)$. Therefore, the time complexity of computing the cosine similarity between every unlabelled sample $x \in \mathcal{D}_{\mathcal{U}}$ and all the labelled samples in $\mathcal{D}_{\mathcal{L}}$ is $O(n \cdot p \cdot d)$. Overall, the time complexity of querying a batch of size $B$ is $O(B \cdot n \cdot p \cdot d)$. Are my computations correct? A plot of the time required by this query strategy vs. the size of the labelled dataset $\mathcal{D}_{\mathcal{L}}$ shows an exponential curve, not a linear one.
I want to check for multicollinearity between categorical independent variables. Which test should I use? First, I want to examine the relationship between the willingness to participate in medical decision making (dependent variable, 2 categories) and education (independent variable). Later, I would do a multiple binary logistic regression (adjusted). I have education (5 categories), paid work (2 or 3 categories) and household income (2 categories). Income was measured using a 5-point Likert scale.
I'm running experiments to evaluate language models on Brazilian Portuguese datasets. I've split each dataset into 10 parts, and I want to use cross-validation to determine the model's performance. But the thing is, I also want to use hyperparameter search to determine the best parameters for the model. I've read that you should do hyperparameter search within each fold of the cross-validation, and in other places, that you have to use a part of the dataset that will not be used in the cross-validation. Can someone please give me a hand on this one? I would also appreciate it if someone has some academic articles that describe the process :)
(Using R) - this is my first time posting a stats question online, so please let me know if I'm on the wrong forum or haven't provided enough information and I'll do my best to fix it! About the data and my goal here: Best analogy I can think of is that it's a language course and the final exam is a long conversation. Four times during the course I gather reports on student performance (for example, handwriting, speed of writing, reading ability). I want to know if I can predict pass or failure for the course based on these four reports. I've created a demo dataset here: set.seed(22) reportsdata <- structure(list(Student = c(1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 5L, 5L, 5L, 5L, 6L, 6L, 6L, 6L, 7L, 7L, 7L, 7L, 8L, 8L, 8L, 8L, 9L, 9L, 9L, 9L), TermReport = c("A", "B", "C", "D", "A", "B", "C", "D", "A", "B", "C", "D", "A", "B", "C", "D", "A", "B", "C", "D", "A", "B", "C", "D", "A", "B", "C", "D", "A", "B", "C", "D", "A", "B", "C", "D"), Handwriting = c(sample(x = 1:5, size = 36, replace = TRUE)), Speedwriting = c(sample(x = 1:5, size = 36, replace = TRUE)), Reading = c(sample(x = 1:5, size = 36, replace = TRUE)), Loudness = c(sample(x = 1:5, size = 36, replace = TRUE)), Enthusiasm = c("5", "5", "3", "5", "2", "4", "3", "NA", "1", "4", "3", "3", "NA", "2", "1", "1", "1", "2", "2", "NA", "3", "2", "4", "2", "4", "3","5", "2", "3", "1", "2", "3", "5", "4", "NA", "5"), EndCoursePassFail = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L)), class = "data.frame", row.names = c(NA, -36L)) Note that their score at the end of the course has been retroactively applied to all their entries, though whether they would pass (1) or fail (0) was not known at the time. My real dataset has the same structure, but contains a little over 600 observations and 30 variables (which have been filtered to those that contain less than 30% NA entries, e.g. sometimes could not get a score for enthusiasm). So far I've been trying a mixed effects logistic regression with student and trial as random effects (Bobyqa & Nelder_Mead are the only optimisers that don't fail, I need to use ~ . syntax as there are too many variables to list and for reproducibility). E.g.: model <- glmer(EndCoursePassFail ~ . -Student -TermReport + (1|Student) + (1|TermReport), data = reportsdata, family = binomial, control = glmerControl (optimizer = "bobyqa", optCtrl=list(maxfun=1e6)), nAGQ = 1, na.action=na.exclude) For both my original dataset and for the sample data provided above when the seed is set to 22, my model produces convergence errors: Warning messages: 1: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, : Model failed to converge with max|grad| = 0.0477113 (tol = 0.002, component 1) 2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, : Model is nearly unidentifiable: very large eigenvalue - Rescale variables? But, my sample dataset shows the following errors if seed is set to 1: boundary (singular) fit: see help('isSingular') I think my issue could be due to perfect separation between Student and Course Result since the result had been retroactively added. The question is - where to go from here? Some thoughts: I could average student scores across term reports somehow, so that I no longer need repeated measures and therefore don't get perfect separation. But this seems crude and feels like looking at only the tip of the iceberg. 
Looking at other answers for similar issues, I might need to switch to trying a penalised likelihood from blme (R package); however, I don't understand it well enough yet to know whether (and if so, how) this sort of perfect separation can be dealt with using blme. Or, I could pretend that there aren't any repeated measures and run the model as though there weren't any, but of course this is also crude and ignores a lot of potentially useful information provided by the data. Also, in case it is relevant: because there are so many scores to include in the full dataset, I want to later use stepAIC (or a loop, or equivalent) to roughly identify the 'best' model.
I am working with a distribution with the following density: $$f(x) = - \frac{(\alpha+1)^2 x^\alpha \log(\beta x)}{1-(\alpha + 1)\log(\beta)}$$ and CDF $$\mathbb{P} (X \leq x) = \int_0^x - \frac{(\alpha+1)^2 t^\alpha \log(\beta t)}{1-(\alpha + 1)\log(\beta)} \, dt = \frac{x^{\alpha+1}((\alpha+1)(\log(\beta x))-1)}{(\alpha+1)\log(\beta)-1}$$ with $x \in (0,1), \beta \in (0,1)$ and $\alpha >-1.$ How can I generate random samples from this distribution in Python/R? Which books can I use to learn about the simulation of random variables and random numbers? Any help is appreciated.
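Since the CDF above is available in closed form but is not easy to invert analytically, one option is inverse-transform sampling with a numerical root finder; a sketch in R (the parameter values are arbitrary examples):

# CDF as given above
cdf <- function(x, alpha, beta) {
  x^(alpha + 1) * ((alpha + 1) * log(beta * x) - 1) / ((alpha + 1) * log(beta) - 1)
}

# Inverse-transform sampling: solve cdf(x) = u for each uniform draw u
rdist <- function(n, alpha, beta) {
  u <- runif(n)
  sapply(u, function(ui) {
    uniroot(function(x) cdf(x, alpha, beta) - ui, interval = c(1e-12, 1))$root
  })
}

set.seed(1)
samples <- rdist(10000, alpha = 0.5, beta = 0.3)
hist(samples, breaks = 50, freq = FALSE)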
So I am currently studying the relationship between university rankings (my dependent variable) and academic freedom (my independent variable, which can take values between 0 and 1). I have a panel of hundreds of universities over 17 years. At first, I was hesitating between a linear model (using OLS regression with fixed effects) and an ordinal model (since my dependent variable is a ranking going from 1 to 600). Some research showed me that an ordinal regression model might not be the best choice, for two reasons: (1) non-linear regressions can be problematic when there are a lot of fixed effects, and (2) an ordinal regression model might not be optimal when there is a high number of different ordinal categories (my model has 105 categories: one for each number from 1 to 100, and then one each for 200, 300, 400, 500 and 600). So, knowing this, I have two questions regarding what I should do next. Question 1: is what I read in my research true? Is it really not a good idea to use an ordinal regression model here? Question 2: if I cannot use an ordinal regression, what could I do so that my regression takes into account that losing or gaining ranks in the top-100 part of the ranking is far more impactful than at the bottom? What I mean by this, in the context of university rankings, is that going from 80th place to 30th place should be considered much more of a big deal than going from 500th place to 450th.
I've recently started learning about Bayesian statistics, and I came across this very nice answer by Xi'an https://stats.stackexchange.com/a/129908/268693, which [in my slight paraphrasing] says the following: Given a family of distributions $\{f(\cdot|\theta): \theta \in \Theta \}$ defined on a sample space $\mathcal{X}$, and a prior distribution $\pi$, we require that $$ \int_{\Theta} f(x|\theta) \pi(\theta) \,d\theta < \infty \quad \text{ for all } x \in \mathcal{X}; $$ otherwise, we do not obtain a valid posterior distribution $\pi(\theta \mid x)$, and so Bayesian inference is not possible. This leads me to the following question: What are some families of distributions $\{f(\cdot|\theta): \theta \in \Theta \}$ that one might encounter in practice for which there exists a set $E \subset \mathcal{X}$ of positive measure such that $$ \int_{\Theta} f(x|\theta) \,d\theta = + \infty \quad \text{ for all } x \in E? $$ In other words, I'm curious if there is a family of distributions for which a uniform prior leads to an "improper posterior" such that the problem cannot be remedied by re-defining the $f(\cdot|\theta)$'s on a set of measure zero. Here are a couple of examples I came up with: The Cauchy distribution: $f(x|\theta) = \frac{1}{\theta \pi [1 + (x/\theta)^2 ]}, \; x > 0, \; \theta > 0$. In this case, $\int_{0}^{\infty} f(x|\theta) \,d\theta = + \infty$ for all $x > 0$. A rather contrived example: For each $\theta \in \Theta := (1,\infty)$, let $f(x|\theta) := \frac{1}{\theta^x}$ for each $x \in \mathcal{X} := (0,\infty)$. Then $$ \int_{\Theta} f(x|\theta) \,d\theta := \begin{cases} \frac{1}{x-1} & \text{ if } x > 1 \\ \infty & \text{ if } 0 < x \leq 1 \end{cases} $$ (Would $f(x|\theta) = 1/\theta^x$ ever be used in practice?) Are there any other such examples? I am especially interested in examples such as the last one, where the set $\{x \in \mathcal{X}: \int_{\Theta} f(x|\theta) \,d\theta = + \infty \}$ has positive and finite measure.
I am trying to implement the Attention Is All You Need paper in PyTorch without looking at any code. I'm struggling to understand how I get the Keys and the Values from the output of the top encoder. Do I learn 2 linear projections which take as input the output of the top encoder (which is a single matrix) and output the keys (or values) matrix? I don't quite understand it from the original paper, nor from The Illustrated Transformer. I understand how to get the Queries; they are obtained the same way as in the encoder self-attention layer: via a learned linear projection (linear layer).
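To make my current understanding concrete, here is a minimal sketch of how I picture the encoder-decoder ("cross") attention projections: the queries come from the decoder state, while the keys and values are two separate learned linear projections of the same encoder output. The dimensions below are placeholders, and I am not sure this matches the paper's exact implementation:

import torch
import torch.nn as nn

d_model = 512  # placeholder model dimension

w_q = nn.Linear(d_model, d_model)  # query projection (applied to the decoder state)
w_k = nn.Linear(d_model, d_model)  # key projection (applied to the encoder output)
w_v = nn.Linear(d_model, d_model)  # value projection (applied to the encoder output)

decoder_state = torch.randn(2, 10, d_model)   # (batch, target_len, d_model)
encoder_output = torch.randn(2, 15, d_model)  # (batch, source_len, d_model)

q = w_q(decoder_state)
k = w_k(encoder_output)
v = w_v(encoder_output)

attn = torch.softmax(q @ k.transpose(-2, -1) / d_model**0.5, dim=-1)
out = attn @ v  # shape (2, 10, d_model)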
In short: MCMC is used to construct posterior distributions for parameters of central tendency and all parameters used in the formula for this central tendency. I only care about the parameters of central tendency which will readily converge. However, several upstream parameters do not converge. Should I worry about this? More details: I work with a Hierarchical Bayesian Model in which an exponential fit is applied to a dose-response curve constructed for each sample of a set of 100+ samples. Given the fitting parameters, a dose is determined from a measured response. These 100+ doses are then used to calculate parameters of central tendency. I use MCMC to derive posteriors for all these parameters. I only care about the parameters of central tendency, and these will readily converge by visual inspection and Rubin & Gelman diagnostics. But many of the exponential fit parameters will not converge. Should I be worried about this?
I have noticed that Logistic Regression (https://en.wikipedia.org/wiki/Logistic_regression) is a model that is used extensively for both Regression problems and Classification problems. When used for Regression, the main purpose of Logistic Regression appears to be to estimate the effect of a predictor variable on the response variable. For example, here are some examples in which Logistic Regression is used for Regression problems: Modelling of binary logistic regression for obesity among secondary students in a rural area of Kedah: https://aip.scitation.org/doi/pdf/10.1063/1.4887702 A logit model for the estimation of the educational level influence on unemployment in Romania: https://mpra.ub.uni-muenchen.de/81719/1/MPRA_paper_81719.pdf A logistic regression investigation of the relationship between the Learning Assistant model and failure rates in introductory STEM courses: https://stemeducationjournal.springeropen.com/articles/10.1186/s40594-018-0152-1 When used for Classification, the main purpose of Logistic Regression appears to be to estimate the probability of the response variable assuming a certain value given an observed set of predictor variables. For example, here are some examples in which Logistic Regression is used for Classification problems: Using logistic regression to develop a diagnostic model for COVID‑19: A single‑center study: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9277749/pdf/JEHP-11-153.pdf Logistic regression technique for prediction of cardiovascular disease: https://www.sciencedirect.com/science/article/pii/S2666285X22000449 A Study of Logistic Regression for Fatigue Classification Based on Data of Tongue and Pulse: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8917949/pdf/ECAM2022-2454678.pdf Based on surveying such articles, I noticed the following patterns: When Logistic Regression is being used for Regression problems, the performance of the Regression Model seems to be primarily measured using metrics that correspond to the overall "Goodness of Fit" and "Likelihood" of the model (e.g. in the Regression Articles, the Confusion Matrix is rarely reported in such cases). When Logistic Regression is being used for Classification problems, the performance of the Regression Model seems to be primarily measured using metrics that correspond to the ability of the model to accurately classify individual subjects, such as "AUC/ROC", "Confusion Matrix" and "F-Score". The interesting thing is that regardless of whether you are working on a Regression problem or a Classification problem - if you do decide to use Logistic Regression, in both cases you can calculate Classification metrics such as the Confusion Matrix. Based on these observations, I have the following question: My Question: Suppose I am using Logistic Regression in a regression problem (e.g. estimating the effect of predictors such as age on employment vs unemployment) and the model seems to be performing well (e.g. statistically significant model coefficients, statistically significant overall model fit, etc.). Even though I am technically still able to calculate Classification metrics such as the Confusion Matrix, F-Score and AUC/ROC - am I still obliged to measure the ability of this Regression model to successfully classify individual observations based on metrics such as ROC/AUC? Or am I not obliged to do this since I am not working on a Classification problem?
I feel that it might be possible to encounter a situation/dataset in which the goal was to build a Logistic Regression model for a Regression problem - and the resulting model might have good performance metrics used in regression problems, but might have poor ROC/AUC values. In such a case, is this a good Logistic Regression model as it performs well for the regression problem as intended - or is it a questionable model as it is unable to perform classification at a satisfactory level?
Let $\{f(\cdot|\theta): \theta \in \Theta \}$ be a family of pdfs and let $\pi: \Theta \to \mathbb{R}$ be a prior. According to Bayes' theorem (as stated in, e.g., Casella and Berger), the posterior distribution $\pi(\cdot|x)$ is given by $$ \pi(\theta|x) = \frac{f(x|\theta) \pi(\theta)}{\int_{\Theta} f(x|\theta) \pi(\theta)\,d\theta}. $$ My questions: How do we define the posterior distribution for values of $x$ such that $\int_{\Theta} f(x|\theta) \pi(\theta)\,d\theta = 0$ ? Or do we just leave it undefined? If my reasoning is correct, $\int_{\Theta} f(x|\theta) \pi(\theta)\,d\theta > 0$ provided that the set $E_x := \{\theta \in \Theta: f(x|\theta)\, \pi(\theta) > 0 \}$ has positive measure. Is there anything else that we can say about the set of $x$ for which $\int_{\Theta} f(x|\theta) \pi(\theta)\,d\theta > 0$ ? If one is working with a model such that $\int_{\Theta} f(x|\theta) \pi(\theta)\,d\theta = 0$ for certain values of $x$ in the sample space, does this indicate that there is a problem with the model (either in our choice of pdf $f(\cdot|\theta)$ or our choice of prior $\pi$) ?
How to deduce the following general result: $$ \operatorname{Var}(\max_i X_i) \leq \sum_i \operatorname{Var}(X_i)\,,$$ where $X_1,\dots,X_N$ are any random variables (maybe not independent) with finite second moment. I have seen this result many times, but I don't know how to prove it. Thank you for your help.
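This is not a proof, but a quick numerical sanity check of the inequality for correlated, non-identically distributed variables (the covariance matrix below is arbitrary):

import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[2.0, 1.5, -0.5],
                [1.5, 3.0, 0.2],
                [-0.5, 0.2, 1.0]])
X = rng.multivariate_normal(mean=[0.0, 1.0, -2.0], cov=cov, size=1_000_000)

lhs = X.max(axis=1).var()   # Var(max_i X_i), estimated by simulation
rhs = cov.diagonal().sum()  # sum_i Var(X_i)
print(lhs, rhs, lhs <= rhs)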
I have a file (corresponding to one and only one person) with n columns (or classes) containing different data: Name (class 1), address (class 2), number (class 3), username (class 4), etc. (class n). I wish to classify incoming data as one of these classes. Example: if I get the incoming data "John Smith", my ML model should classify it as "Name". If I get "Ronald Reagan Avenue 3456", the output should be "address", and so on. I wonder if the naive Bayes model in ML would be suitable for such a task? One thing about the naive Bayes model makes me skeptical: it assumes independence between features. Would that be a problem in this case?
Decision Tree I have found the misclassification rates for all the leaf nodes. samples = 3635 + 1101 = 4736, class = Cash, misclassification rate = 1101 / 4736 = 0.232. samples = 47436 + 44556 = 91992, class = Cash, misclassification rate = 44556 / 91992 = 0.484. samples = 7072 + 15252 = 22324, class = Credit Card, misclassification rate = 7072 / 22324 = 0.317. samples = 1294 + 1456 = 2750, class = Credit Card, misclassification rate = 1294 / 2750 = 0.470. samples = 7238 + 22295 = 29533, class = Credit Card, misclassification rate = 7238 / 29533 = 0.245. I'm finding it difficult to find the AUC value from here. Please help me out with this. I will be grateful.
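I am not sure whether something like the following sketch is the intended approach: it scores every sample with its leaf's class proportion and assumes the first count in each leaf is Cash and the second is Credit Card (as the misclassification rates above suggest).

from sklearn.metrics import roc_auc_score

# (cash_count, credit_card_count) for each leaf, taken from the numbers above
leaves = [(3635, 1101), (47436, 44556), (7072, 15252), (1294, 1456), (7238, 22295)]

y_true, y_score = [], []
for cash, credit in leaves:
    p_credit = credit / (cash + credit)    # leaf-level probability of "Credit Card"
    y_true += [0] * cash + [1] * credit    # 1 = Credit Card, 0 = Cash
    y_score += [p_credit] * (cash + credit)

print(roc_auc_score(y_true, y_score))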
I'm working on a model for survival prediction and using the concordance index to evaluate the results (https://medium.com/analytics-vidhya/concordance-index-72298c11eac7). I want to show that my model is better than a baseline model using my test set. My understanding is that for a typical test metric (i.e. prediction of housing prices, for example) one could use the Wilcoxon signed-rank test to see if my model is statistically significantly better than a baseline model. However, in this case, the concordance metric doesn't have any meaning for a single sample-- it's a description of how well the model can discriminate ordering amongst a set of inputs. Therefore, is doing the following valid: divide the test set into batches, get each model's prediction on each batch, and run the Wilcoxon signed-rank test on the predictions on the batches? Perhaps also vary the batch size and shuffle the samples and run the test again to verify the result still holds? Clarifications: I should have clarified that I'm not using the Harrell C-index but the adjusted Antolini index. I know the Harrell C-index is not often used anymore. The baseline model is a machine learning model that provides decent values. My model is also the same type of machine learning model but trained differently. The two models have the exact same architecture, and therefore the same set of predictors (just different weights on the predictors since the two models were trained differently).
I'm trying to make sense of some results I ran on a weighted regression with log-transformed IV1 and DV, and IV2, which is a factor variable. IV2 is restaurant price range, categorized as 1 (low price range) or 4 (high price range), to be exact. I'm trying to interpret the effect of IV2 on the DV. The coefficient for the price range is 0.657: log(DV) = 2.99 + 0.657(price4) + 1.11(log_IV1) - 0.190(log_IV1 * price4) This seems to mean that with a higher price range of 4, the DV will increase (exp(0.657)-1)*100%. Is this correct? I'm feeling confused because when I run a regression where IV1 and DV are NOT log-transformed, results will say that the DV will decrease with a higher price range of 4: DV = 306.87 - 47.44(price4) + 576.77(IV1) - 175.67(IV1 * price4) How can I explain this difference in results?
What's the difference between Locality Preserving Projection (LPP) and Principal Component Analysis (PCA)? This is our data; it's a 3D plot. Here I use LPP and PCA to reduce the 3D data to 2D data, and they give different results. I know that PCA reduces the dimension along the directions of maximum variance, but what about LPP? [Figure: Locality Preserving Projection (LPP) result] [Figure: Principal Component Analysis (PCA) result]
can someone help me please? How can I see the coeficient for municipio6:ano0, considering that I don't want to have a reference level in my model for municipio variable (that's why I set intercept to 0). > m1 <- glm.nb(casos ~ 0 + municipio + ano0 + municipio:ano0 + offset(log(populacao)), data = dataset) > summary(m1) Call: glm.nb(formula = casos ~ 0 + municipio + ano0 + municipio:ano0 + offset(log(populacao)), data = dataset, init.theta = 1.202601993, link = log) Deviance Residuals: Min 1Q Median 3Q Max -1.6965 -1.0652 -0.7363 0.3766 4.0045 Coefficients: Estimate Std. Error z value Pr(>|z|) municipio6 -12.22419 0.26220 -46.622 < 2e-16 *** municipio1 -10.07937 0.16096 -62.621 < 2e-16 *** municipio2 -10.15899 0.16450 -61.758 < 2e-16 *** municipio3 -11.46408 0.20718 -55.333 < 2e-16 *** municipio4 -11.12637 0.21401 -51.990 < 2e-16 *** municipio5 -12.18312 0.26636 -45.739 < 2e-16 *** ano0 0.06878 0.03153 2.181 0.02916 * municipio1:ano0 -0.09080 0.03802 -2.389 0.01692 * municipio2:ano0 -0.10842 0.03845 -2.820 0.00480 ** municipio3:ano0 -0.06582 0.04144 -1.588 0.11223 municipio4:ano0 -0.14021 0.04388 -3.196 0.00139 ** municipio5:ano0 0.01989 0.04447 0.447 0.65471 --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 (Dispersion parameter for Negative Binomial(1.2026) family taken to be 1) Null deviance: 24450.50 on 1008 degrees of freedom Residual deviance: 989.11 on 996 degrees of freedom AIC: 2863 Number of Fisher Scoring iterations: 1 Theta: 1.203 Std. Err.: 0.134 2 x log-likelihood: -2837.036
I am building a DL model where the test data can have some labels that differ from the training dataset. My semi-supervised model is pretty bad at predicting the test data, and I want to know: is there any DL paper about this kind of work? I'm also trying to use multiple datasets as a training set; they have common labels but also other labels, and they have common features but also other features. Is there any tip or guide for pre-processing this kind of data? Some deep learning paper, or some guides?
It is well known that if a random variable $X$ has distribution: $$ \mathrm{P}(X = x) = \begin{cases} \frac{1}{2}, & x=0,\\ \frac{1}{2}, & x=1,\\ 0, & \text{otherwise}, \end{cases} $$ (i.e., it is Bernoulli-distributed with probability of success $\tfrac{1}{2}$), it saturates Chebyshev's inequality for $k=1$: $$ \mathrm{P}(|X - \mathrm{E}[X]| \leq \sqrt{\mathrm{Var}[X]}) = \mathrm{P}(|X - \tfrac{1}{2}| \leq \tfrac{1}{2}) = 1. $$ Using Chebyshev's inequality, is it possible to show the following statement? If $X$ is a random variable with $0 \leq X \leq 1$, $\mathrm{E}[X] =\tfrac{1}{2}$, and $\mathrm{Var}[X] = \tfrac{1}{4}$, then $X$ is Bernoulli-distributed with probability of success $\tfrac{1}{2}$. Thanks!
I'm working with a list of performance metrics. Each row represents some # of observations of one leg of the course/route/etc. So each row/leg will have a best (time), worst, avg and stdev. Adding together the best, worst and average values to get the course/route level totals makes intuitive sense to me, but I'm not so sure about stdev. I don't have the individual data points, only the aggregated "per-leg" values. Can I use the values I do have to calculate a valid stdev for the overall course/route?
Suppose a string of characters (S) that is 325384 characters long contains 458 As and 22 Bs. What is the probability that, if the 458 As and 22 Bs were randomly positioned along the 325384-character string, there would be exactly 861 characters between at least one of the AB pairs? Whilst I am interested in the probability of this occurring, I would also like to see how to calculate it for any given value of A, B, S or number of characters between them.
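A rough Monte Carlo check I can run while waiting for an exact derivation, assuming the As and Bs occupy distinct positions and that "between" excludes the two endpoints:

import numpy as np

rng = np.random.default_rng(0)
S, nA, nB, gap = 325_384, 458, 22, 861

def has_gap(rng):
    pos = rng.choice(S, size=nA + nB, replace=False)  # random distinct positions
    a_pos, b_pos = pos[:nA], pos[nA:]
    # exactly `gap` characters between an A and a B means their positions differ by gap + 1
    return np.any(np.abs(a_pos[:, None] - b_pos[None, :]) == gap + 1)

n_sim = 20_000
print(sum(has_gap(rng) for _ in range(n_sim)) / n_sim)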
I would like some help with calculating the Fisher Information $I_o(\beta)$ and the expected information for a gamma distribution defined by \begin{align*} f_X(x) = \frac{\beta^\alpha x^{\alpha - 1}e^{-\beta x}}{\Gamma(\alpha)} \; x > 0, \alpha >0, \beta > 0 \end{align*} Where $\alpha$ is a known value and $\beta$ is the parameter of interest. Attempt I have attempted to calculate a likelihood function as follows: \begin{align} L(\beta | X_i) &= \prod_{i = 1}^{n}f(x_i | \alpha, \beta) \\ &= \prod_{i = 1}^{n}\left( \frac{\beta^{\alpha}}{\Gamma(\alpha)}x_{i}^{\alpha - 1}\mathrm{exp}{\{-\beta x\}}\right) \\ &= \left(\frac{\beta^{\alpha}}{\Gamma(\alpha)}\right)^{n}\prod_{i = 1}^{n} \left(x_{i}^{\alpha-1}\right) \mathrm{exp}\{-\beta\sum_{i = 1}^{n} x_i\} \\ L(\beta | X_i) &= \left(\frac{\beta^{\alpha}}{\Gamma(\alpha)}\right)^{n}\left(\prod_{i = 1}^{n} x_{i}\right)^{\alpha-1} \mathrm{exp}\{-\beta\sum_{i = 1}^{n} x_i\} \end{align} Thus the log-likelihood would be the following: \begin{align} \ell(\beta | x_i) &= \ln\left((L(\beta | X_i)\right)) \\ &= n\alpha\ln(\beta) - n\ln(\Gamma(\alpha)) + (na-n)\ln(x_i) - \beta \sum_{i = 1}^{n} x_i \\ \end{align} I understand that the information is found by taking the 2nd derivative of any of the likelihood functions where $I_o(\beta) = -\frac{\mathrm{d}^{2}{\ell}}{\mathrm{d}{\beta}^{2}} $ The derivatives calculated were as follows: \begin{align} &= n\alpha\ln(\beta) - n\ln(\Gamma(\alpha)) + (na-n)\ln(x_i) - \beta \sum_{i = 1}^{n} x_i \\ &= \frac{n\alpha}{\beta} - \beta \\ &= - \frac{n \alpha}{\beta^2} - 1 \end{align} Thus the information would be \begin{align} I_0(\beta) = \frac{n \alpha}{\beta^2} + 1 \end{align} I am unsure what to do once I get to the expectation. \begin{align} \mathbb{E}\{I_0(\beta)\} &= \mathbb{E}\left(\frac{n \alpha}{\beta^2} + 1\right) \end{align} Would I have made a mistake within the derivation process? Any insight would be very much appreciated.
I recently saw some papers about stereo matching: End-to-End Learning of Geometry and Context for Deep Stereo Regression [1] https://openaccess.thecvf.com/content_ICCV_2017/papers/Kendall_End-To-End_Learning_of_ICCV_2017_paper.pdf and End-to-End Learning for Omnidirectional Stereo Matching with Uncertainty Prior [2] (which isn't free to read, but its DL model is almost the same as in the GC-Net paper, except that it is for a wide-angle multi-camera setup). These papers are about stereo matching in the computer vision area. The models in the papers are composed of unary feature extraction + a cost volume (put simply, you can think of it as concatenating the features of 2 or more inputs) + 3D convolution. What I don't fully understand is the unary feature extraction part: the models have some 2D convolution layers with residual connections, and then after this there's a 2D convolution layer with no activation and no batch normalization before the cost volume part. GC-Net [1]'s unary feature extraction works like: (1) 5x5 conv, 32 features, stride 2 -> (2) 3x3 conv, 32 features -> (3) 3x3 conv, 32 features (4) (1)-(3) residual connection (5) (2), (3) repeated a few times with residual connections (6) 3x3 conv, 32 features (no ReLU or BN). What does this (6) convolution layer without activation and batch normalization contribute to the model? The 'End-to-End Learning for Omnidirectional Stereo Matching with Uncertainty Prior' paper says that the last layer without the ReLU in the feature extraction part allows the network to distinguish between negative features and invisible areas, where the network is set to zero by the ReLU and warping operations, respectively. But I can't understand how the layer without an activation function and normalization contributes to these things. The 3D convolution part of GC-Net [1] also has no ReLU or BN. The paper says: The network can also pre-scale the matching costs to control the peakiness (sometimes called temperature) of the normalized post-softmax probabilities (Figure 2). We explicitly omit batch normalization from the final convolution layer in the unary tower to allow the network to learn this from the data. Which means the 3D conv layer without BN helps the 'softargmin' (or softargmax) function not to have a multi-modal distribution. This is the thing I don't understand: how does the conv layer without activation and normalization have such an ability?
As far as I know, covariance-based SEM requires the normality assumption. To avoid violating this assumption, I am considering using bootstrap bias-corrected confidence intervals for hypothesis testing when performing CB-SEM in AMOS. However, I have not found many studies using these two methods combined. I think the main reason is that the authors tend to perform CFA before using SEM and all the indicators in their studies do not violate the threshold values, but this is my first research paper and I do not have enough of a foundation in Statistics, so I am not sure. Is it possible to use bootstrap bias-corrected confidence intervals when performing a CB-SEM model for hypothesis testing? If yes, is there anything I should do before combining CB-SEM with bootstrapping? To provide more context: my CFA indicators are fine.
I'm looking for appropriate types of analysis for a data set that contains counts of different crab species across 4 sites with 3 replicates per site (12 in total) over a time period of 1.5 years - 5 sampling time points. The peculiarity of the data set is that each crab species only occurs on specific coral hosts. I'm thus following their hosts throughout time. A brief example: Colony 1 has a single crab at t1, two crabs at t2, and no crabs at t3, whereas Colony 2 has three crabs t1, t2 and t3. This would indicate that the crab abundance in colony 1 fluctuates much more. Currently, I have about 50 Colonies per site, so 200 in total. I'm interested to see how crab abundance on these colonies changes over time depending on the sites and also which impact the coral host species has. For instance: Does the abundance of crabs on coral species "Pavona" fluctuate more than abundance of crabs on coral species "Pocillopora"? In summary, I would like to analyse this data regarding general changes in community composition of crabs over time and across sites but, more specifically, also to follow each colony through time and then compare this data to see whether certain coral species are prone to be colonized/abandoned more frequently than others.
I'm training a BERT sequence classifier on a custom dataset. When the training starts, the loss drops to around ~0.4 within a few steps. I print the absolute sum of gradients for each layer/item in the model and the values are high. The model converges initially, but when left to train for a few hours (and sometimes even earlier) it gets stuck. I am calculating gradients with the code below. Also the logs are at https://pastecode.io/s/v2s3mr3e (initial convergence); I'm printing gradients, loss, metrics and logits.

for name, param in model.named_parameters():
    print(name, param.grad.abs().sum())

While the model is stuck, the loss value is around ~0.69 with worse performance metrics (precision/recall) on the training set, but the gradients are very small compared to the initial training phase. Also it seems that the predictions are swinging, with most of the values predicted as either 0 or 1. Following are the logs for the stuck phase - https://pastecode.io/s/cjuxog44 Training code link - Training Code It seems that the model is stuck at a local minimum where the gradient values are relatively smaller even though the loss is high. How can I mitigate it? One option I see is using a higher learning rate or a cyclic learning rate, but I'm not sure if that's the right approach since the learning rate is 5e-5 with the LR scheduler disabled. Below is the plot of the loss, and of the BERT pooler and classifier gradient sums, over steps. Also the data is 50-50 balanced. Batch size is 32. I'm using AdamW. I have also tried SGD but the convergence is very slow. Or there might be some error/reason which I am not able to identify. Please help.
I am working on a classic problem in which the posterior probability distribution of a proportion must be obtained. This parameter is assumed to follow a beta distribution, therefore the number of occurrences is modelled by a beta-binomial distribution. I am using RJAGS to compute the posterior distribution and have some questions about the syntax I have to use. The data I have is characterized by having a number of occurrences (Y) and a sampling size or number of observations (M) for each row. The distribution of the observed failure rate (Y/M) is depicted in the following plot: As can be seen, it is a left-skewed distribution with some observations to the right side showing larger proportion values. The quantiles for this parameter are the following: The code of the first model is given by:

jags_model_syntax <- "model {
  # Likelihood
  for (i in 1:length(Y)) {
    Y[i] ~ dbin(p, N[i])
    N[i] <- M[i] # Population size
  }
  p ~ dbeta(alpha, beta)
  alpha ~ dnorm(0.77, 0.1)
  beta ~ dnorm(76, 15)
}"

After compiling the model and computing the posterior, the results are the following. The probability distribution of p (the proportion) is a beta distribution and it is significantly different from the distribution of the observed proportion. I have the following questions: Should I give one probability to each lot, which would mean fitting a hierarchical model? This would change the model by adding a p[i] at each iteration. Why can the beta distribution not fit the original distribution?
I have frequency predictions for a discrete distribution: $p(x_1)=0$, $p(x_2)=0$, $p(x_3)=0.05$, $p(x_4)=0.95$. I need to smooth the distribution so I don't have zero values. I think the solution is to use the Laplace additive smoothing function: $(x_i + \alpha) / (N + \alpha*d)$ But I am unsure what values to use for $N$ and $d$. I guess $d$ is the domain size? This is supposed to be basic, but I can't seem to find a clear answer. $N$ is supposed to be the number of samples, but I don't have any samples, just this distribution.
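For illustration, this is what I have in mind, treating the predicted frequencies themselves as pseudo-counts that sum to N = 1 (that choice of N is exactly the part I am unsure about) and taking d = 4 categories:

alpha = 0.01
d = 4       # number of categories in the domain
N = 1.0     # pseudo "sample size": the frequencies themselves sum to 1
p = [0.0, 0.0, 0.05, 0.95]

smoothed = [(p_i + alpha) / (N + alpha * d) for p_i in p]
print(smoothed, sum(smoothed))  # no zeros, and the values still sum to 1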
We want to combine two truncated distributions to better model one phenomenon. For example, we have a Gaussian distribution, but we want to modify the right-hand tail to make it heavier. So we want to put a Pareto distribution there. Now the problem is that we want to make sure that there is continuity at the density level and the total probability should be 1. And it seems that it is impossible to satisfy the two conditions. I would like to know what are some other possibilities to create such a distribution. I use R to create an example (rPareto() here is assumed to come from the Pareto package):

library(Pareto)  # assumed source of rPareto()

nb_total_sim = 100000
proba_1 = pnorm(4, 1, 6)
nb_sim_1 = trunc(nb_total_sim * proba_1)
nb_sim_2 = nb_total_sim - nb_sim_1
sim_1 = qnorm(runif(nb_sim_1, 0, proba_1), 1, 6)
sim_2 = rPareto(n = nb_sim_2, t = 4, alpha = 1.99, truncation = 100)
hist(c(sim_1, sim_2), nclass = 200)

The histogram is here and you can see that the "density" is not continuous.
I have the following stochastic differential equation $dX_t=\kappa\left [ \theta-X_t\right ]dt + \Sigma d W_{t}$ I derived a formula for $X_t$ which is in the following form $X_{t}=\theta+e^{-\kappa t}\left ( X_0-\theta \right )+\Sigma e^{-\kappa t}\int_{0}^{t}e^{\kappa s}dW_{s} \qquad \forall t \in\left [ 0,T \right ]$ Now I want to derive the covariance. The formula for the covariance is in the following form $Cov(X_t,X_r)=E\left [ \left ( X_t- E\left [ X_t \right ] \right )\left ( X_r -E\left [ X_r \right ]\right ) \right ] $ $=E\left [ \left ( \Sigma e^{-\kappa t}\int_{0}^{t}e^{\kappa s}dW_{s}\right ) \left ( \Sigma e^{-\kappa r}\int_{0}^{r}e^{\kappa u}dW_{u} \right )\right ] $ $=\Sigma e^{-\kappa (t+r)}\Sigma ^{\top} E\left [ \left ( \int_{0}^{t}e^{\kappa s}dW_{s} \right ) \left ( \int_{0}^{r}e^{\kappa u}dW_{u} \right )\right ]$ And using the Itô isometry it follows that $Cov(X_t,X_r)=\Sigma e^{-\kappa (t+r)}\Sigma ^{\top} \int_{0}^{t}e^{\kappa s}ds$ But the result in the book is in the following form $Cov(X_t,X_r)=\int_{0}^{t}e^{-\kappa s} \Sigma \Sigma ^{\top} {e^{-\kappa s}}^{\top} ds$ Can someone please tell me where I made a mistake?
With this very simple data: > A [1] "a" "a" "a" "a" "b" "b" "b" "b" "b" "b" "b" "b" > B [1] "x" "y" "x" "y" "x" "y" "x" "y" "x" "x" "x" "x" > C [1] "l" "l" "m" "m" "l" "l" "m" "m" "l" "l" "l" "l" > response [1] 14 30 15 35 50 51 30 32 51 55 53 55 I try to reproduce the Type-3 car::Anova by using step-by-step term elimination to better understand interactions and analysis of variance. For example I want to assess the term "C" > options(contrasts = c("contr.sum", "contr.poly")) > m1 <- lm(response ~ A*B*C) # the full model > car::Anova(m1, type=3, test.statistic = "LR") Anova Table (Type III tests) Response: response Sum Sq Df F value Pr(>F) (Intercept) 9374 1 1802.78 1.8e-06 *** A 716 1 137.69 0.0003 *** B 182 1 35.00 0.0041 ** C 178 1 34.23 0.0043 ** A:B 178 1 34.23 0.0043 ** A:C 317 1 61.03 0.0014 ** B:C 8 1 1.63 0.2714 A:B:C 0 1 0.00 0.9755 Residuals 21 4 --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 For term "C" I got p-value = 0.0043 Now I going to assess term C by elimination: > m2 <- lm(response ~ A + B + A:B) # the term "C", eliminated all terms with C > anova(m1, m2) Analysis of Variance Table Model 1: response ~ A * B * C Model 2: response ~ A + B + A:B Res.Df RSS Df Sum of Sq F Pr(>F) 1 4 21 2 8 648 -4 -627 30.1 0.003 ** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 Here p-value is = 0.003. Close but not the same. This is the simplest general linear model, no mixed effects, no transformations. When I typed "test = LRT" > anova(m1, m2, test="LRT") Analysis of Variance Table Model 1: response ~ A * B * C Model 2: response ~ A + B + A:B Res.Df RSS Df Sum of Sq Pr(>Chi) 1 4 21 2 8 648 -4 -627 <2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 The result is totally different. Please note, my goal IS NOT to make any formal inference, only to UNDERSTAND the calculations behind. I understand, that elimination terms from model should be equivalent to the car::Anova() and give equal result numerically. Let's confirm: > drop1(m1, scope = ~A*B*C, test="F") Single term deletions Model: response ~ A * B * C Df Sum of Sq RSS AIC F value Pr(>F) <none> 21 22.6 A 1 716 737 63.4 137.69 0.0003 *** B 1 182 203 47.9 35.00 0.0041 ** C 1 178 199 47.7 34.23 0.0043 ** A:B 1 178 199 47.7 34.23 0.0043 ** A:C 1 317 338 54.1 61.03 0.0014 ** B:C 1 8 29 24.7 1.63 0.2714 A:B:C 1 0 21 20.6 0.00 0.9755 --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 It works! It agrees with car::Anova(). So how should I eliminate the terms to obtain the same result using anova()? Evidently comparing A+B+A:B vs. the full model is not enough!
This Wikipedia article describes spam filtering using Naïve Bayes: https://en.wikipedia.org/wiki/Naive_Bayes_spam_filtering It says P(S|W) is given as Pr(W|S)*Pr(S) / (Pr(W|S)*Pr(S) + Pr(W|H)*Pr(H)). However, one could also get P(S|W) by estimating P(W) instead. Most textbooks simply say it's unnecessary to estimate P(W), which I get, but one could also say it's unnecessary to estimate Pr(W|H)*Pr(H). Why is it that estimating Pr(W|H)*Pr(H) is preferred? Example: If we use this example, the "correct" estimation for P(yes|rain, good) is 0.143 because

P(rain|yes) * P(good|yes) * P(yes) = 1/5 * 1/5 * 5/10 = 0.02
P(rain|no) * P(good|no) * P(no) = 2/5 * 3/5 * 5/10 = 0.12
P(yes|rain, good) = 0.02 / (0.02 + 0.12) = 0.143

Whereas one could also estimate it as follows, which gives 0.042 instead:

P(rain|yes) * P(good|yes) * P(yes) = 1/5 * 1/5 * 5/10 = 0.02
P(rain, good) = P(rain) * P(good) = 3/5 * 4/5 = 0.48
P(yes|rain, good) = 0.02 / 0.48 = 0.042

My question is, why is the former preferred, even though they seem to make similar approximations?
From What's the skewed-t distribution? there seems to be multiple way of defining skew distributions. However I am not sure if these methods are equivalent The original questions show methods from C. Fernandez and M. Steel (1998) P. Theodossiou (1998) - which is the basis of this wikipedia page, and this R package A. Azzalini (1985) In scipy.stats, the package implement scipy.stats.skewnorm, and scipy.stats.skewcauchy however, it is based on Azzalini, and Theodossiou method respectively which define skewed parameter differently So I am wondering if these method of defining skew distribution equivalent? If not, what is the generally most accepted way of defining skew distribution? Update 1 At least I don't think they are. I tried using method 2, and 3 then minimize the equivalent coefficient, and plug it back in. The plot suggest they are different Reproduction code import numpy as np from scipy import stats from tqdm.auto import tqdm import matplotlib.pyplot as plt from scipy.optimize import minimize def logp_skew_cauchy_v1(xs, mu, sigma, alpha): # A. Azzalini (1985) return ( np.log(2) + stats.cauchy.logpdf((xs - mu)/sigma, loc=0, scale=1) + stats.cauchy.logcdf(alpha*(xs - mu)/sigma, loc=0, scale=1) - np.log(sigma) ) def logp_skew_cauchy_v2(xs, mu, sigma, lam): # P. Theodossiou (1998) return stats.skewcauchy.logpdf((xs - mu)/sigma, a=lam) - np.log(sigma) xrange = np.linspace(-50, 50, 1001) mu, sigma = 0, 1 eps = 0.01 lams = np.linspace(-1 + eps, 1 - eps, 1000) matched_alphas = np.zeros_like(lams) result_func_vals = np.zeros_like(lams) for idx, lam in tqdm(enumerate(lams), total=len(lams)): target = logp_skew_cauchy_v2(xrange, mu, sigma, lam) result = minimize(lambda alpha: np.nanmean(np.abs(logp_skew_cauchy_v1(xrange, mu, sigma, alpha) - target)), 0, method = 'Nelder-Mead') if result.success: matched_alphas[idx] = result.x[0] result_func_vals[idx] = result.fun else: matched_alphas[idx] = np.nan result_func_vals[idx] = np.nan idx = 700 plt.plot(xrange, logp_skew_cauchy_v2(xrange, 0, 1, lams[idx]), label=f"P. Theodossiou (1998) - a = {lams[idx]:.3f}") plt.plot(xrange, logp_skew_cauchy_v1(xrange, 0, 1, matched_alphas[idx]), label=f"A. Azzalini (1985) - alpha = {matched_alphas[idx]:.3f}") plt.title("Log distribution plot") plt.legend(loc="best") plt.show()
Suppose that you have a null hypothesis $H_0$ that you want to test. Let $\alpha$ be a given significance level, for example $\alpha = 0.05$. Suppose that the test statistic $T$ follows (under $H_0$) a $\chi^2$ distribution, say $T \sim \chi^2 (4)$. In my understanding, the critical region of a distribution means a set of 'rare' values of $T$ in the sense that $$ \mathbb{P}\left[T \in \text{critical region}\right]=\alpha. $$ How should one choose the critical region? I guess that, in the case of this $\chi^2 (4)$ distribution, one typically chooses a critical value $t_0>0$ such that $$\mathbb{P}[T\ge t_0] = \alpha, $$ and thus the critical region would be $[t_0, +\infty)$. However, when you look at the distribution, the values of $T$ very close to zero are rare as well; the pdf is continuous and has value 0 at 0. Thus, it would be tempting to determine two critical values $t_0$ and $t_1$ such that $$ \mathbb{P} [T \le t_0 \text{ or } T \ge t_1 ] = \alpha,$$ and choose the critical region to be $[0, t_0] \cup [t_1, +\infty)$. Is there a rule of thumb, or is this just a matter of taste?
I am conducting various active learning experiments on two Biomedical Relation Extraction corpora (the 2018 n2c2 challenge, with 41000 test samples, and the DDI Extraction corpus, with 5700 test samples) and using four different machine learning methods: Random Forest, a BiLSTM-based model, Clinical BERT, and Clinical BERT with an extended input. Initially, I evaluated the performance of all 4 methods on both corpora using all the available data (passive learning setting). Then, I conducted additional experiments using 3 different active learning query strategies (random sampling, least confidence, and BatchBALD) on both corpora, using up to 50% of the data. All experiments were repeated 5 times with different random seeds. The specific active learning process followed in the experiments is as follows: randomly select 2.5% of the total dataset to create the labeled dataset, while the remaining data forms the unlabeled pool; in each active learning step, query 2.5% of the total data, retrain the model from scratch, and test the newly trained model on the test set, measuring precision, recall, and F1-score; stop the process when 50% of the entire dataset has been annotated, i.e. after 19 iterations. Is there a statistical test suitable for this experimental setup (C=2 corpora, M=4 methods, S=2 query strategies + random baseline, 5 repetitions of each experiment) that allows me to determine if one of the query strategies has performed significantly better?
Def. Let $ \theta \in \mathbb{R}^{\mathbb{Z}}$ be a sequence of real numbers such that $ \sum_{j \in \mathbb{Z}} | \theta_j | < \infty$ and $\{W_t\}$ be a white noise with variance $\gamma$. Then a time series $\{X_t\}$ is a linear process if it can be represented as $$ X_t = \sum_{j\in \mathbb{Z}} \theta_j W_{t-j}.$$ Using the criterion for mean-square convergence, if $S = \sum_{j \in \mathbb{Z}} | \theta_j |$, then for any $ t \in \mathbb{Z}$, $$ E(X_t^2) = E\bigg(\Big(\sum_{j\in \mathbb{Z}} \theta_j W_{t-j}\Big)^2\bigg) \leq \bigg(\sum_{j \in \mathbb{Z}} | \theta_j | \sqrt{E(W_{t-j}^2)}\bigg)^2 \stackrel{?}{=} S^2 \gamma.$$ I'm failing to see why the last equality holds. Should not $ \sum_{j \in \mathbb{Z}} E(W_{t-j}^2) = \infty$ anyway, if $\gamma \neq 0$?
I want to build a model using a dataset. Then, I edit the dataset by changing the class attribute (let's say I will have a new version of the dataset). After that, I want to apply the same model to the new version of the dataset. Is this procedure correct? Because I did it but the performance of the model improved significantly. I will explain my procedure in detailed steps: I have an imbalanced binary classification dataset (let's call it: a raw dataset). I balanced the raw dataset using SMOTE technique. I built a model and its performance was: accuracy (86.4626), precision (84.7), recall (86.5), F1-measure (84.5), and AUC-ROC (80.9). I changed the class attribute of the raw dataset (let's say I had a new imbalanced dataset). I balanced the new dataset using SMOTE technique. I applied the same model, mentioned in step 3, to the new dataset, and its performance was: accuracy (96.9388), precision (97.6), recall (96.9), F1-measure (97.1), and AUC-ROC (99). I'm afraid there is something wrong with what I did causing overfitting or something.
Let $X_i$ be iid random sample from $exp(\lambda)$ with $f(x;\lambda)=\lambda e^{-\lambda x}$ for $x>0$ and $\lambda>0$. Find the $\alpha$-level uniformly most powerful test for $H_0: \lambda\le \lambda_0$ v.s. $H_1: \lambda> \lambda_0$. I try to use the Karlin-Rubin theorem as follows. Under $H_0$, we take $$T=2\lambda \sum_i X_i\sim Gamma(n ,2)=\chi^2(2n)$$. So I take the test function of UMPT: \begin{equation} \Phi(x)= \begin{cases} 1 & \text{if } T>\chi^2_{1-\alpha}(2n)\\ 0 & \text{if } T\le \chi^2_{1-\alpha}(2n) \end{cases} \end{equation} But the solution used \begin{equation} \Phi(x)= \begin{cases} 1 & \text{if } T\le \chi^2_{1-\alpha}(2n)\\ 0 & \text{if } T> \chi^2_{1-\alpha}(2n) \end{cases} \end{equation} I am confused about why here we choose $T<k$ as the rejection region.
Suppose that we want to know how the price of a house changes per square meter of the area of the house. Further suppose that I have a dataset like the following:

Area | Price
100  | 200k
120  | 230k
...

That is, only the area and the price of a set of houses. Given this setup I can think of two different ways: Fit a linear model (using linear regression) and look at the coefficient of Area. For all pairs of houses $i$ and $j$, find $\frac{price_i - price_j}{area_i - area_j}$, then take the average. My question: Are these solutions different? If yes, in what ways are they different? Or what are the pros and cons?
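To make the question concrete, here is a small synthetic comparison of the two approaches (the data are made up, with a true slope of 2000 per square meter):

import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
area = rng.uniform(50, 200, size=100)
price = 2000 * area + rng.normal(0, 20_000, size=100)  # synthetic data

# 1) slope from a fitted linear model
slope_ols = np.polyfit(area, price, deg=1)[0]

# 2) average of all pairwise slopes
pairwise = [(price[i] - price[j]) / (area[i] - area[j])
            for i, j in combinations(range(len(area)), 2) if area[i] != area[j]]
slope_pairs = np.mean(pairwise)

print(slope_ols, slope_pairs)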
The screenshot below is from a paper that I am reading, and the author says it is a non-parametric regression. The explanation below just seems like normal OLS with some covariates, fixed effects, etc. What exactly is a non-parametric regression, and how do we see it from the equation below? When do we use it? The only noticeable difference from standard OLS seems to be the L function, which I don't understand. Also, when running a non-parametric regression, is the function in R different from the normal lm function?
Assume that you have performed a PCA and you have your eigenvectors inside the projection matrix $W$. If you project your data $X$ with $W$, then you get the desired projected dimension. But PCA and ICA are opposites. If we look at this picture, PCA is projecting onto the directions that capture the most variance (the principal directions), while ICA is projecting onto the directions that are the most statistically independent. My question is simple: Is it possible to turn PCA into ICA by rotating the eigenvectors through some angles? If yes, how?
Scenario: I have data comparing the number of tree stems in 30 forest plots between two sampling years (1992 and 2012). Each plot received hurricane damage between these 2 sampling years -- this damage was coded as being 0-100% of trees felled/damaged. I ran a linear regression using lm() in R including a centered year term, hurricane damage, and an interaction term between them. I get the following output:

Call:
lm(formula = Count.Ha ~ I(Year - 1992) * HurrDam, data = dataset, ])

Residuals:
    Min      1Q  Median      3Q     Max
-368.84  -69.79  -23.01   81.30  413.28

Coefficients:
                        Estimate Std. Error t value Pr(>|t|)
(Intercept)             147.3300    50.7297   2.904  0.00529 **
I(Year - 1992)          -17.2595     3.4007  -5.075 4.73e-06 ***
HurrDam                  -1.4680     1.6764  -0.876  0.38503
I(Year - 1992):HurrDam    0.7634     0.1128   6.766 9.11e-09 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 138.1 on 55 degrees of freedom
Multiple R-squared: 0.5886, Adjusted R-squared: 0.5662
F-statistic: 26.23 on 3 and 55 DF, p-value: 1.15e-10

As you can see, Year is significant as is the interaction term, but HurrDam is not. How do I interpret this? I've seen a number of posts discussing interpretation when discrete variables or even continuous non-bounded variables are involved, but I'm not sure how my inclusion of a time variable and a bounded percentage as a variable impacts the way one would interpret these results. Note: my ultimate hypothesis I'm trying to investigate is that the number of stems did not increase with time except in plots with greatest hurricane damage.
With all the concern about reproducibility, I have not seen a very basic question answered. Using the standard hypothesis testing approach, if one experiment results in p<0.05, what is the chance that a repeat experiment will also result in p<0.05? I've seen a related problem approached by Goodman (1) and others, starting with a particular p-value for the first experiment, but I have not seen it more generally as I stated the problem. So my question here is if the approach below has already been published somewhere. Let’s make pretty standard decisions that alpha = 0.05 and power = 0.80. We also need to define the scientific context of the experimentation. Let's say we are in a situation where you expect half the hypotheses tested to be true and half are not. In other words the probability of the null hypothesis is 0.50, which we'll call pNull. Let's compute the results of 1000 (arbitrary, of course) first experiments. Number of experiments where the null H is actually true = 1000 * pNull = 500. Number of these expected to result in p<alpha = 500 * alpha = 25 experiments. Number of experiments where the alternative H is actually true = (1 - pNull)*1000 = 500 Number of these expected to result in p<alpha = 500 * power = 400 Total experiments expected to result in p<alpha = 25 + 400 = 425 Now on to the second experiment. We only run the second experiment for cases where the first experiment resulted in p<alpha. Of the 25 experiments (where null is actually true), how many of the second experiments are expected to result in p<alpha? 25 * alpha = 1.25 Of the 400 experiments (where the alternative is true), how many of the repeat experiments are expected to result in p<alpha? 400 * power = 320 Number of second experiments expected to result in p<alpha = 1.25 + 320 = 321.25 Given that the first experiment resulted in p<alpha, the chance that a second identical experiment will also result in p<alpha = 321.25/425 = 0.756 This assumes you set alpha = 0.05 and power = 0.80, and the scientific situation is such that pNull = 0.50. I like to think things out verbally, but of course, this can all be compressed into equations. But my question is if this straightforward approach has already been published. Goodman, S. N., 1992, A comment on replication, P-values and evidence: Statistics in Medicine, v. 11, no. 7, p. 875–879, doi:10.1002/sim.4780110705.
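For what it's worth, the verbal computation above can be compressed into a few lines of code (a sketch of my own calculation, not a published result):

def p_repeat_significant(alpha=0.05, power=0.80, p_null=0.50):
    # expected fraction of first experiments reaching p < alpha
    first = p_null * alpha + (1 - p_null) * power
    # expected fraction whose repeat experiment also reaches p < alpha
    both = p_null * alpha**2 + (1 - p_null) * power**2
    return both / first

print(p_repeat_significant())             # ~0.756, matching the numbers above
print(p_repeat_significant(p_null=0.90))  # a more skeptical scientific context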
For GLMs in the exponential family, we can obtain the standard errors for the regression coefficients as a function of the diagonal of the fisher information matrix. Does this still hold if the regression distribution is not in the exponential family (this is of course technically not a GLM but I'm not sure if there is a technical name for this kind of model)? For example, beta-binomial or dirchlet-multinomial? In this case, does it instead become necessary to use the diagonal of the Hessian?
can you help me please? In this model, the interpretation of the continuous variable tmax for an example would be: a increase 1 unit of tmax (exp(coef)=1.06) increases in 6% the incidence of disease in a month. Considering that casos = monthly number of cases of the disease, and populacao variable being used as offset representing the population in each city (municipio). Is this interpretation correct? summary(m1<- glm.nb(casos ~ 0 + municipio + precip_ant + tmax + tmax_ant + umid + umid_ant + enxu_2 + offset(log(populacao)), data = dataset)) Call: glm.nb(formula = casos ~ 0 + municipio + precip_ant + tmax + tmax_ant + umid + umid_ant + enxu_2 + offset(log(populacao)), data = dataset, init.theta = 2.105944887, link = log) Deviance Residuals: Min 1Q Median 3Q Max -2.1763 -1.0107 -0.6286 0.3874 4.2286 Coefficients: Estimate Std. Error z value Pr(>|z|) municipio6 -2.566e+01 1.698e+00 -15.110 < 2e-16 *** municipio1 -2.406e+01 1.706e+00 -14.108 < 2e-16 *** municipio2 -2.424e+01 1.707e+00 -14.205 < 2e-16 *** municipio3 -2.530e+01 1.696e+00 -14.914 < 2e-16 *** municipio4 -2.525e+01 1.701e+00 -14.846 < 2e-16 *** municipio5 -2.524e+01 1.702e+00 -14.829 < 2e-16 *** precip_ant 1.750e-03 6.414e-04 2.728 0.006373 ** tmax 6.291e-02 1.922e-02 3.273 0.001066 ** tmax_ant 1.600e-01 1.995e-02 8.020 1.06e-15 *** umid 2.665e-02 1.230e-02 2.166 0.030297 * umid_ant 5.555e-02 1.454e-02 3.820 0.000134 *** enxu_2 3.154e-01 2.074e-01 1.521 0.128384 --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 (Dispersion parameter for Negative Binomial(2.1059) family taken to be 1) Null deviance: 41285.6 on 1008 degrees of freedom Residual deviance: 1002.5 on 996 degrees of freedom AIC: 2702.5 Number of Fisher Scoring iterations: 1 Theta: 2.106 Std. Err.: 0.308 2 x log-likelihood: -2676.499 > exp(coef(m1)) municipio6 municipio1 municipio2 municipio3 municipio4 municipio5 precip_ant tmax tmax_ant 7.171204e-12 3.553106e-11 2.963580e-11 1.033429e-11 1.082516e-11 1.089931e-11 1.001751e+00 1.064932e+00 1.173543e+00 umid umid_ant enxu_2 1.027009e+00 1.057120e+00 1.370790e+00
I was reading the following link (https://en.wikipedia.org/wiki/Scoring_algorithm) on the "Fisher Scoring Algorithm". As I understand, the Fisher Scoring Algorithm is similar to the Newton-Raphson Algorithm, but is used more to optimize Likelihood Functions of Statistical and Probabilistic Models. Here is my understanding of this algorithm: Suppose we have observations: $$y_1, y_2, \dots$$ And suppose these observations have a probability distribution function: $$f(y;\theta)$$ If we consider the "Score Function" as the first derivative of the log-likelihood function, we can take the First Order Taylor Expansion of the Score Function and write it as follows: $$V(\theta) \approx V(\theta_0) - J(\theta_0)(\theta - \theta_0)$$ Note that $J(\theta)$ is the negative Hessian of the log-likelihood function: $$J(\theta_0) \approx -\sum_{i=1}^n \left(\triangledown_{\theta}^2 \log \left(f(y_i, \theta)\right)\right)$$ We can then write the Fisher Scoring Algorithm as: $$\theta_{m+1} = \theta_m + J^{-1}(\theta_m)V(\theta_m)$$ In this article, the following two proofs are claimed about the Fisher Scoring Algorithm: Proof 1: As the number of iterations (i.e. "m") increases, the estimates from the Fisher Scoring Algorithm converge to the estimates that would have been obtained from Maximum Likelihood Estimation. As I understand, this is important for the following reason: Suppose you have some complicated Likelihood Function and have difficulty solving the resulting system of equations (e.g. multidimensional, non-linear, etc.) - then, the results of this proof would permit you to indirectly obtain estimates "close" to the estimates that you would have obtained via Maximum Likelihood Estimation (Note: Estimates obtained via MLE are "desirable" as these estimates have useful properties such as Unbiasedness, Consistency, Asymptotic Normality, etc.). In mathematical notation, this proof can be written like this: $$\lim_{m\rightarrow\infty} \theta_m = \hat{\theta}_{MLE}$$ Proof 2: To reduce the computational complexity of the Fisher Scoring Algorithm, we often replace $J(\theta)$ with the "Expected Value" of $J(\theta)$ - we call this $I(\theta)$: $$\theta_{m+1} = \theta_m + I^{-1}(\theta_m)V(\theta_m)$$ Given this information - the estimates produced from the Fisher Scoring Algorithm (after many iterations) are expected to have the same asymptotic distribution properties as the true estimates under Maximum Likelihood Estimation. As I understand, this result is important because it allows statistical inferences made using the results of the Fisher Scoring Algorithm to have similar properties to statistical inferences made using estimates from MLE. In mathematical notation, this proof can be written like this: $$\sqrt{n}(\theta_{m+1} - \theta_{MLE}) \stackrel{d}{\rightarrow} N(0, I^{-1}(\theta_{MLE}))$$ My Question: I am trying to understand why the ideas captured within Proof 1 and Proof 2 are true. When looking online, I found different references on these topics - but none of these references explicitly explained why these two proofs are true. Can someone please help me understand why these two proofs are true? Thanks!
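To make my understanding of the update rule concrete, here is a small sketch of Fisher scoring for a logistic regression (my own toy example, not taken from the article); for this particular model the expected and observed information coincide, so it doubles as Newton-Raphson:

import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
true_beta = np.array([0.5, -1.0, 2.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))

beta = np.zeros(k)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))         # fitted probabilities
    score = X.T @ (y - mu)                       # V(theta): gradient of the log-likelihood
    info = X.T @ (X * (mu * (1 - mu))[:, None])  # I(theta): expected information
    step = np.linalg.solve(info, score)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break

print(beta)  # should be close to true_beta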
Let $p$ be a positive integer and suppose that each observation in my data set is a length-$p$ multivariate normal vector, and I have $n$ (an integer) observations of the length-$p$ multivariate normal vector. So $$ \vec{Y} = \beta_0 + \beta_1 \vec{X}_{1} + \cdots + \beta_k \vec{X}_{k} + \vec{\epsilon}, $$ with $\vec{\epsilon} \sim N_p(\vec{0}, \Sigma) $, $\Sigma$ is a covariance matrix of an observation-vector, $\beta_i \in \mathbb{R}$ (for $i \in \{0,1,\cdots,k\}$) and $X_i \in \mathbb{R}^p$. I am in a situation where this model looks relevant to my problem, but I have never been taught how to generalize the usual regression model into one where each observation is itself a vector of size $p>1$. Is this called multivariate multiple regression? How can I find literature for it? If I look up multivariate-, or multidimensional linear regression I only get stuff on the multivariate linear regression model (the case where $p=1$).
I see it is often quoted that the omitted variable bias formula is $$ \text{Bias}\left(\widehat{\beta_1}\right) = \beta_2 \cdot \text{Corr}\left(X_2,X_1\right) $$ where $\widehat{\beta_1}$ is the estimated coefficient in the biased model, $\beta_2$ is the true coefficient of the omitted variable $X_2$ in the full model. I am wondering how this is derived generally. Thanks.
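For reference, the setup I usually see behind this kind of statement is the following sketch, where $\delta_1$ is the slope from regressing $X_2$ on $X_1$ (which matches the correlation only for standardized regressors): $$ y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + u, \qquad X_2 = \delta_0 + \delta_1 X_1 + v, $$ $$ \mathbb{E}\big[\widehat{\beta_1}\big] = \beta_1 + \beta_2 \delta_1, \qquad \delta_1 = \frac{\operatorname{Cov}(X_1, X_2)}{\operatorname{Var}(X_1)}, $$ so the bias term is $\beta_2 \delta_1$, which reduces to $\beta_2 \cdot \operatorname{Corr}(X_1, X_2)$ when $X_1$ and $X_2$ have equal variances.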
Scenario: I have data comparing the number of tree stems in 30 forest plots between two sampling years (1992 and 2012). Each plot experienced hurricane damage between these 2 sampling years (in 1996) -- this damage was coded as being 0-100% of trees felled/damaged. Interest: my ultimate hypothesis I'm trying to investigate is that the number of stems did not increase with time except in plots with greatest hurricane damage. (So, I'd like to know the effect of the hurricane on stem counts between plots while accounting for changes in time). Data: designed as follows: Plot Year HurrDam Count 1 1992 ??? 11 1 2012 30 115 2 1992 ??? 22 2 2012 60 381 .... I've placed question marks (???) in the above example because I'm not sure how to best enter (and therefore analyze) my data. Technically, all plots had 0% hurricane damage in 1992 because they occurred before the hurricane. So to provide a value here seems kind of artificial. One option is to replace the ??? with `0' for all data rows from 1992. Alternatively, my other thought was to treat the hurricane damage of any given plot as an unchanging characteristic of that plot overall -- i.e., regardless of year of sample. Under this scenario, the ??? would be replaced not with 0 but with the HurrDam value from 2012. So in my example data above, the ??? would be replaced with 30 and 60, respectively. The result would be that HurrDam would be identical for both samples of any given plot although such a value only really applies to the latter sampling period for each plot in real life. Which of these approaches is more appropriate for analyzing this data using linear regression (i.e., lm() in R)? I feel like making HurrDam = 0 for all 1992 data creates a strong temporally-structured trend between years (which I'm not interested in investigating when it comes to hurricane damage -- In fact, this is the whole point of including Year as its own variable: I want to tease the effects of hurricane damage and simple passage of time apart). I could make HurrDam = 0 for all 1992 samples, then eliminate Year as a variable from my model, and instead just rely on the differences in HurrDam between years to account for this change, but this is problematic because 1) it ignores the repeated-measures structure of my data and 2) I feel like it accentuates the differences in HurrDam between years when I'm really only interested in knowing the effects of differences in HurrDam between plots in the latter year (while, again, simply accounting for any changes due to the passage of time across plots). I also noticed that if I want to add an interaction term between Year and HurrDam in my ultimate linear model, the values for that interaction term becomes NA if I zero out the 1992 data. Any suggestions/insights would be appreciated!
After using inverse probability of treatment weighting (IPTW) on the variables of my dataset, there is still an imbalance in one covariate between the two groups. My outcome is binary (yes/no) and it is not a longitudinal study. One example is:

library(WeightIt)
library(cobalt)  # bal.tab() comes from cobalt

W.out <- weightit(treat ~ age + married + race, data = lalonde, estimand = "ATE", method = "ps")
bal.tab(W.out, threshold = 0.1)

Age is not balanced. How can I make all the variables balanced? Is it possible to "re-weight"? How? Is it possible to directly apply "entropy balancing" instead of IPTW in this case? Can somebody explain entropy balancing to me? I tried reading the original paper (here) but I didn't understand it very well. How is entropy balance computed? Can it always be used under the same conditions as IPTW, or are there particular conditions? If entropy balancing is able to adjust with standardized differences of almost 0, then why is it so little used in the medical field? I noticed that in some papers there is the cohort after the 1st weighting, then the 2nd weighting, etc. Can someone explain how you obtain this? How many weightings do you have to do? For instance, if I want to use this code:

W.out <- weightit(treat ~ age + married + race, data = lalonde, estimand = "ATE", method = "ebal")

What are the parameters that I have to set and pay attention to in order to know that I applied the method correctly? Is there a way to visualize the scores from which the weights were obtained, as in the case of IPTW (W.out$ps)?
I'm working on getting a read out of a Logistic regression classification model (setup in Python via Scikit-learn's LogisticRegression() wrapped in a OneVsRestClassifier()). I got the confusion matrix running pretty quick, and after a decent amount of effort I got the PR Curve (with a lot of help from https://stackoverflow.com/questions/29656550/how-to-plot-pr-curve-over-10-folds-of-cross-validation-in-scikit-learn) The Algorithm consists of doing KFoldStratified, balancing across the 6 classes present, and doing Leave-one-out cross validation. My test set has a single example of each label in the X and y that gets fed in. The Confusion Matrix is generated based on that, and then I use clf.predict(X_test) to generate y probabilities. I separate them into independent lists per label, then I use 'precision_recall_curve' to calculate precision and recall on the combined list per class, then the list containing everything. Below is the Confusion Matrix and PR Curves I've generated. I don't understand how class 2, for instance, has a seeming perfect classification on the confusion matrix while having a near 0.5 AUC. I'm definitely only using the testing data to calculate both. Any ideas?
I'm comparing two multidimensional MDS solutions; the solutions have the same number of dimensions. I don't think I can use the permutation version of Procrustes analysis (commonly PROTEST in R::vegan) because I doubt my two sets are exchangeable. I read that the sets must have similar covariance matrices to be exchangeable, which they don't. I can also not motivate exchangeability from a design perspective, as it is not experimental; rather, it is a study comparing human and model-based assessment in lower dimensional space. My endeavor turned towards bootstrapping as it has fewer restrictions; however, now I doubt I can use that, as it seems that sampling individual rows multiple times generates an incredible bias (the Procrustes correlation estimate from the original datasets is just half of the estimate from the bootstrap). Any ideas? I'm wondering if I have to take another path entirely or if it is possible to run permuted Procrustes and I'm just not knowledgeable enough. BR, Eric
Good afternoon, I am trying to convert annual reported rates to daily probabilities. However, I have annual rates that only occur during certain months of the year. For example, I have an annual mortality rate of 0.72 and this rate needs to be converted to a daily probability of mortality across only 4 months (120 days). Is this equation simply: 1 - [(1-Annual Rate)^(1/120)] ? And/or the equivalent: 1-exp((1/365)log(1-Annual Rate)) ? The '120' would normally be 365 to convert an annual rate to a daily probability. Can I simply exchange '365' for the relevant timespan (i.e., number of days) for each rate? Thank you in advance.
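A quick numeric sanity check of the formula I am proposing, assuming the event can only occur during the 120-day window:

import math

annual_rate = 0.72
days = 120  # the 4-month window over which mortality can occur

daily_p = 1 - (1 - annual_rate) ** (1 / days)
daily_p_alt = 1 - math.exp(math.log(1 - annual_rate) / days)  # equivalent form

# applying the daily probability over the 120-day window recovers the annual rate
print(daily_p, daily_p_alt, 1 - (1 - daily_p) ** days)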
I am looking for the right approach to do a sample size calculation for the positive predictive value in a cross-sectional study. From external sources, I know the prevalence in the population of interest. I also have an estimate of the sensitivity and specificity of the test. My question is now: How many subjects do I have to include in the study in order to show that my positive predictive value is above some value, with a confidence level of 95% and a power of 80%? The literature I found so far is on the more complex case of a case-controlled study, such as https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3668447/. That paper seems to refer to Pepe, The Statistical Evaluation of Medical Tests for Classification and Prediction (2003) for my case, but I do not have access to it. I'd be grateful if anyone could share the applicable formulas for this sample size calculation!
I am conducting a regression analysis on TNF-a in relation to a genetic marker. I performed a post-hoc power calculation using software called Quanto. The mean (SD) of my outcome variable is 34 (14). The Minor Allele Frequency of the genetic marker is 0.102. The Beta coefficient of the predictor from the regression model is 2.852 and my sample size is 799. The power calculation result shows that the minimum detectable effect size (Beta) for a sample size of 781 is 3.271, while the observed effect size (Beta) from the regression model is 2.852. What can I conclude from these results? If the observed effect size is lower than the minimum detectable effect size, does it mean that the study is underpowered?
Suppose I have a conditional (or any) distribution as so: $$ p(A \mid B,C,D) $$ and in the text, I want to refer to the variables $\{A,B,C,D\}$ associated with that density (or mass). Is there a formal name or notation for this set? What is the formal way of referring to it? Is there a commonly used symbol? It is not the support of the distribution as far as I understand the definition of support, so what would it be called? Thanks