Wooldridge, Introductory Econometrics: A Modern Approach (2018), pages 561 and 572, gives the following definitions: Latent variable model (LVM): $$ y^*=\beta_0+\mathbf{x} \boldsymbol{\beta}+e, \quad y=1\left[y^*>0\right] $$ where the indicator function takes the value 1 if the event in the brackets is true and 0 otherwise. So $y$ is 1 if $y^* > 0$. Tobit: $$ y^*=\beta_0+\mathbf{x} \boldsymbol{\beta}+u, \quad u \mid \mathbf{x} \sim \operatorname{Normal}\left(0, \sigma^2\right), \quad y=\max \left(0, y^*\right) $$ where $y=\max \left(0, y^*\right)$ implies that the observed variable $y$ equals $y^*$ when $y^* \geq 0$, and 0 otherwise. Questions: It seems to me that one difference is that the observed $y$ in the LVM takes the value 0 or 1, while in the Tobit model the observed $y$ takes the value 0 or any positive value. Do I have that right? Are there any other key takeaways w.r.t. the differences between the models?
Suppose that $X_1, X_2, X_3$ are iid random variables. I have seen this fact many times: $$\mathbb{P}(X_1<X_2<X_3)=\frac{1}{6}$$ but I want to know why every permutation of $X_1, X_2, X_3$ is equally likely, whether this is true for both the discrete and the continuous case, and whether this fact can be generalized to all $n \in \mathbb{N}$, $n \geq 2$. If yes, how does one prove it?
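A quick simulation sketch (in R, for the continuous case with n = 3 iid Exp(1) draws): each of the 3! strict orderings should come out near 1/6. Note that for discrete distributions ties have positive probability, so the strict orderings share less than the full probability mass.

set.seed(1)
X <- matrix(rexp(3 * 1e5), ncol = 3)           # 100,000 iid triples
mean(X[, 1] < X[, 2] & X[, 2] < X[, 3])        # close to 1/6 ~ 0.1667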
I have a model with one predictor, one mediator and one outcome. The following are the coefficients I got for my mediation analysis, but I can't make sense of them. Could someone please help explain what must be going on and how I can report these results? The indirect effect is significant (b = 0.041, CI [0.0103, 0.0781]). The total effect is non-significant (b = 0.016, t = 0.323, p = 0.747). The direct effect is non-significant with a flipped sign for the coefficient (b = -0.03, p = 0.619). Is it valid to conduct a mediation analysis in this scenario? How do I report my results? P.S.: I ran my analysis with Hayes' PROCESS macro. P.P.S.: I would really appreciate a quick answer because I'm on a bit of a time crunch.
Is there a risk of overfitting when hyperparameter tuning a model using Optuna (or another hyperparameter tuning method ), with evaluation on a validation set and a large number of trials? While a smaller number of trials may not find the best combination of parameters, could increasing the number of trials lead to the model being overfitted to the validation set? In both cases, the final model is evaluated on a test set.
Consider $S_n = \sum_{i = 1}^n b_{i,n} X_{i,n}$ where the $X_{i,n}$ are random variables that are neither independent nor identically distributed and the $b_{i,n}$ are weights satisfying the Lindeberg condition. I managed to prove that if $E[|X_{i,n}|^s]< \infty$ for all integers $s$, then $S_n$ converges to a Gaussian random variable $G$. Now I need to relax the assumption that all moments exist. I read that this could be done with a sum of truncated random variables $Y_n$, by proving that $$ \text{if } S_n \rightsquigarrow G \text{ and } d\left(S_n, Y_n\right) \stackrel{\mathrm{P}}{\rightarrow} 0, \text{ then } Y_n \rightsquigarrow G. $$ However, I have no idea how to formulate the $Y_n$ to make this argument work. Has anyone ever heard of this technique?
I have several pairs of time series from which I compute a cointegration p-value; I then sort the pairs by that p-value, starting from the lowest (for further VECM analysis on the top 100). Not all pairs have the same length: some assets start after others, while all finish at the same time (now), leading to time series of different lengths. To be precise, the two series within a pair always have the same length, otherwise computing cointegration would not be possible. When a pair has series with different numbers of observations (different starting dates), I crop the longer series to match the other and align them so that all points correspond in time. We know that the p-value shrinks as the sample size increases when H0 is false. I confirmed this by generating a cointegrated pair and computing its p-value for different sample sizes (100, 10k and 1M), which gives very different p-values; this is expected since the series are known to be cointegrated (H0 false). From that context you may already guess the question: is it relevant to sort cointegration p-values from pairs of time series having different lengths, knowing that the p-value is "underestimated" when the length is smaller? If not, how can I compute a cointegration "score" that takes the p-value and accounts for the respective sample size, enabling me to sort my pairs on a homogeneous scale? Or is it simply not relevant at all, and should I always use the same length for all my series? Note that in my case the sample is still large for all pairs; it ranges from about 5k to 15k points.
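A small simulation sketch (R, Engle-Granger style) of the sample-size dependence described above. This is illustrative only: tseries::adf.test reports interpolated p-values that are capped at 0.01, and a proper Engle-Granger test would use different critical values for residual-based testing.

library(tseries)
sim_pvalue <- function(n, ar = 0.97) {
  x <- cumsum(rnorm(n))                          # shared stochastic trend
  u <- as.numeric(arima.sim(list(ar = ar), n))   # persistent but stationary error
  y <- 2 * x + u                                 # cointegrated pair with slow mean reversion
  adf.test(residuals(lm(y ~ x)))$p.value         # unit-root test on the residuals
}
set.seed(1)
sapply(c(100, 1000, 10000), sim_pvalue)          # p-value shrinks as n grows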
I have run a Friedman's test in R, followed by pairwise Wilcoxon tests, and it has given me the following result:

> pwc <- psheetv2 %>%
+   wilcox_test(score ~ time, paired = TRUE, p.adjust.method = "bonferroni")
> pwc
# A tibble: 3 × 9
  .y.   group1    group2       n1    n2 statistic        p p.adj p.adj.signif
* <chr> <chr>     <chr>     <int> <int>     <dbl>    <dbl> <dbl> <chr>
1 score bvigilant pvigilant    67    64        28 0.000855 0.003 **
2 score bvigilant rvigilant    67    51        43 0.012    0.037 *
3 score pvigilant rvigilant    64    51       177 0.105    0.315 ns

But despite there being significant differences between the groups, all the group medians are 0. How do I work out the direction of effect if all the medians are the same? Have I done this wrong? Any help would be appreciated, thank you!
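A hedged descriptive sketch (assuming a hypothetical wide-format data frame psheet_wide with one column per condition): with a paired test, the direction comes from the paired differences, which can lean strongly one way even when every group median is 0.

d <- psheet_wide$bvigilant - psheet_wide$pvigilant
table(sign(d))             # how many participants moved up vs down
median(d, na.rm = TRUE)    # median of the paired differences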
I'm trying to analyse bullying experiences across three age groups. The DV is scored on a 5-point Likert scale, and the IV is categorical (ages 11, 13, and 15). Initially I ran an ANOVA to see if there was a significant difference in bullying experiences across the three age groups. The results came back non-significant with a very small effect size. I re-ran it using a Kruskal-Wallis test because the data are ordinal, and again found no significance and a very small effect size. Finally, I tried three Mann-Whitney U tests with Bonferroni corrections, and found the same. Normally I'd call it quits there and accept the results, but when I look at my table of means, there are quite substantial differences between the three groups. I'll summarise one example below:

Age cat.  Mean  SD    N
11-yrs    1.88  1.18  49
13-yrs    2.38  1.65  64
15-yrs    2.62  1.62  58

Would it not be logical to assume there's some difference between 1.88 and 2.62 when it's only on a 5-point scale? My main question here is whether the large standard deviations can be responsible for this inconsistency. In this example 32/171 participants gave the highest score of 5, so it didn't seem like an outlier.
I have built a generalised linear mixed effects model fitted to a gamma distribution. I want to compare this experimental model to a nested null model to see whether it is a better fit for the data. Here is the experimental model for illustration:

fpMod_gamma <- lme4::glmer(
  reactionTimes ~ Condition + typingStyle + Condition:typingStyle +
    depression + Condition:depression + Order + Condition:Order +
    (1 | stimulus) + (1 + Condition | participant),
  family = Gamma(link = "log"), nAGQ = 1,
  control = glmerControl(optimizer = "bobyqa", optCtrl = list(maxfun = 500000)),
  data = fpData)

And here is the nested null model:

fpMod_gamma_rl_null <- lme4::glmer(
  reactionTimes ~ 1 + (1 | stimulus) + (1 + Condition | participant),
  family = Gamma(link = "log"), nAGQ = 1,
  control = glmerControl(optimizer = "bobyqa", optCtrl = list(maxfun = 500000)),
  data = fpData)

I then run anova(fpMod_gamma, fpMod_gamma_rl_null) to compare these models. However, the results suggest the null model is a significantly better fit for the data. I have significant effects in the output of my experimental model and I'm a bit confused about what this means for these significant effects - are they still valid? What conclusions can I draw from the experimental model bearing this in mind? Here is the output from the anova:

                    npar   AIC   BIC logLik deviance  Chisq Df Pr(>Chisq)
fpMod_gamma_rl_null   18 94932 95055 -47448    94896
fpMod_gamma_rl        47 94938 95258 -47422    94844 52.369 29   0.004957 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Any help interpreting this is very appreciated.
We have a manufacturing process in which the finished products have the following requirement: each individual unit must have a weight within ±10% of the average weight (tested with 10 random units). One of the in-process quality control tests is: take 10 units and determine their average weight (X) without measuring individual units. Let's say this test is repeated 100 times throughout the day, so each measurement X is the average of 10 units. From the data X1, X2, ..., X100 we can calculate the mean of X and its standard deviation (a standard error in this case, since X is itself an average). My question is: can you estimate the standard deviation and range of the population (individual units, 1 million in total) and the probability of passing the required finished-product test from this data, and how? Can you use the formula SD = SE * sqrt(n) (in this case n = 10)? If anyone can point me to documentation or guides I'd be very thankful.
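A hedged sketch of the calculation being asked about (assuming units are roughly independent and normally distributed; xbar below is a placeholder for your 100 recorded averages): the SD of individual units can be backed out as SD(mean-of-10) * sqrt(10), and the pass probability then follows from the normal CDF.

xbar <- rnorm(100, mean = 50, sd = 0.5)            # replace with your X1..X100
mu_hat  <- mean(xbar)
sd_unit <- sd(xbar) * sqrt(10)                     # SD = SE * sqrt(n) with n = 10
# Estimated probability that one unit is within ±10% of the average weight
p_one <- pnorm(1.1 * mu_hat, mu_hat, sd_unit) - pnorm(0.9 * mu_hat, mu_hat, sd_unit)
p_one
p_one^10    # rough chance that all 10 sampled units pass, assuming independence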
From what I've seen, it is common practice in Deep Reinforcement Learning to standardize certain data. By standardization, I refer to the process of subtracting the mean and dividing by the standard deviation. (Certain Reinforcement Learning projects refer to this as normalization, and I also encountered the term Z-standardization.) One example is the REINFORCE algorithm example code from the Pytorch examples: returns = (returns - returns.mean()) / (returns.std() + eps) The effects are clear: scaling data to have the mean of zero and a variance of one. However, in one project I encountered the mean subtraction being omitted. (It was long ago and therefore can not provide a link.) This is something I did not see anywhere else. Are there cases where omitting the mean subtraction - thus only dividing by the standard deviation - is beneficial in the field of machine learning? If yes, what are those? I would be particularly interested in Deep Learning.
I have around 33 variables and 300 observations, although some variables have missing data. I would like to obtain the subset of variables that best separates 3 categories in a multidimensional space. I thought about applying some discriminant analysis, but as there are missing data (and I would rather not perform imputation) I am having some trouble. How would you recommend obtaining the best subset of variables that separates these three groups?
We have data $X_1, \dots, X_n$ which are i.i.d. copies of $X$, where we denote $\mathbb{E}[X] = \mu$ and $X$ has finite variance. We define the truncated sample mean $$\hat{\mu}^{\tau} := \frac{1}{n} \sum_{i =1}^n \psi_{\tau}(X_i),$$ where the truncation operator is defined as $$\psi_{\tau}(x) = (|x| \wedge \tau) \, \text{sign}(x), \quad x \in \mathbb{R}, \quad \tau > 0.$$ The bias of this truncated estimator is then defined as Bias $:= \mathbb{E}(\hat{\mu}^{\tau}) - \mu$, and I saw the inequality $$|\text{Bias}| = \big|\mathbb{E}[(X - \text{sign}(X)\tau) \mathbb{I}_{\{|X| > \tau\}}]\big| \leq \frac{\mathbb{E}[X^2]}{\tau},$$ but I am not sure how this was derived.
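For reference, here is one standard way to obtain that bound (a sketch; it may differ in detail from the derivation in the source being read). Since $\psi_\tau(X)=X$ on $\{|X|\le\tau\}$ and $\psi_\tau(X)=\mathrm{sign}(X)\,\tau$ otherwise,
$$
\mathbb{E}[\psi_\tau(X)] - \mu = -\,\mathbb{E}\big[(X - \mathrm{sign}(X)\tau)\,\mathbb{I}_{\{|X|>\tau\}}\big],
$$
so
$$
|\text{Bias}| \le \mathbb{E}\big[\,|X - \mathrm{sign}(X)\tau|\,\mathbb{I}_{\{|X|>\tau\}}\big]
= \mathbb{E}\big[(|X|-\tau)\,\mathbb{I}_{\{|X|>\tau\}}\big]
\le \mathbb{E}\big[|X|\,\mathbb{I}_{\{|X|>\tau\}}\big]
\le \mathbb{E}\Big[|X|\cdot\tfrac{|X|}{\tau}\,\mathbb{I}_{\{|X|>\tau\}}\Big]
\le \frac{\mathbb{E}[X^2]}{\tau}.
$$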
I am comparing calcium intake among 5 groups and want to test whether knowledge of calcium is associated with a higher intake. I have asked 3 questions to assess knowledge, e.g. do they know the RDA for calcium, do they know why we need calcium, etc. Is a 3-way ANOVA correct? The group sizes are unequal. Details: I carried out a food frequency questionnaire, tallied it all up, and ran a one-way ANOVA to compare the means between the 5 groups. I am happy with that. I now want to see whether there is an association between higher intakes and knowledge of calcium. I have 3 questions, 2 of which are Likert style, i.e. if they say they strongly agree with the two statements then it shows they have good knowledge. I have coded the answers and transferred them into Excel. Do I just carry out individual one-way ANOVA tests for each question?
I am looking for an intuition on why ANOVA indicates that the difference between two variables is significant although the variance within each variable is quite large. This is how the means and error bars (showing the standard deviation) look for the two variables: And the one-way ANOVA for the two samples (using scipy.stats.f_oneway, https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.f_oneway.html) indicates: F-statistic: 9.584232820104393, p-value: 0.0020408541413054304. Do you think the result looks odd?
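A small illustrative sketch in R (not the asker's data): two heavily overlapping groups with SD near 1 and a mean difference of only 0.15 still produce a small p-value once n is large, because the F statistic scales with the sample size, not just with the ratio of between- to within-group spread.

set.seed(1)
n <- 1000
g <- rep(c("a", "b"), each = n)
x <- c(rnorm(n, mean = 0, sd = 1), rnorm(n, mean = 0.15, sd = 1))
summary(aov(x ~ g))    # large within-group SD, yet a small p-value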
I am working on hourly weather data. It contains four features: rain, wind speed, humidity, and temperature. Obviously, all of them are continuous values. The number of records is around 17,000. Other than the highly skewed precipitation (almost 90% of values are zero), all parameters are normally distributed. To cope with the skewness, I added one to all precipitation records and then took the log, which improved the skewness a bit, but it is still highly skewed. After performing various preprocessing steps (standardization, PCA, ...), I want to determine the optimal number of clusters for k-means. I tested the silhouette, gap statistic, elbow, and Calinski-Harabasz methods, and the numbers of clusters they identified are 2, 6, 8, and 3, respectively. It is clear that the results of the silhouette and Calinski-Harabasz methods are not correct, because it is implausible for weather conditions to fall into only 2 or 3 types. But what about the elbow and gap statistic? How can I determine the right number of clusters: 6 or 8? EDIT 2: Although the elbow is more primitive than the others, its suggestion seems much closer to real-world conditions. Indeed, the question was raised because we always require a technique to compare the results of the gap statistic (7 clusters) and the elbow (8 clusters). As you know, many clustering techniques need the number of clusters as input. In this case, luckily, the silhouette and Calinski-Harabasz methods show unacceptable results. What if elbow = 8, silhouette = 9, gap = 10, and CH = 11? In a real project, would you again prescribe the same approach, "Be open to the answer being 'k-means cannot cluster this data set well'"? Probably you would come up with another solution! The main benefit of this question could be sharing experiences about your approaches if you have faced a similar issue.
I am modelling the occurrence of a species at 5 different sites on an hourly basis (presence/absence), based on a range of temporal predictors (e.g. time of year, day/night cycle, tides, ...). Covariates are indicated by x1, x2, ... in the code below. For information, I have ~70,000 data points. I am using an HGAM structure, as introduced in Pedersen et al. 2019. For each covariate, I first investigated different specification options (global smoother or not, shared penalty or not, ...), and selected the best one based on AIC and my research question. When putting all of the terms together, I end up with a structure like this:

model <- bam(response ~ offset(log(offset)) +
               s(year, bs = "re") + Site +
               s(x1, m = 2, bs = "cc", k = 8) +
               s(x1, Site, bs = "fs", xt = list(bs = "cc"), m = 2) +
               s(x2, bs = "cc", by = Site, m = 2, k = 8) +
               s(x3, m = 2, bs = "cc", k = 10) +
               s(x3, by = Site, bs = "cc", m = 1, k = 10) +
               s(x4, bs = "cc", by = Site, m = 2, k = 8) +
               s(x5, bs = "tp", by = Site, m = 2, k = 8),
             family = "binomial", data = data.all, method = "REML",
             cluster = cl, select = TRUE)

The explained deviance of the model is only about 13%. Given the high temporal resolution (hourly), I expect some temporal autocorrelation in the residuals. I read in the bam() documentation, and in other posts (here and there), that this could be specified using the rho argument in bam(), following the order of the dataset. However, I am not sure this applies when the model has a hierarchical structure? I know it is feasible in gamm(), but the problem then is that I cannot specify multiple factor-smoother interaction terms as currently written in the model... Is this something that could be tackled in brms? Or any other way? For information, here is the output of the acf and pacf function plots:
I want to use self-normalized importance sampling to estimate $$\int_{1}^{\infty} \frac{x^2}{\sqrt{2\pi}}e^{\frac{-x^2}{2}} \,dx$$ I chose the exponential distribution with rate $\lambda=1$ as my importance function, which is $$f(x)=e^{-x}$$ The true value of the integral is about $0.400$ but I get 0.799. The following is my R code; I followed the algorithm on page 32 of http://people.sabanciuniv.edu/sinanyildirim/Lecture_notes.pdf and I still can't find the error in my code.

N = 10000
f = function(x) {
  return((x^2) * (x >= 1))
}
p = function(x) {
  return(dnorm(x))   # (1/sqrt(2*pi)) * exp((-x^2)/2)
}
q = function(x) {
  return(exp(-x))
}
x = rexp(N, rate = 1)
theta.hat2 = sum((p(x)/q(x)) * f(x)) / sum((p(x)/q(x)))
theta.hat2

Update: I used the t distribution as the importance function because it has the same support as the target density, and I get the desired value. How can I compute the variance of this estimator?

N = 10000
f = function(x) {
  return(x^2)
}
p = function(x) {
  return(exp((-x^2)/2))
}
q = function(x) {
  return(dt(x, df = 3))
}
x = rt(N, df = 3)
w_u = p(x)/q(x)
w = w_u/sum(w_u)
theta.hat2 = sum(w * f(x) * (x >= 1))
theta.hat2
# 0.4064571
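For the variance question, one common delta-method approximation for the self-normalized estimator (a sketch, reusing x, w and theta.hat2 from the t-distribution version above) is the weighted sum of squared deviations of the integrand from the estimate:

h <- f(x) * (x >= 1)
var.hat <- sum(w^2 * (h - theta.hat2)^2)   # approximate variance of theta.hat2
sqrt(var.hat)                              # approximate standard error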
I have observed N pairs (a_i, p_i), each drawn from a different Bernoulli distribution. Here the a_i are observed amplitudes and the p_i are observed probabilities of success for the i-th draw. I would like to model the full likelihood distribution (and not just the MLE) with a view to identifying which draws belong to the successful class (and hence the statistics of just these draws, especially any irregular uncertainty distributions, etc.). For example, I have

a_i   p_i    p_true
10    0.2    0
100   0.9    1
11    0.1    0
99    0.93   1
12    0.25   0

I know the PDFs of the model and of the data (both are obviously Bernoulli distributions), but how do I combine them to obtain residuals that I can use to explore the joint distribution? What distribution does the joint distribution follow, and how? I have tried to unpack the cross-entropy of the data and model but don't have a clear solution. Is the continuous Bernoulli distribution a complete red herring? Note that ordering doesn't matter in my example. Some other points in response to comments: the p_i values come from an oracle; these represent the probabilities that the data points belong to a common class. The amplitudes a_i are just weights for the corresponding Bernoulli distributions; the higher the amplitude, the higher the scaling of the Bernoulli distribution in its contribution to the overall process. See also:
"Weighted" Poisson binomial distribution
Weighted sum of Bernoulli distributions
https://math.stackexchange.com/questions/3481907/sum-of-weighted-independent-bernoulli-rvs
What is the CDF of the sum of weighted Bernoulli random variables?
Please let me know if you need any more information to assist in answering. Thanks as ever!
I have a dataset with destructive follow-up. That is, with a population starting at time 0, we are taking out a proportion at predetermined time points to see whether the event has occurred to them. The sampling is destructive, so we can't sample the same individuals repeatedly. In this case, we need to dissect a fish to determine whether the event has happened and we do this to a set number of fish at predetermined times. For example, we dissect 15 fish at 24h to know which animals have had the event, then we dissect another 15 animals at 48h and so on. I have done a logistic regression with time as one of the predictors and binary outcome (binomial family glm), but I wanted to ask if it's possible to use survival analysis for this type of data. I think what I have is left censored data for the animals where the event has occurred at time of dissection, and right censored for the animals where it hasn't occurred. In that case, there is only censored data, right? Is there a correct way of using this kind of data for survival analysis? Edit: In this experiment, I have size of fish and temperature as covariates, so ideally, I would like to test whether these two variables affect the time to event. Both temperature and weight can be stratified ("small, medium, large" and "cold, medium, warm") as they are semi-controlled variables, but I'll probably get more information out of using the exact measurements rather than creating dummy variables. I can also add that the event is certain to happen eventually with the experiment designed to keep going until >95% of the fish are expected to have had the event. I also know for certain that none of the fish had the event at time 0. This then also would suggest that I have reasonable priors, so could take a bayesian approach.
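A hedged sketch of one standard coding for this situation (hypothetical data frame `fish` with columns time, event, temperature, weight): data where each animal contributes only "event had/had not happened by its dissection time" can be treated as interval-censored and fit with a parametric survival model.

library(survival)
# event seen at dissection time t  -> left-censored on (0, t]
# event not yet seen at time t     -> right-censored on (t, Inf)
fish$lower <- ifelse(fish$event == 1, NA, fish$time)
fish$upper <- ifelse(fish$event == 1, fish$time, NA)
fit <- survreg(Surv(lower, upper, type = "interval2") ~ temperature + weight,
               data = fish, dist = "weibull")
summary(fit)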
I would like to determine the CDF and PDF from quantiles that I have determined via quantile regression. I have read here in the forum (Find the PDF from quantiles) that it is possible to interpolate this via the integral of a B-spline; the PDF should then be determined via a normal evaluation. Unfortunately I did not understand why I have to use the integral of the B-spline, how I can ensure that the CDF is monotonically increasing, and how I then get to the derivative (the PDF). Can someone help me please? This is how it currently looks for me:

import numpy as np
import scipy.interpolate
import matplotlib.pyplot as plt

x = np.array([ 38.45442808, 45.12051933, 46.85565437, 47.84576924, 49.50084204,
               50.09833301, 51.3717386 , 54.85307741, 59.91982266, 63.11786854,
               66.90037244, 67.84446378, 72.96120777, 73.92993279, 81.63075081,
               85.42178836, 90.70554533, 91.2393176 , 110.03872988])
y = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
              0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95])

t, c, k = scipy.interpolate.splrep(x, y)
spline = scipy.interpolate.BSpline(t, c, k, extrapolate=False)
d_spline = spline.derivative()

N = 100
xmin, xmax = x.min(), x.max()
xx = np.linspace(xmin, xmax, N)

fig, ax = plt.subplots(2, 1, figsize=(12, 8))
ax[0].plot(x, y, 'bo', label='Original points')
ax[0].plot(xx, spline(xx), 'r', label='BSpline')
ax[1].plot(xx, d_spline(xx), 'c', label='BSpline derivative')

My approach doesn't really work well, unfortunately, and I can't find any numerical examples to help me. I am grateful for all comments and remarks! Thank you!
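A sketch in R (used here only to keep all added examples in one language) of one way to guarantee a monotone CDF: a monotone cubic interpolant of the quantile points is non-decreasing by construction, and its derivative gives a PDF. This assumes the quantile estimates themselves are non-decreasing.

x <- c(38.45442808, 45.12051933, 46.85565437, 47.84576924, 49.50084204,
       50.09833301, 51.3717386, 54.85307741, 59.91982266, 63.11786854,
       66.90037244, 67.84446378, 72.96120777, 73.92993279, 81.63075081,
       85.42178836, 90.70554533, 91.2393176, 110.03872988)
p <- seq(0.05, 0.95, by = 0.05)
cdf <- splinefun(x, p, method = "hyman")    # monotone cubic interpolation
xx  <- seq(min(x), max(x), length.out = 200)
plot(xx, cdf(xx), type = "l")               # estimated CDF
plot(xx, cdf(xx, deriv = 1), type = "l")    # estimated PDF (derivative of the CDF)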
I have a dataset consisting of time periods, at the end of each of which the individual either develops a disease or doesn't and is right-censored. I suspect that the rate of developing the disease is determined by a power function of another known variable, N, raised to an unknown power b, with an unknown multiplier a determined by a random effect across the different individuals id. I found this page very helpful, but I would like to use a survival model as the response (lhs) and a mixed non-linear model as the input (rhs). The data look something like this (where I've made the power parameter 0.4):

library(tidyverse)
library(survival)
library(lme4)

Nsamples = 100
Z = tibble(ev = sample(0:1, size = Nsamples, replace = T),            # event outcome
           N = rpois(n = Nsamples, lambda = 100),                     # known input variable
           dur = N^0.4 * rnorm(n = Nsamples, mean = 1, sd = 0.1),     # length of follow-up
           id = rep(letters[1:2], each = Nsamples/2))                 # a random effect

I use a deriv function to define the non-linear part:

power.f = deriv(~ a * N^b, namevec = c('a', 'b'), function.arg = c('a', 'N', 'b'))

And a survival function to define the output:

surv_obj <- with(Z, Surv(time = dur, event = ev))

Putting these together, and using the lme4::nlmer function with the format Output ~ Non-linear part ~ Random effect:

nlmer(surv_obj ~ power.f(a, N, b) ~ (a | id), data = Z, start = c(a = 1, b = 0.5))

However, I get the following error:

# Error in resp$ptr() : dimension mismatch

In principle this should work, I think, but I am not sure these functions are compatible with each other, and the error message doesn't mean much to me. Please help if you know how to get this to work, or some other way to fit a survival model like this with a power law in the input variables. Thank you!
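A hedged alternative sketch (not the nlmer route the question asks about): if the event rate is a * N^b with a multiplicative per-id effect a, then the log-hazard is linear in log(N), so one option is a Cox model with log(N) as a covariate and a frailty (random effect) on id, reusing the simulated data frame Z above.

library(survival)
fit <- coxph(Surv(dur, ev) ~ log(N) + frailty(id), data = Z)
summary(fit)   # the coefficient on log(N) plays the role of the power b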
Short version Can bootstrap be used to find disconnected confidence regions when MLE is not unique? Long version Let $\theta$ be a parameter and $P_\theta=\mathrm{Normal}(\theta, 1)$ be a distribution. In the frequentist approach to statistical inference one has access to the sample $X_1, \dotsc, X_n\sim P_\theta$ and can construct the maximum likelihood estimator $\hat\theta(X_1, \dotsc, X_n) = \frac{1}{n}\sum_i X_i$ as well as construct the 95% confidence interval $\mathrm{CI}(X_1, \dotsc, X_n)$ around $\hat \theta$ in various manners (in this case an analytic formula is available, but one could also use likelihood profile, Fisher information matrix, or bootstrap). My understanding is that: We work with an identifiable model (for $\theta_1\neq \theta_2$ we have $P_{\theta_1}\neq P_{\theta_2}$), so for large $n$ we will find (approximately) unique $P_\theta$ and from this $\theta$ in turn. The confidence intervals based on any of the above methods have their usual meaning, i.e., if we repeat the procedure of sampling the data from $P_\theta$ and construct the confidence interval for each sample, then 95% of them will cover the true value $\theta$. However, in more complex situations (especially when the model is non-identifiable) a maximum likelihood solution may not be unique. For example, consider $P_\theta=\mathrm{Normal}(\theta^2, 1)$ with two maximum likelihood estimates, $\hat \theta_{\pm}(X_1, \dotsc, X_n) = \pm\sqrt{\frac 1n\sum X_i}$. In this case I can still define 95% confidence regions (which often will be disconnected in this case, consisting of two intervals) for parameter $\theta$ adjusting the analytical formulas. However, I do not know how to construct the confidence regions when (a) analytical formulae are not available or (b) I do not even know how many maximum likelihood solutions exist. Could you recommend me some references on finding confidence regions with non-unique maximum likelihood estimates? I do not know whether methods such as likelihood profile, Fisher information matrix, or (most importantly for me) bootstrap would still work and retain the usual meaning of confidence regions. I know that in Bayesian statistics identifiability poses different kinds of issues (as label switching and how to understand the multimodality in the posterior), but I would like to learn how this problem can be tackled from the frequentist perspective.
Small area estimation (SAE) techniques combine information from household surveys with existing auxiliary information at the population level to make inferences about certain indicators for population groups that represent disaggregations for which the survey was not designed. Most SAE methods, both direct and indirect (for more info read this document), require the survey to record the small area where each respondent lives. In other words, when trying to estimate a variable $y$ for each respondent $r$ in a small region $s$, most methods rely on knowledge of $y_{rs}$. I am studying a survey that does not provide $y_{rs}$; only aggregated zoning information is provided for each respondent. According to this Eurostat study, the only SAE methods that work for this case are called synthetic estimators, which basically consider the small areas to be homogeneous in that they have common parameters, without allowing any degree of heterogeneity between them. As stated in this other question, the assumptions of these methods are very strong and the bias of the estimation can be considerably high. Are there any other methods that can be applied when no information regarding the target small area is recorded within the survey?
I am learning Gibbs sampling for GMMs. In particular, given $\boldsymbol \theta$, I must sample from the latent $\boldsymbol z$ before sampling $\boldsymbol x$. The distribution of $\boldsymbol z$ is given as $$ P(z=k|\boldsymbol x,\boldsymbol\theta)\propto P(z=k)\,\mathcal N(\boldsymbol x;\mu_k,\Sigma_k) $$ for $k=1,\ldots,K$, and by assumption $\mu_k, \Sigma_k, P(z=k)$ are given. Gibbs sampling states that I should iteratively sample from $\boldsymbol z$ and $\boldsymbol x$ and accept every sample (since it is MH with acceptance probability $1$). The problem is, how do I sample from $\boldsymbol z$? Sampling from $P(\boldsymbol x|\boldsymbol z, \boldsymbol\theta)$ is easy once I have $\boldsymbol z$, since I simply select the Gaussian component corresponding to $z_k$ and call packages for sampling from a Gaussian distribution. Do I need to perform another MCMC run for every single sample of $z$, or perhaps rejection or importance sampling?
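A hedged R sketch of the step being asked about: conditional on x and theta, z is a K-category discrete distribution, so no extra MCMC, rejection, or importance sampling is needed; compute the K unnormalized weights and draw directly.

library(mvtnorm)   # for the multivariate normal density dmvnorm()
sample_z <- function(x, pi_k, mu, Sigma) {   # mu, Sigma: lists of length K
  K <- length(pi_k)
  w <- vapply(seq_len(K),
              function(k) pi_k[k] * dmvnorm(x, mu[[k]], Sigma[[k]]),
              numeric(1))
  sample.int(K, size = 1, prob = w)          # prob is normalized internally
}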
I am using xgboost (in R) to predict a continuous non-negative target variable using standard root mean squared error as an evaluation metric (and loss function). However, as pointed out in this post, it is possible that a non-negative target sometimes receives negative predictions. Since I have many training values in the target that are close to zero, there are a few observations that have received a prediction below zero. So, is there any way to train my xgboost model so that predictions are >= 0? Maybe through different loss functions?
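A hedged sketch of one option (hypothetical objects X_train, y_train, X_test): use an objective that works on a log link, e.g. Tweedie (which allows exact zeros) or gamma (strictly positive targets), so that predictions are non-negative by construction rather than clipped after the fact.

library(xgboost)
dtrain <- xgb.DMatrix(data = X_train, label = y_train)
params <- list(objective = "reg:tweedie", tweedie_variance_power = 1.3,
               eta = 0.1, max_depth = 6)
fit  <- xgb.train(params = params, data = dtrain, nrounds = 300)
pred <- predict(fit, xgb.DMatrix(X_test))   # non-negative by construction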
In Rubin 1990, Donald Rubin describes four different modes of statistical inference for causal effects: Randomization-based tests of sharp-null hypotheses - in the tradition of Fisher, if you've got an unconfounded assignment mechanism combined with a sharp null hypothesis of no treatment effect, you compute the value of a test statistic in your sample and compare that to the sampling distribution of the test statistic under the null to get a p-value (which can also give you a confidence interval by inverting the null hypothesis test) Randomization-based inference for sampling distributions of estimands (aka repeated sampling randomization-based inference) - in the tradition of Neyman for survey sampling, where you define an estimand of interest Q, select a statistic $\hat{Q}$ that is an unbiased estimator of the estimand, find a statistic $\hat{V}$ that is an unbiased estimator of the variance of $\hat{Q}$, assume the randomization distribution of $(Q - \hat{Q}) \sim N(0, \hat{V})$, and perform inference using that distribution (sometimes you assume a t-distribution instead of a normal distribution). Bayesian inference (aka Bayesian model-based inference) - take the assignment mechanism from the potential outcomes framework, supplement it with a joint probability model $Pr(X,Y)$ (factored in such a way that $Pr(X,Y) = \int \prod_{i=1}^{N} f(X_i, Y_i | \theta) Pr(\theta) d\theta$, where $\theta$ is a parameter such that it's straightforward to compute the causal estimand Q as a function of $\theta$) for your covariates and outcome, specify the prior distribution of the parameter $Pr(\theta)$ and calculate the posterior distribution of the causal estimand of interest Q. Superpopulation frequency inference (aka repeated-sampling model-based inference)- take the assignment mechanism and the probability model $\prod_{i=1}^{N} f(X_i, Y_i | \theta)$, but discard the prior distribution and draw frequency inferences about $\theta$ using tools of mathematical statistics like maximum likelihood, likelihood ratios, etc. Suppose I am using an inverse probability of treatment weighted (IPTW) estimator to fit a marginal structural model to estimate the average treatment effect (ATE) of an active treatment relative to a control treatment using observational data. Let's make the usual assumptions needed to do this kind of inference (treatment version irrelevance, no interference, positivity, conditional exchangeability/no unmeasured confounders, no measurement error in X, correct specification of the nuisance model to estimate the weights, any missing data satisfy the stratified MCAR assumption). If I want to do frequentist inference and get a 95% confidence interval for the ATE, am I appealing to inference mode 2 (repeated sampling randomization-based inference) or 4 (repeated sampling model-based inference)? Or is there some other argument used in this setting to justify variance estimation. Rubin, Donald B. (1990). Formal modes of statistical inference for causal effects. Journal of Statistical Planning and Inference. 25. 279-292.
I'm working on a project that does simulations/measurements where we measure values between [-1, 1] (but we also use absolute values in [0, 1] to make life simpler). It's 'good' when everything is as close as possible to zero. It's 'bad' when a few measurements (even a single one) are larger than 0.05, and 'critical' when they are close to 1. The number of measurements ranges from 20,000 to more than 1,000,000 per test. When we make a change we simulate the effects. When plotting/graphing the results, it's easy for a human to "see" whether the changes made improvements (fewer 'peaks') or made things worse (more 'peaks'), e.g. a run with 11 peaks versus, after improvements, a run with 4 peaks. I never needed to analyse such data mathematically, but I'm struggling to find mathematical concepts, functions, algorithms or approaches that say anything meaningful about the reduction in peaks. As a human I can "see" the improvement is around 2.75x, but mathematically I tried a few statistics: total area of the graph; mean values; standard deviation values; combinations of the values above, either multiplied or divided by others or by the total measurement count. I think the problem is that the number of data points where peaks occur is very small, but I also can't hardcode the 0.05 and 1 values for filtering peaks out, because it's possible to have a test that's 0.001 everywhere but with peaks of 0.05 (also bad). I think normalizing such data would allow me to apply any solutions I get in this post (i.e. scaling all values by one divided by the biggest value). But in none of the above cases did I get any human-comprehensible values. I'm wondering what algorithmic approaches would allow me to automate such 'peak' analysis and allow a program to "see" that the improvement is ~2.75x in the above graphs.
I am running a basic difference-in-differences (DiD) model. I would like to estimate the effect of increasing the price of fares for students only. I use DiD with adults as a control group. I have monthly data on tickets sold for each category. To avoid seasonality I have data before treatment, from April 2021 to December 2021, and after treatment; the treatment started on the 1st of April 2022, so the post-periods are April 2022 to December 2022. Can I do DiD like this? When I did a calculation by hand (not modelled), I realized that the ratio of students to adults was 0.518 before treatment and 0.35 after treatment. I calculated that the decrease caused by the treatment for students was around 31%. I calculated that under the assumption that the ratio would have stayed the same without treatment, because with DiD I presume that the only thing that changed is the treatment. Also, I calculated how much demand for adults went up: around 81%, while student demand went up only 21.4%. So I used the same logic: what is the difference between the counterfactual in which students also rise by 81% and the real state of the world? Then I ran a DiD model using the standard lm() function with no other predictors; all the group averages stayed the same as expected. But when I calculated the counterfactual to know how many students there would be if there were no intervention, it gives me a much higher number: in the DiD analysis, demand went down by 50% and not 31%. How is this possible? The DiD counterfactual somehow presumes that the students should have increased their demand even more! Why? This is how the data look (tables of the raw data, my DiD results, and the ratio of students to adults). Thanks for your help!
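A hedged sketch (hypothetical column names tickets, student, post): with count outcomes, a DiD on log(tickets) estimates a proportional effect, which lines up with the "ratios would have stayed constant" reasoning, whereas a DiD in levels assumes the absolute gap would have stayed constant. That difference alone can produce the 31% vs 50% discrepancy described above.

fit <- lm(log(tickets) ~ student * post, data = df)
summary(fit)
exp(coef(fit)["student:post"]) - 1   # approximate proportional change for students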
How should one understand and explain a variable's sign changing after including its square in a regression? For example, in a regression X loads negatively, but X loads positively after further including the square of X. Does this mean something is wrong with the regression specification or not? Do we need to take some measures to address this issue?
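An illustrative sketch in R of how this can happen with a purely curved relationship: with an inverted-U shape, the linear-only model picks up the overall downward trend, while the model with the square recovers the positive linear and negative quadratic terms, so the sign change by itself need not indicate a specification error.

set.seed(1)
x <- runif(200, 0, 3)
y <- x - x^2 + rnorm(200, sd = 0.2)   # inverted-U relationship
coef(lm(y ~ x))                       # x loads negatively (overall downward trend)
coef(lm(y ~ x + I(x^2)))              # x now loads positively, x^2 negatively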
I found that the predicted hazard (the h(t) of Cox regression) obtained through Predict() and cph() in the rms package was different from that of the common coxph().

URL <- "https://socialsciences.mcmaster.ca/jfox/Books/Companion/data/Rossi.txt"
Rossi <- read.table(URL, header=TRUE)
Rossi[1:3, c("week", "arrest", "fin", "age", "prio")]
# looks like this
#   week arrest fin age prio
# 1   20      1  no  27    3
# 2   17      1  no  18    8
# 3   25      1  no  19   13

library(survival)
fitCPH <- coxph(Surv(week, arrest) ~ fin + age + prio, data=Rossi)
risk <- predict(fitCPH, type="risk")
print(head(risk), digits = 6)
# [1] 0.852426 2.531330 3.842432 0.649217 1.458157 0.946252

library(rms)
dd <- datadist(Rossi)
options(datadist = 'dd')
fitCPH2 <- cph(Surv(week, arrest) ~ fin + age + prio, data=Rossi)
# Take the 1st individual in Rossi as a sample: fin="no", age=27, prio=3
rms::Predict(fitCPH2, fin="no", age=27, prio=3, fun=exp, type = "predictions",
             ref.zero = FALSE, conf.int=0.95, digits = 4)
#   fin age prio     yhat     lower    upper
# 1  no  27    3 1.013916 0.8171016 1.258137
# Response variable (y):
# Limits are 0.95 confidence limits

I thought yhat should be 0.852426 (the first number of risk), so why is it 1.013916?
I'm consulting with a local quantitative person about some data; it's doubly censored. Left-censored means below the limit of quantitation; right-censored means saturation of the assay. I want to regress it against a simple numeric predictor. I was just told that using survival analysis may not be appropriate because "the censoring is not independent of the outcome". Aren't so-called Tobit models also censored in a way that is not "independent of the outcome", yet they can be run via a wrapper function in AER that just calls survreg? I'm simply confused.
I have a large dataset (1,465 observations, >50,000 predictors). I conducted a LASSO regression using glmnet in order to perform variable selection, then I ran a simple logistic regression model to test associations with the 15 predictors that were left with non-zero coefficients after LASSO. After the simple logistic regression model, I wanted to conduct multiple test correction, so I did, using the Benjamini-Hochberg Procedure. However, I've been told this is improper, as it is "double-dipping," so to speak. The p-values are supposed to be derived directly from the LASSO regression (which can then be corrected). However, I don't know how to get those p-values. I've tried the islasso package, but I run into a stack overflow issue. Running RStudio from the command line to increase max-ppsize as well as increasing "R_MAX_VSIZE" and options(expressions = 5e5), I still run into the issue. How can I calculate these p-values and correct them? I have the proper lambda value, which is 0.0412638 (obtained through elastic net cross-validation). Any help you all can provide is greatly appreciated. Thank you for reading!
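A hedged sketch of one clean alternative that avoids the double-dipping concern: sample splitting. Select with the lasso on one half of the data, then fit an ordinary logistic model on the held-out half and apply BH only to those p-values. X and y here stand for the full design matrix and binary outcome (assumptions, not objects from the original post).

library(glmnet)
set.seed(1)
idx   <- sample(nrow(X), nrow(X) / 2)
cvfit <- cv.glmnet(X[idx, ], y[idx], family = "binomial")
sel   <- which(as.vector(coef(cvfit, s = "lambda.min"))[-1] != 0)   # selected columns
fit2  <- glm(y[-idx] ~ X[-idx, sel], family = binomial)             # fit on the other half
p.adjust(summary(fit2)$coefficients[-1, 4], method = "BH")          # corrected p-values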
I want to build a database of device performance on various tests so that I can compare devices on a single test. Unfortunately not all reviewers review every device, which leaves holes in my data that prevent comparing every device in the dataset on every metric. But there is enough overlap in measurements of the same devices that I can estimate how different tests are related, i.e. everyone reviews the iPad, but not everyone reviews the middle size of Huawei device. What kind of statistical model can I use to help predict the performance for missing data points based on past data? (Bonus points if I can generate a confidence interval for predicted points.) I have used regression between two sources based on their overlap to help predict what the results might look like for missing points, but I assume there is a way to exploit all the data at once to get better predictions, as well as to avoid using two "hops" of estimation in some cases. Toy example for battery life:

Model         Bob's Guide  Footbook  Check The Berge
iPad          10.95        12        10
Tab S8        12.88        13.5      ?
Tab S8 Ultra  ?            10.85     ?
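A hedged sketch of one simple all-at-once approach, using the toy data above in long format: a two-way additive model (device effect + reviewer effect) fit to the observed cells, with predict() filling missing cells and giving prediction intervals. With more devices and reviewers, a mixed model or matrix-factorization approach generalizes the same idea.

long <- data.frame(
  device   = c("iPad", "iPad", "iPad", "TabS8", "TabS8", "TabS8Ultra"),
  reviewer = c("Bob", "Footbook", "Berge", "Bob", "Footbook", "Footbook"),
  hours    = c(10.95, 12, 10, 12.88, 13.5, 10.85))
fit <- lm(hours ~ device + reviewer, data = long)
predict(fit, newdata = data.frame(device = "TabS8", reviewer = "Berge"),
        interval = "prediction")   # point prediction plus a (wide, few-df) interval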
Fundamental question: I have one dataset which I have used to build a model for survival prediction. Here I get about 40 genes as my predictors, which I have tested in both my training and test datasets; for this I used the TCGA cohort. Now, to test on independent data, I have chosen another, bigger patient cohort, the beatAML dataset, which I want to use as my validation dataset. The issue I'm facing is that some of the genes for which I have coefficients are missing in my validation dataset. So the simple question is: is it conceptually correct to eliminate the genes which are not present in my validation cohort and go ahead with the ones which are present?
I calculate models that describe measured data. Most people in the field don't attempt anything statistical (the models have many potential sources of uncertainty). However, I want to do a better job and see hints that expectations are changing. I can't find any resources (that are simple enough for me to understand) that answer my big-picture questions though. Could someone please point me in the right direction? Here's my situation. I typically have (1) a set of measured data and (2) a model for describing the data. As far as I know, I can't propagate uncertainties because my "measured data" are actually calculation results based on methods that can't be verified at certain conditions. What I can do is evaluate "goodness of fit." I felt proud of myself when I wrote down the "diagonal of covariance matrix (variance)" from something I found in Matlab, but my colleague doesn't understand what it is and doesn't think anyone else will either. Also, the values look weird (some are negative). My colleague thinks I should use "chi squared." So I looked that up. According to a description I found online, it is appropriate for "categorical" data. I'm a little iffy on what "categorical" means, but although I can divide my data and model results into categories (like measurements/model results at particular temperature), it seems like I don't have categorical data. My data are a property, Z, at a temperature and another condition. After all that, I'm thinking that I should be using "r squared." Is this what people well-versed in statistics would expect to see in an evaluation of a model fit to measured data? I have calculated it but don't understand what it means on a fundamental level. I realize I'm asking a few broad questions but any tips would be very welcome as I'm feeling like I'm not getting any further on this.
Edit: after skimming this paper6, I narrowed the scope of this question to NLP problems. Relevant excerpt from the abstract (emphasis my own): We demonstrate that unsupervised preprocessing can, in fact, introduce a substantial bias into cross-validation estimates and potentially hurt model selection. This bias may be either positive or negative and its exact magnitude depends on all the parameters of the problem in an intricate manner. Motivation It's obviously wrong to train on test set features with test set labels. But in many ML competitions, it's standard to release test set features and allow participants to train on them. One example is the Real-world Annotated Few-shot Tasks (RAFT) benchmark in NLP.1 Here's an excerpt from the RAFT paper (emphasis my own): For each task, we release a public training set with 50 examples and a larger unlabeled test set. We encourage unsupervised pre-training on the unlabelled examples and open-domain information retrieval. In the RAFT competition, you submit predictions by running your model on the same set of unlabeled texts which you may train on. In NLP, a common way to train on unlabeled text is to train a language model which predicts tokens conditional on other tokens. I understand that releasing test set features is helpful for those hosting the competition, as it allows participants to submit predictions rather than models/code. I also understand that in real-world model development, you may have observed lots of unlabeled text. But I think the critical difference is that in the real world, you don't have access to out-of-sample text. Question Is training a model on (in-sample) test set texts, and then evaluating that model on the same test set an optimistic estimator of out-of-sample performance? A reasonable-sounding hypothesis is that training on (in-sample) test set texts results in correlation between test set predictions and test set labels, which is an optimistic estimator (at least for linear regression, see equation 7.21 in ESL2). But I don't have an argument for how exactly that dependence arises from training on test set texts without test set labels. The result of my experiment with PCA here has an important implication for ML competitions: if there are few test set observations and features exhibit high rank, then one can artificially reduce error on the test set by fitting a PCA on test set features. I'm curious to see if a similar type of result can be observed in NLP, where it's standard practice to train language models on unlabeled text before classification tasks.3 I have a feeling that part of the answer lies somewhere in the paper On Causal and Anticausal Learning4 or its child Causal Direction of Data Collection Matters: Implications of Causal and Anticausal Learning for NLP5. These papers establish that semi-supervised learning should only help for data where text causes the target. References Alex, Neel, et al. "RAFT: A real-world few-shot text classification benchmark." arXiv preprint arXiv:2109.14076 (2021). Hastie, Trevor, et al. The elements of statistical learning: data mining, inference, and prediction. Vol. 2. New York: springer, 2009. Gururangan, Suchin, et al. "Don't stop pretraining: Adapt language models to domains and tasks." arXiv preprint arXiv:2004.10964 (2020). Schölkopf, Bernhard, et al. "On causal and anticausal learning." arXiv preprint arXiv:1206.6471 (2012). Jin, Zhijing, et al. "Causal direction of data collection matters: Implications of causal and anticausal learning for NLP." 
arXiv preprint arXiv:2110.03618 (2021). Moscovich, Amit, and Saharon Rosset. "On the cross-validation bias due to unsupervised preprocessing." Journal of the Royal Statistical Society Series B: Statistical Methodology 84.4 (2022): 1474-1502.
In the case of a point treatment, the average treatment effect among the treated (ATT) is the treatment effect standardized to the subset of the sample that received treatment $A=1$ as an estimate of the treatment effect among patients who would receive treatment in the population. In the potential outcomes formulation, $\mathbb{E}[Y^{a=1}|A=1] - \mathbb{E}[Y^{a=0}|A=1]$ on the risk difference scale. Can this estimand be extended to the time-varying case, where we may be interested in the effect of perfect adherence to a treatment regimen $f_a$ in the subset of patients who completely adhered? Analogously, if the inverse probability weights to estimate the ATT for a point treatment are $\frac{P(A=1|\boldsymbol{X})}{1-P(A=1|\boldsymbol{X})}$ for baseline confounders $\boldsymbol{X}$, what would these be for a time-varying ATT?
I need to calculate the values for certain return periods of a flood event (up to 5000). It has to be GEV with the method of moments and Gumbel with L-moments. But I am not sure how to calculate the Gumbel one, because I don't know how to implement the parameters, and for the GEV I also didn't find any good example where GEV is used in combination with the method of moments. Usually I would calculate Gumbel like this, but there I am not using the parameters directly:

Frequency_factor = -(sqrt(6)/pi) * (euler_mascheroni + log(log(Q) - log(Q - 1)))
Q = mean(Maxima) + Frequency_factor * sd(Maxima)

I am looking for any examples where this has already been done, or an explanation, so I can understand and do it as well.
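A hedged sketch with the lmom package (assuming `maxima` is the vector of annual maxima): fit Gumbel parameters from sample L-moments, then evaluate return levels at F = 1 - 1/T for the return periods of interest.

library(lmom)
lmoms <- samlmu(maxima)        # sample L-moments of the annual maxima
para  <- pelgum(lmoms)         # Gumbel location/scale fitted from L-moments
Tret  <- c(10, 100, 1000, 5000)
quagum(1 - 1/Tret, para)       # return levels for each return period
# pelgev(lmoms) / quagev() work the same way for the GEV, although that is an
# L-moment fit rather than the method of moments.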
I make this kind of scatterplot a lot and was curious if it has a proper name. The diagonal line is where the two variables have the same value. Observations above the line have a greater value on the y-axis, and those below the line have a greater value on the x-axis. The vertical and horizontal distances from the line represent the absolute difference between the groups, and the diagonal distance is half of that difference. Generally, the idea is that the further a point is from the line, the greater the difference between the variables. The plot makes the most sense when both variables are on the same scale.
I am doing propensity score weighting with the package WeightIt and I want to have a "Table 1: Before and After Weighting" like this one: As you can see, data are displayed EITHER as mean ± SD or frequency (percentage), depending on whether the variable is continuous or categorical. I want to do the same thing. This is my code:

data("lalonde", package = "cobalt")
library(cobalt)
library(WeightIt)

W.out <- weightit(treat ~ age + educ + race + married + nodegree + re74 + re75,
                  data = lalonde, estimand = "ATT", method = "ps")
W.out
bal.tab(W.out, stats = c("m", "v"), thresholds = c(m = .05),
        disp = c("means", "sds"))

However, the problem with bal.tab from the cobalt package is that it only shows means (SD) and does not show frequency (percentage) for categorical variables as I want. How can I overcome this problem? I can also use another package, both for IPTW and for creating the tables. I just want to perform IPTW and create a table before and after weighting.
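A hedged sketch of one route: pass the IPTW weights to a survey design and let tableone print mean (SD) for continuous variables and n (%) for categorical ones, before and after weighting (reusing W.out and lalonde from the code above).

library(survey)
library(tableone)
vars <- c("age", "educ", "race", "married", "nodegree", "re74", "re75")
unw  <- CreateTableOne(vars = vars, strata = "treat", data = lalonde,
                       factorVars = c("married", "nodegree"))
des  <- svydesign(ids = ~1, weights = W.out$weights, data = lalonde)
wtd  <- svyCreateTableOne(vars = vars, strata = "treat", data = des,
                          factorVars = c("married", "nodegree"))
print(unw, smd = TRUE)   # before weighting
print(wtd, smd = TRUE)   # after weighting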
I am trying to build a regression model to find the correlation between the features of thumbnails and the popularity of videos. I propose that the category of the video moderates these correlations, and I selected two categories for comparison (dichotomous). As I have 3 independent variables (features of thumbnails), do I include an interaction term for each feature? I understand that this could make my model overly complex and that multicollinearity could cause problems. Alternatively, is it possible to test for only one moderation effect in each model and build three models; is this an appropriate way to do it? I would also like to ask why I can't just split my dataset by category, construct two models, and compare the coefficients. Thank you!
This is the example I'm referring to; it is taken from Mostly Harmless Econometrics: An Empiricist's Companion by Angrist and Pischke: Suppose that we are interested in whether children do better in school by virtue of having started school a little older. Maybe the 7-year-old brain is better prepared for learning than the 6 year old brain. This question has a policy angle coming from the fact that, in an effort to boost test scores, some school districts are now entertaining older start-ages (to the chagrin of many working mothers). To assess the effects of delayed school entry on learning, we might randomly select some kids to start first grade at age 7, while others start at age 6, as is still typical. We are interested in whether those held back learn more in school, as evidenced by their elementary school test scores. To be concrete, say we look at test scores in first grade. The problem with this question - the effects of start age on first grade test scores - is that the group that started school at age 7 is...older. And older kids tend to do better on tests, a pure maturation effect. Now, it might seem we can fix this by holding age constant instead of grade. Suppose we test those who started at age 6 in second grade and those who started at age 7 in first grade so everybody is tested at age 7. But the first group has spent more time in school; a fact that raises achievement if school is worth anything. There is no way to disentangle the start-age effect from maturation and time-in-school effects as long as kids are still in school. The problem here is that start age equals current age minus time in school. [...] The effect of start age on elementary school test scores is most likely FUQ. By FUQ the authors mean "fundamentally unidentified question", namely "research questions that cannot be answered by any experiment". I'm mostly interested in the bold part of the transcript. I tried to understand it but I was a bit puzzled about this example at first, since it seemed to me that the maturation effect was actually what we wanted to estimate, since after all that's what can be considered the cause of higher learning ability. Thinking about it a bit more, I realized that the authors are probably referring to a situation like the one depicted in this (not very graphically pleasing) causal graph I made. I admit I have just a broad understanding of this type of graph, so I could have made some error in the design: What the authors have in mind is probably to estimate only the difference in learning ability between 7-year-old children and 6-year-old children, but the maturation effect can be considered a confounder which causes both the learning ability and the test scores to grow, the latter through other factors relevant to test scores, such as better focus (even if I now realize that better focus could cause better learning ability, but I don't think this changes the matter). The problem resides in the fact that controlling for the maturation effect is impossible, since this would mean fixing the age of the children, thereby eliminating the possibility of learning about the improved learning ability due to the age difference. Is this what the authors have in mind? If not, what's the correct interpretation? Thank you for the help in advance!
If I consider the Bayesian network in the picture below: https://i.stack.imgur.com/nZKN2.jpg So A is the parent of C, D and B. But B is also the parent of D. Is it then correct to write: P(D | A) = P(D | A,B) * P(B | A)? Also, if I have this formula for the full joint probability P(A,B,C,D) = P(D | A,B) * P(B | A) * P(C | A) * P(A), how can I know the number of parameters required to specify this probability? I have been looking for the rules on the internet but I keep getting confused. Is it (3*3*3)+(3*3)+(3*3)+2 = 47 parameters? Would appreciate the feedback!
Background: Let $X=[x_{j,k}]_{j=1,\ldots,p, k=1,\ldots,n}$ be a $p\times n$ data matrix. Each column is an i.i.d. $p$-dimensional centred random vector. One can compute the sample covariance matrix $$S=\frac{1}{n}XX^\top$$ as well as the sample correlation matrix $$R=(\mathrm{diag}(S))^{-1/2}S(\mathrm{diag}(S))^{-1/2}.$$ One can also obtain the partial correlations from this data matrix, although the expressions are more involved. Question 1: My first question is, if the distribution of $X$ is given, say each entry is an i.i.d. standard Gaussian, can one write down the distribution of the correlation coefficients? Note that $R$ is also a random matrix now. Question 2: Moreover, can one write down the distributions of the partial correlations? And what about the joint distribution of all partial correlations? I presume that the answer to the first question should be known from the literature, although the expression itself could be very much involved.
I am new to inverse probability of treatment weighting (IPTW) and I am trying to understand it. However, I am not a statistician and I am having some trouble with advanced statistical concepts. As far as I understand, with IPTW you can achieve a randomization-like effect. First you calculate the propensity scores and afterwards, depending on the group of the patient (treated vs non-treated), you weight each patient. The main difference from propensity score matching (PSM) is that with IPTW you don't lose patients. Hence the question: why does it happen that in my dataset, and in others (example below), there is a reduced effective sample size in the balanced group? Moreover, is it possible to obtain, in the balanced population, the number of patients for each baseline variable? For instance:

data("lalonde", package = "cobalt")
library("WeightIt")
W.out <- weightit(treat ~ age + educ + race + married + nodegree + re74 + re75,
                  data = lalonde, estimand = "ATT", method = "glm")
W.out

In this example I balanced on the following covariates: age, educ, race, married, nodegree, re74 and re75. My questions are: Why is the effective sample size of the balanced population reduced? Does this mean that the frequencies of the variables have changed compared to the unbalanced population? Is there a way to obtain the number of patients in the balanced population in each group of educ, race, married, nodegree, ...? Since the weighted population is reduced, I assume that there have also been some changes in the patients for each variable. Please try to explain in a simple way; I am really new to this.
This definition is excerpted from Wikipedia: The Matérn covariance between measurements taken at two points separated by $d$ distance units is given by $$C_\nu(d) = \sigma^2\frac{2^{1-\nu}}{\Gamma(\nu)}\bigg(\sqrt{2\nu}\frac{d}{\phi}\bigg)^\nu K_\nu\bigg(\sqrt{2\nu}\frac{d}{\phi}\bigg),$$ where $\Gamma$ is the gamma function, $K_{\nu}$ is the modified Bessel function of the second kind, and $\phi$ and $\nu$ are positive parameters of the covariance. One can translate the math expression directly into R code as

kernel.matern <- function(d, nu, phi, sigma2) {
  z <- sqrt(2*nu)*d/phi
  return(sigma2 * (z)^nu * besselK(z, nu) / (2^(nu-1)*gamma(nu)))
}

However, this first version of kernel.matern does not return a finite value at $d=0$, because besselK(z, nu) is not finite at z = 0. The value $C_\nu(0)$ is actually defined as the limit of $C_\nu(d)$ as $d\to 0^+$. A solution can be to add a small perturbation eps to d, which gives a second version:

kernel.matern <- function(d, nu, phi, sigma2, eps) {
  d <- d + eps
  z <- sqrt(2*nu)*d/phi
  return(sigma2 * (z)^nu * besselK(z, nu) / (2^(nu-1)*gamma(nu)))
}

The second version basically solves the problem. One small issue left is that the choice of eps seems quite arbitrary. A large eps may give an incorrect answer for a small nu:

> kernel.matern(0, 1, 0.07, 1, 1e-6)   # correct
[1] 1
> kernel.matern(0, 1, 0.07, 1, 1e-5)   # biased
[1] 0.9999998

while a small eps may yield NaN for a large nu:

> kernel.matern(0, 20, 0.07, 1, 1e-20) # NaN
[1] NaN
> kernel.matern(0, 20, 0.07, 1, 1e-15) # correct
[1] 1

My question is, can I implement a more robust function for the Matérn kernel? (I am also not sure whether my question fits Cross Validated or whether I should post it on another community.)
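A hedged sketch of one way to avoid choosing eps altogether: treat d = 0 as a special case, since the limit there is sigma2. Over/underflow for large nu at small positive d is a separate numerical issue (the expon.scaled argument of besselK can help there).

kernel.matern.robust <- function(d, nu, phi, sigma2) {
  out <- rep(sigma2, length(d))         # C(0) = sigma2, the d -> 0 limit
  pos <- d > 0
  if (any(pos)) {
    z <- sqrt(2 * nu) * d[pos] / phi
    out[pos] <- sigma2 * z^nu * besselK(z, nu) / (2^(nu - 1) * gamma(nu))
  }
  out
}
kernel.matern.robust(c(0, 0.01, 0.1), nu = 20, phi = 0.07, sigma2 = 1)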
I am using the ss function from the npreg package in R to fit a smoothing cubic spline to my data, where the smoothing parameter is selected by the REML method. The "equivalent degrees of freedom" that is returned after fitting the model is different from the degrees of freedom listed in the model summary printout created after running the summary function on the model object. Screenshot attached. Does anyone know why these degrees of freedom values are different from each other? One is 15.96, the other is 13.97.
I am attempting to perform a CCF analysis between a predictor time series (P_a_cm) and a response time series (delta_total_cm), which represent annual precipitation and annual water table elevation change (one observation per year), respectively. My task is to assess whether water table elevation change is dependent on precipitation inputs AND whether precipitation inputs from antecedent years have any effect on a given year's water table elevation change. When looking at the acf/pacf for the predictor I find no autocorrelation and have also determined the time series to be stationary. The response time series, on the other hand, produces the following acf/pacf plots... My understanding is that these plots indicate autocorrelation at lag 1 for this time series. I have found plenty of information on prewhitening, however it always assumes there is autocorrelation or a trend in the predictor, which you would account for and then use to filter your response variable. My question is: do I need to do anything about the autocorrelation in the response time series, or can I safely run the ccf simply using the two original time series? When I do this, it returns... I feel the correlation at lag -1 is likely a result of this autocorrelation. I attempted to apply an AR(1) to the response time series and then filter the predictor using that model; however, it resulted in a CCF that suggested hydrologically implausible relationships. Any direction would be greatly appreciated.
Suppose I have a dataset $\{S_t\}_{t=1}^T$, where $S_t\overset{i.i.d.}{\sim}Binomial(n,p)$. How can I consistently estimate $n$ and $p$ using this dataset? It would be great if you could provide a method with a theoretical guarantee of consistency. I thought about MLE, which does not give me consistency via the standard argument, because the log-likelihood is not differentiable in $n$. Another approach I thought about is to consider the estimation of $n$ and $p$ separately: do MLE for each given $n$ and select $n$ using AIC or BIC. Is there any hope for consistency using this approach?
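For reference, the other baseline I have considered is the method-of-moments estimator, matching the sample mean and variance to $np$ and $np(1-p)$. A quick sketch of what I mean (I am aware it can misbehave, e.g. give a negative $\hat p$ when the sample variance exceeds the sample mean, and I don't know whether it is consistent for $n$, which is really my question):

    # method-of-moments estimates for Binomial(n, p) from an i.i.d. sample S
    xbar  <- mean(S)
    s2    <- var(S)
    p_hat <- 1 - s2 / xbar
    n_hat <- xbar / p_hat
    c(n_hat = n_hat, p_hat = p_hat)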
I am trying to use a multilevel model for a study where participants take part in 3 separate sessions in which they complete a cognitive task and I measure reaction time. During each session, they complete 3 blocks of the task, so I have 3 measurements per session. Participants receive a different treatment condition each session: Placebo A, Drug B, and Drug C. It is a Latin square design, and order is counterbalanced across 6 orders: ABC, ACB, BAC, BCA, CAB, CBA. There are several participants who received each order, but it isn't perfectly balanced (i.e. some orders have 9 participants, others have 12, others somewhere in between). We had a washout period that was selected so there wouldn't be a carryover effect. I want to know whether there is an effect of treatment condition on reaction time. One problem I am having trouble accounting for is the fact that there is a large session effect. The version of the task completed during session 2 has a reaction time that is much lower than the reaction time during sessions 1 and 3. I initially had planned to analyze using RM ANOVA, but when I was unblinded, I became aware of the session effect, which has made it difficult to isolate whether a treatment effect is present because the analysis instead picks up on the within-subjects session*treatment interaction. I have been trying to learn about multilevel modeling as I have heard it is better able to handle this. However, with my level of understanding, I am unsure whether I am on the right track. I have read many helpful answers on the site, but am concerned that with my limited understanding I am making an obvious mistake or misunderstanding what the models I made are doing. I do not care about the session order effect--I only want to account for it so I can isolate the within-subjects effect of treatment. Currently, I believe the following captures the simplest version of what I want, which based on my understanding should be what I go with unless I have good reason to change to something more complex. Is this correct?

    model1 <- lmer(formula=RT ~ Condition + (1|participant) + (1|session) + (1|Block), data=data)

I also considered whether this may be the right approach--if block should be nested in participant.

    model2 <- lmer(formula=RT ~ Condition + (1|Block/participant) + (1|session), data=data, na.action=na.exclude)

Based on some reading, I wondered if I need to include session as a fixed factor. But again, I'm really not interested in it and am just trying to account for that session 2 issue. One thing that confuses me about this model is that I have convergence issues if I try to add condition or session as a random effect here. This has been one clue that I have a fundamental misunderstanding.

    model3 <- lmer(formula=RT ~ Condition*session + (1|Block/participant), data=data, na.action=na.exclude)

From reading this... Crossed vs nested random effects: how do they differ and how are they specified correctly in lme4? I also considered whether this was the proper way. Thus, I keep going back and forth on the structure the random effect is capturing, and whether I need something like the following instead. But, if I need to specify that blocks are nested in person, then this doesn't converge.
    model4_fit <- lmer(formula= RT ~ Condition + (1|participant/Condition) + (1|session) + (1|Block), data=data)

I'm also considering whether all of these are wrong and I need to fit a multiple membership model as in here https://github.com/jvparidon/lmerMultiMember as each participant completes 3 sessions and 3 treatment conditions. Thank you so much for any help anyone is willing to provide.
I am a graduate student and I am having trouble understanding p-values. I watched YouTube videos, but they didn't really help. I want to understand the concept in a bit more depth. Could you please recommend a book for me?
I have an XGBoost model that consumes ~150 features for a classification problem. Recently, I obtained a set of ~30 candidate features. I tried to dump everything into XGBoost with the same hyperparameters (somehow I did not get improvements when tuning the parameters). The performance looks worse than the original model. I wonder how I should proceed. Is there a systematic procedure to follow when adding new features to a model? Below are a few questions that I have. In general, I wonder why the model performance got worse and whether there is an off-the-shelf procedure to follow. One plausible explanation for the worse performance is that the quality of the new features may be bad. Namely, the new features are not indicative enough for the model to predict a correct label. However, theoretically, XGBoost should be "smart" enough to learn that the new features are bad and simply not pick them. Why is the model performance degraded in practice? Will correlation among old and new features affect model performance? And how should we verify that? I checked the pairwise Pearson correlation coefficient for linear correlation among features. But there might be nonlinear correlation as well. Should I check mutual information instead? How should I select the best subset of features to add? Is it just a trial-and-error process? What happens if my model is a neural network? Does anything change for the above questions?
Let $(X_1, X_2, X_3)$ be a sample from $Poisson(\lambda)$. Here we test $H_0: \lambda\le 1$ vs. $H_1: \lambda>1$. Fix $\alpha=0.05$. Find the $\alpha$-level uniformly most powerful test (UMPT). My work: note that the density function has a monotone likelihood ratio in $T=\sum_{i=1}^3 X_i$. By the Karlin-Rubin theorem, the UMPT for $H_0: \lambda\le 1$ vs. $H_1: \lambda>1$ is given by \begin{equation} \Phi(X_1,\dots,X_n)= \begin{cases} 1 & \text{if } T>k\\ \gamma & \text{if } T=k\\ 0 & \text{if } T<k \end{cases} \end{equation} where the rejection region is the likelihood ratio $\Lambda>k$, and $k$ and $\gamma$ are determined by $$ \alpha=P_{\lambda_0}(T>k)+\gamma P_{\lambda_0}(T=k). $$ Now my problem is that it seems I can't solve for $\gamma$. In some treatments of this theorem the $\gamma$ is omitted, especially for continuous distributions. Do we need to consider $\gamma$ here? I mean, can the test function be \begin{equation} \Phi(X_1,\dots,X_n)= \begin{cases} 1 & \text{if } T>k\\ 0 & \text{if } T\le k \end{cases} \end{equation} where the rejection region is the likelihood ratio $\Lambda>k$, and $k$ is determined by $$ \alpha=P_{\lambda_0}(T>k)? $$ If this holds, we can solve for $k$...
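In case it matters, this is how I tried to pin down $k$ and $\gamma$ numerically at the boundary $\lambda_0 = 1$, where $T = X_1+X_2+X_3 \sim Poisson(3)$ (just a sketch of my own calculation, please check it):

    alpha <- 0.05
    m <- 3 * 1                                           # T ~ Poisson(3) at the boundary lambda = 1
    k <- min(which(1 - ppois(0:100, m) <= alpha)) - 1    # smallest k with P(T > k) <= alpha
    gam <- (alpha - (1 - ppois(k, m))) / dpois(k, m)     # randomization probability at T = k
    c(k = k, gamma = gam)

My issue is exactly that $\gamma$ comes out strictly between 0 and 1 here, so I don't see how the non-randomized version can have size exactly 0.05.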
How do I condition the variance of a normally distributed random variable on two other normally distributed random variables? How do I condition the expectation of a normally distributed random variable on two other normally distributed random variables? NOTE: $Y$ and $Z$ are correlated ($\rho_{Y,Z}$)
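For what it's worth, the formula I have been trying to apply is the standard conditioning result for a partitioned multivariate normal, assuming $(X, Y, Z)$ are jointly normal (which I am not sure I am entitled to assume, hence the question):

$$
E[X \mid Y, Z] = \mu_X + \Sigma_{X,(Y,Z)}\,\Sigma_{(Y,Z)}^{-1}\begin{pmatrix} Y-\mu_Y \\ Z-\mu_Z \end{pmatrix},
\qquad
\operatorname{Var}[X \mid Y, Z] = \sigma_X^2 - \Sigma_{X,(Y,Z)}\,\Sigma_{(Y,Z)}^{-1}\,\Sigma_{(Y,Z),X},
$$

where $\Sigma_{X,(Y,Z)} = (\operatorname{Cov}(X,Y), \operatorname{Cov}(X,Z))$ and $\Sigma_{(Y,Z)}$ is the $2\times 2$ covariance matrix of $(Y,Z)$, whose off-diagonal entry involves $\rho_{Y,Z}$.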
If there is an elastic-net criterion function $$\mathcal{L}(\boldsymbol{\beta}) = \frac{1}{2}\sum_{n=1}^N(\boldsymbol{\beta}^{\top}\boldsymbol{x}_n - y_n)^2 + \frac{1}{2}\lambda(1-\eta)\|\boldsymbol{\beta}\|_2 + \lambda\eta\|\boldsymbol{\beta}\|_1,$$ and $$\boldsymbol{\beta}_{opt} = \arg\min\limits_{\boldsymbol{\beta}} \mathcal{L}(\boldsymbol{\beta}),$$ my question is: is there an upper limit $M$ such that $$\|\boldsymbol{\beta}_{opt}\|_2 \leq M,$$ and what is $M$ related to?
I'm doing reading around regularisation techniques for neural networks. My intuition is that dropout is essentially adding noise into the network by zeroing out activations according to a given probability. Can the same result be achieved by introducing noise into training examples/layers given the same probability?
I am performing a meta-analysis and have a dataset of hazard ratios and odds ratios collected from primary studies. Suppose for a study we have a recorded hazard ratio (HR) (or odds ratios (OR)) (both point estimates and confidence limits) for control (A) vs. several treatments (B,C,D) e.g. A vs. B, A vs. C, A vs. D. The reference group is always A. Is it possible to estimate point estimates and standard errors or confidence intervals for a contrast that doesn't involve A using the information from the study e.g. B vs. C? It looks to me that we can indeed estimate the point estimate e.g. on the log scale: B-C = (B-A) - (C-A). But am I correct that we cannot reliably estimate the standard error or confidence limits of the corresponding HR or OR's? There is sample size information for each group A through D, I'm not sure if that will be enough to estimate the standard error or confidence limits?
Given a set of IID samples $X = \{x_i\}_{i=1}^n$ assumed to be from the density $p(\cdot)$, and the function $h:\mathbb{R} \xrightarrow{}\mathbb{R}$, its expectation can be approximated as $$\mathbb{E}[h(X)] = \int h(x)p(x)dx \approx \frac{1}{n}\sum_{i=1}^nh(x_i) $$ as usual. However, I have recently come across this approximation $$\mathbb{E}[h(X)] = \int p(x)h(x)dx \approx \frac{1}{\sum_{i=1}^n p(x_i)}\sum_{i=1}^np(x_i)h(x_i)$$ in the paper "Density-Weighted Nyström Method for Computing Large Kernel Eigensystems" by Kai Zhang and James T. Kwok. Neural Computation 21, 121–146 (2009). I can't understand how this approximation makes sense, as for a start we have $$\mathbb{E}[p(X)h(X)] = \int h(x)p(x)^2dx$$ which is the wrong integral to be targeting. They don't justify their approximation, it is just stated and is shown to work empirically in their context. I'm not sure how to proceed here.
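To make my confusion concrete, here is the toy comparison I have been running (entirely my own setup, not from the paper), where the two estimators are easy to put side by side:

    set.seed(1)
    h <- function(x) x^2
    x <- rbeta(1e5, 2, 5)          # i.i.d. draws from a known density p = Beta(2, 5)
    p <- dbeta(x, 2, 5)
    mean(h(x))                      # the usual Monte Carlo estimate of E[h(X)]
    sum(p * h(x)) / sum(p)          # the density-weighted estimate used in the paper

Unless I am misreading the paper, the second quantity looks like a self-normalized weighted average, which is why I don't see how it approximates the same integral.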
I've been reading about some psychology and statistics material, and I was wondering whether I should add these percentages. The prevalence of psychopathy is reported as 15.8% in males and 5.9% in females. Should I add these to get the prevalence in the general population, or should I take the sum divided by two? Would this be 21.7% or 10.85%?
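Just to spell out the second option the way I understand it (assuming the population is roughly half male and half female, which is my own assumption):

$$
p_{\text{overall}} = w_m\,p_m + w_f\,p_f = 0.5 \times 15.8\% + 0.5 \times 5.9\% = 10.85\%.
$$

Is this the right way to combine them, and what should I do if the male/female split is not 50/50?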
I have trouble understanding how to interpret my results. I am working with GLMMs and Poisson (log link) models. I found one example given for a negative binomial model and how they interpret the results: "Every one-unit increase in the value of math, then daysabs(y) will decrease (-) by 0.9940249 times, provided that other predictor variables are constant". Based on that and looking at my results I could say: "Every one-unit increase in the value of Field_A1, then birds_count(y) will decrease (-) by 1.4288 times, provided that other predictor variables are constant". But here I have a small problem, as my variables are categorical/factors and not numeric outcomes. So it doesn't make sense to say that if Field_A1 increases (it cannot increase in area size or anything else, it is constant) the bird count decreases. From the analysis I would like to know: How the two fixed effects (Field & Distance) affect abundance. Does distance increase abundance or not? Is there a difference in abundance depending on the field type? How the interaction of the two fixed effects (Field & Distance) affects abundance. Can anyone help me? Example of the estimates and the code I am using below.

Data explanation:

    LS             - total 18 sectors in the landscape
    Field_Type     - a factor (A1; A2; A3): A1 - grassland; A2 - production field; A3 - not sown area
    Distance       - 50m, 75m, 150m, 200m
    sampling.round - total 3 rounds, so data were collected at three different time points
    abundance      - number of individuals counted in that spot

    model1 <- glmmTMB(Abundace~Field_Type +distance+Field_Type*distance+(1|sampling.round)+(1|LS),
                      family=poisson(link="log"), data=OWL6)   # model with interaction

                           Estimate Std. Error z value Pr(>|z|)
    (Intercept)              0.7480     0.6922   1.081 0.279918
    Field_A1                -1.4288     0.5392  -2.650 0.008052 **
    Field_A2                -0.2888     0.4766  -0.606 0.544524
    distance_50m            -0.4925     0.1712  -2.877 0.004009 **
    Field_A2:distance_50m    1.8741     0.3247   5.772 7.82e-09 ***

Example of the data:
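What I have tried so far, to get something on the response (count) scale, is simply exponentiating the fixed-effect coefficients, e.g.

    # rate ratios relative to the reference field type / distance level
    exp(fixef(model1)$cond)

so that, for example, exp(-1.4288) ≈ 0.24 would read (if I understand correctly) as "field type A1 has about 0.24 times the expected abundance of the reference field type, at the reference distance, holding everything else constant". I am not sure this wording is right for factors and for the interaction terms, which is exactly what I would like help with.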
Let's say, I have a binary classifier. Usually, if a model doesn't learn anything useful, the accuracy would be around 50%. Anything above 50 is better than a random guess. My question is: what are some situations where the model will show below 50% accuracy (or some performance metric), let's say 30%? Doesn't that mean, I can just always reverse the answer and get 70% accuracy? Can this happen (a model showing some 20-30ish % performance)? If yes, what has the model learnt actually?
I am running a nested 3-block logistic model. In block 1, the odds ratio for Variable 1 (V1) is 2.0 but in block 2 when I enter other variables (demographics) it jumps up to 4.0 which is odd (pun intended). When I run the full model with vars in block 3, the odds ratio returns to the normal values I see in block 1. Something in block 2 is amiss. I've tried systematically removing variables in block 2 to figure out which one is the offender thinking it's a violation of linearity of logit assumption, but several different variables seem to be the issue albeit inconsistently. For example, I figured out V2 was the issue by removing vars one by one from the model. Then I reran without v2 and the effect was still there. Repeat and ID another variable, reran without v3, the effect is still there. I think the issue is some sort of suppressor or interaction effect between the variables in block 2. So 1) how problematic is my issue in block 2? It sets my spidey sense off. 2) any suggestions on how to fix it or what's going on? Also, my hosmer is significant though I know that that isn't always indicative of bad model fit.
I have a few questions regarding the formulation of hypotheses. For example, here are 2 hypotheses: Age will influence the choice of food individuals order at the restaurant. Younger individuals are more likely to order fried food at the restaurant. These 2 are almost the same, yet different. Does one have an advantage over the other? How does the stats analysis change, if at all? Which one would I be better off choosing? Another example could be a hypothesis on gender and safety. Gender impacts the individuals' feeling of safety when walking home at night. Women are more likely to feel unsafe when walking home at night. Thank you for your help.
I am currently doing a study examining how the outcome variable Y changes over time, and I want to control for two time-invariant variables (gender and ageL2) in my model. But I am not sure about adding the "Cov*Time" term. Hopefully, the formula should be like

    lmer(Y ~ Time + gender + ageL2 + (Time|id))

But when reading a similar journal article (they had one time-invariant covariate at level 2), I found they specified the lme4 model like this:

    lmer(Y ~ Time + Cov + Cov*Time + (Time|id))

So, I was wondering why a time-invariant variable should have an interaction with Time. Should I include a Cov*Time term in my formula and use

    lmer(Y ~ Time + Gender + ageL2 + Gender*Time + ageL2*Time + (Time|id))

If I am correct, under such a condition the level-2 equations will be $\beta_{0i} = \gamma_{00} + \gamma_{01}\,\text{Gender}_{1i} + \gamma_{02}\,\text{ageL2}_{2i} + u_{0i}$ and $\beta_{1i} = \gamma_{10} + \gamma_{11}\,\text{Gender}_{1i} + \gamma_{12}\,\text{ageL2}_{2i} + u_{1i}$. So, how should the results be interpreted? What do $\gamma_{00}$, $\gamma_{01}$, $\gamma_{02}$, $\gamma_{10}$, $\gamma_{11}$ and $\gamma_{12}$ mean? I suppose the interpretations of these are pretty different.
I recently trained an xgboost model for a binary prediction task; the dataset had roughly 900 class 1 and 100 class 0 rows. The model didn't fare too well (AUC 0.64) and none of the features had SHAP values to speak of (below 0.01 for every feature). All the predictions were between 0.49 and 0.51, so I got the impression the model was basically useless, a coin flip if you will. What got me thinking is that on the same dataset, I also trained two binary classifiers using xgboost with different target variables, with much better results (AUC ~ 0.75, reasonable feature importances, predicted probabilities all over the range 0-1). All three targets were similar in nature (patient survey results, all from the same surgical procedure, basically asking the patients in different ways how they felt and how well they function). Now, I need to find out why the model (and also the many variations I tested) fails on this one particular target variable. How could I do that? The feature variable data seem fine (as they work with different targets); how could I analyze the data quality for the remaining target? Are there other sources where something could have gone wrong? Basically, I'm asking what can be done as a sort of data/model "autopsy" to find what killed my modeling for this particular target variable.
I have the following problem. I have 3 vectors $u,v,w$ of $n$ dimensions. I'm able to find the cosine similarities between $u$ and $v$, and between $v$ and $w$: $cosine(u,v)$ and $cosine(v,w)$. Can I use these cosine similarities to compute the cosine similarity between $u$ and $w$, $cosine(u,w)$? Thanks
I did a Mann-Whitney test to investigate the difference in BDI score between males and females. I got an r effect size of -0.09. What type of effect size is this?
I find it quite hard to understand what the state-of-the-art method is for a regression task involving a small data set (e.g., ~1000-5000 data points). In some books, we can find linear regression, tree-based algorithms, support vector machines, etc. I am wondering if there is now a common agreement on which method has the most benefit.
My problem: I'm writing a test to see if a series of numbers comes from certain theoretical distributions, and I need a p-value so that software can automatically accept or reject $H_0$ on an $\alpha=0.05$ basis. The context (related questions):

Does the 2-sample KS test work? If so, why is it so unintuitive?
Is the Kolmogorov-Smirnov-Test too strict if the sample size is large?
Kolmogorov–Smirnov test: p-value and ks-test statistic decrease as sample size increases
Is normality testing 'essentially useless'?
Is there a rule of thumb regarding effect size and the two sample KS test?

Given the number of unspecific KS sample size answers here, it seems that this is still a valid (open) problem. My question: Given the context and learned comments, it appears that the KS test only holds for a small sample size, $n$. Yet I can't find any quantitative recommendation on this site for $n$. So if I have a total sample size of one million values, should I just randomly pick a hundred of them for the KS test?
This study shows that the average penis length is 13.24 cm with a standard deviation of 1.89 cm. Let's suppose we have a population with this mean and standard deviation for penis length. Suppose we ask 30 men in this population what their penis lengths are, and suppose the selection of the 30 men is such that all men in the population have the same probability of being selected. Can we know how many of them are lying about their penis length? Of course, if we calculate a sample mean of 18 cm, we can, based on the central limit theorem, say that this sample is an outlier or that there are people lying about their penis size. But can we create a model to estimate how many of them are lying?
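For reference, this is the back-of-the-envelope calculation behind my "18 cm would be an outlier" claim, assuming everyone reports truthfully and the reported lengths behave like the true ones:

    # probability of a sample mean of 18 cm or more from 30 truthful reports
    1 - pnorm(18, mean = 13.24, sd = 1.89 / sqrt(30))

What I can't see is how to go from "the sample mean is implausibly high" to an estimate of how many of the 30 men inflated their answer.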
Suppose X1 is one observation from a population with Beta(θ,1) PDF. Would X1 also have Beta(θ,1) PDF?
This has probably been answered before, but I want to see if my reasoning is correct, as my textbook skips the calculations but the answers coincide. Q: Let $Z_t \sim \text{WN}(0, \sigma^2)$ (white noise), and define the MA(1) process $X_t = Z_t + \theta Z_{t-1}, t\in \mathbb{Z}, \theta \in \mathbb{R}. $ Find the covariance function $\gamma_X(t, t+h)$. Sol: $E(X_t) = 0, E(X_t^2) = \sigma^2(1+\theta^2)$. Then \begin{equation} \begin{split} \gamma_X(t,t+h) &= Cov(X_t, X_{t+h}) = E[(X_t - E(X_t))(X_{t+h}-E(X_{t+h}))] \\ &= E(X_t X_{t+h}) = E[(Z_t + \theta Z_{t-1})(Z_{t+h} + \theta Z_{t+h-1} )] \\ &= E(Z_tZ_{t+h}) + \theta E(Z_t Z_{t+h-1}) + \theta E(Z_{t-1} Z_{t+h}) + \theta^2 E(Z_{t-1}Z_{t+h-1}) \end{split} \end{equation} Since $Z_t$ is white noise, it holds that $\gamma_Z(s,t) = E(Z_s Z_t) = \sigma^2 \delta(s-t)$. Hence \begin{equation} \begin{split} \gamma_X(t,t+h) &= \sigma^2\delta(h) + \theta \sigma^2\delta(h+1) + \theta \sigma^2\delta(h-1) + \theta^2 \sigma^2 \delta(h) \\ &= \begin{cases} \begin{split} &\sigma^2(1+\theta^2), \quad &h = 0 \\ &\sigma^2 \theta, \quad &h = \pm 1 \\ &0, \quad &\text{else} \end{split} \end{cases} \end{split} \end{equation}
Say I want to compare the means of two groups in SPSS (men and women). The particular thing is that for these groups, in the context I'm researching, there are no theories from which to formulate a decisive hypothesis. That means I don't know beforehand whether one group will have a higher mean than the other or whether they will be the same. If I HAD to formulate a hypothesis, I would say "H1: the mean of group A is NOT equal to the mean of group B": so a two-sided hypothesis. Now, in the results I see that the two-sided p-value is higher than 0.05, so I would retain the null hypothesis on the grounds that there is insufficient evidence to say there is a difference between the groups. BUT: I see that the one-sided p-value is less than 0.05. So what do I do? Can I still say that there is a significant difference between both groups?
I was looking at the Informer model implemented in HuggingFace and I found that the model is implemented with negative log-likelihood (NLL) loss even though it is a model for a regression task. How can NLL loss be used for regression? I thought it is a loss used for classification only.
I work for a company that helps online retailers group their inventory into Google ad campaigns. I am using CausalImpact to determine whether the release of a new feature within our software had an impact on the total impressions that an online retailer received through Google ads - an impression is counted each time the retailer's ad is shown on a Google search results page.

Approach

To begin with, I just have one X variable. y - impressions over the past 365 days. X - daily searches for the term 'garden furniture' from Google Trends. I expected searches for 'garden furniture' to have a good correlation with the impressions of this particular retailer (the correlation was +0.58). Importantly, Google search terms won't be influenced by the change we made to our software and therefore satisfy the key requirement that the X variables are not affected by the intervention. After standardising, the data look like this. And running CausalImpact shows that the intervention did not quite have a significant effect (p=0.07).

    pre_period_start = '20220405'
    per_period_end = '20230207'
    post_period_start = '20230208'
    post_period_end = '20230329'
    pre_period = [pre_period_start, per_period_end]
    post_period = [post_period_start, post_period_end]
    ci = CausalImpact(dated_data, pre_period, post_period)
    ci.plot()

Questions

How can I verify that my X variables are doing a good job of predicting y? From this notebook, in section 2.5: Understanding results, it suggests the code below gives the values for beta.X. The value here, 0.06836722, seems quite low and suggests that the garden_furniture searches don't explain impressions very well. Is this the correct interpretation?

    tf.reduce_mean(ci.model.components_by_name['SparseLinearRegression/'].params_to_weights(
        ci.model_samples['SparseLinearRegression/_global_scale_variance'],
        ci.model_samples['SparseLinearRegression/_global_scale_noncentered'],
        ci.model_samples['SparseLinearRegression/_local_scale_variances'],
        ci.model_samples['SparseLinearRegression/_local_scales_noncentered'],
        ci.model_samples['SparseLinearRegression/_weights_noncentered'],
    ), axis=0)

    <tf.Tensor: shape=(1,), dtype=float32, numpy=array([0.06836722], dtype=float32)>

When adding another X variable to the model, how can I determine whether adding that variable was useful or not? I've also attempted to backtest the model by selecting the first 90 data points and an imaginary intervention date. As shown below, we do not get a significant effect. However, I'm concerned that the predictions don't seem to align that closely with the actual y. Does this look like a problem?

General advice - Any suggestions on improving the analysis would be greatly appreciated as this is the first time I've used CausalImpact. In particular, I'm struggling with selecting additional X variables to use, and with whether changing the data frequency to weekly would help with predictions.
If X ~ Exp(3), Y ~ Exp(1) and h = X / (X + Y), then h ~ Beta(1/3, 1) and E(h) = 1/4. But when I draw random deviates using the following R code, I find mean(h) ≈ 0.324 and the histogram doesn't resemble the beta distribution. Am I making some dumb mistake?

    fn <- function() {
      X <- rexp(n = 1, rate = 3)
      Y <- rexp(n = 1, rate = 1)
      h <- X / (X + Y)
      return(h)
    }
    h_vec <- replicate(1e6, fn())
    mean(h_vec)  # ≈ 0.324
Hello everyone and good day to you. Could you please explain how to use the principal components from a principal component analysis in a correlation analysis? I performed a PCA on my dataset and extracted 2 components from 8 variables. Now I'm wondering how to use Pearson or Spearman correlation analysis to find the correlation between PC1 (or PC2) and, let's say, another variable. I mean, I want to do a regression analysis in which my two extracted components are independent variables and each participant's age (for example) is the dependent variable. How should I do this? Many thanks.
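To show what I have tried so far (mydata and the column choices here are just placeholders for my actual data):

    pca <- prcomp(mydata[, 1:8], scale. = TRUE)     # PCA on my 8 variables
    scores <- pca$x[, 1:2]                          # each participant's PC1 and PC2 scores
    cor.test(scores[, 1], mydata$age, method = "spearman")
    summary(lm(mydata$age ~ scores))                # age regressed on PC1 and PC2

Is extracting the component scores like this and treating them as ordinary variables the correct way to proceed, or should the scores come from the software's own "save component scores" option?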
Countably additive probability is defined on a sigma field. However, a finitely additive probability needs only a "finitely additive" field: the finitely additive probability does not need the countably infinite union of events to still be in the field. To be specific, a sigma field requires that a countable union of subsets is also in the field; a "non-sigma" field only requires that a finite union of subsets is also in the field. I think it is enough to have this field to then define a finitely additive probability measure. While the finitely additive measure is well covered in the literature, the finitely additive field is not. Question: Are there any references or discussions of this "finitely additive field"? Does this "non-sigma" field already have a name that I can look up? This is a long comment. I think a "finitely additive field" $F$ defined like this suffices: If $A\in F$, then $A^C\in F$. If $A,B\in F$, then $A\cup B\in F$ and $A\cap B \in F$. Here $A,B$ are sets. However, when I try to google relevant info, I get no luck. Here are the key words that I tried:
In scikit-learn's LogisticRegression docs they write: This class implements regularized logistic regression using the 'liblinear' library, 'newton-cg', 'sag', 'saga' and 'lbfgs' solvers. Logistic regression doesn't have a closed-form solution, so it must use some optimization algorithm like gradient descent or Adam. So all we need are the partial derivatives and we should be good to go. So what are these "solvers" and where do they fit into the picture?
Let's say I have three distributions, P1, P2, and P3, which are probability distributions with domains defined between 0 and 1. Generically these are not Gaussian (more like Beta distributions). I can sample from these three distributions, generating samples p1, p2, and p3, such that I impose the constraint that p1+p2+p3<1, and I'm wondering what the most proper way of doing so is. I've thought of two solutions:

1. Sample independently from each many times, and then reject all correlated draws which don't obey the constraint.
2. Sample from P1, then crop and renormalize P2 such that the constraint is fulfilled (call the new distribution P2'), then sample from P2', then do the same for P3.

I think both methods have problems: the first introduces bias, and I'm not sure if the second does as well. Is there a more proper way to perform this type of correlated-sampling-with-constraints?
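Here is what my attempt at option 1 looks like, in case the question is clearer with code (the Beta parameters are made up just for illustration):

    # rejection sampling: draw independently, keep only draws with p1 + p2 + p3 < 1
    draw_constrained <- function(n, a = c(2, 2, 2), b = c(5, 5, 5)) {
      out <- matrix(numeric(0), ncol = 3)
      while (nrow(out) < n) {
        p <- cbind(rbeta(n, a[1], b[1]), rbeta(n, a[2], b[2]), rbeta(n, a[3], b[3]))
        out <- rbind(out, p[rowSums(p) < 1, , drop = FALSE])
      }
      out[1:n, ]
    }
    samples <- draw_constrained(10000)

My worry, as stated above, is whether keeping only the accepted draws distorts the marginal distributions relative to the original P1, P2, P3.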
I did extensive research on more than 50 papers in finance and economics using propensity score matching. However, no paper so far tells me how to match unit by unit (firm by firm) on the propensity score. Clearly, the steps of working with PSM are: calculate the propensity score for each OBSERVATION based on pre-treatment unit characteristics (I highlight this because in panel data one unit (or firm) can have many observations; e.g. if we have 3 years before the event, we can have up to 3 observations per firm, meaning 3 propensity scores per firm). Afterwards, according to tons of papers in finance, health, and economics, we match the treated and control units based on the propensity score. However, the propensity score in the first step is at the observation level, while in the second step it is at the unit level. I tried my best, but there is no paper that explicitly tells me how to match firm by firm. I found a paper that implicitly tells me we can calculate the mean of the characteristics of all observations per firm, and then estimate the propensity score for each firm based on these averaged values. However, that paper states something crucially wrong, so it would be really harmful if I used it as the reference for my choice. The paper I mentioned is Howell, 2016. In that paper, the authors mention twice something supporting my idea about averaged characteristics above: "Firms in the control group are matched to the treatment group on the basis of the pre-treatment (1998–2001) mean of these variables" and "All covariates are measured by the mean before the policy treatment". But the problem here is that the paper also states, wrongly, that the sample period 2001 to 2004 is pre-treatment. I am convinced it must be a typo, but it is still a wrong statement. In other words, we cannot use that paper as a reference in this case.
I have 18 different groups that I need to compare. I used the raw values from a neuroimaging file to represent synchronization between two brain regions (which ranged from 0 to 1) and Fisher z-transformed them in the hope that I would be able to use parametric statistics (namely an ANOVA) to compare the means of these groups. However, it appears that my Fisher z-transformed data do not pass the Shapiro-Wilk test. I'm kind of at a loss here: I know that non-parametric tests such as the Kruskal-Wallis test exist, but I don't think they were made to compare z-scores. I've also read that you shouldn't perform an ANOVA if there is heteroscedasticity, and some of the data form a slightly skewed distribution (box number 2, 6, the last one). I'm not quite sure what to do now. I've plotted the values using a boxplot (N=50):
I have a questionnaire to check the implementation of several recommended health-improving measures. It will look approximately like this:

Did you do your 20-minute yoga routine on a daily basis? Possible answers: Yes / No / Partly
Did you increase your protein intake? Possible answers: Yes / No / Partly
Did your employer provide ergonomic office furniture? Possible answers: Yes / No / Partly

And so on. The population consists of 7000 participants. My hypothesis, for each measure separately, is that the majority of participants did not implement the measure or only partially implemented it. I have two questions: Which test should I use to test this hypothesis? How do I calculate the necessary sample size?
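To make the second question concrete, this is the kind of calculation I had in mind: a one-sided, one-sample proportion test against 50%, with values I made up for the expected proportion, significance level and power (I am not at all sure this is the right framework, which is why I am asking):

    p0 <- 0.5    # "majority" threshold under the null
    p1 <- 0.6    # my guess of the true proportion not/only partly implementing
    alpha <- 0.05; power <- 0.80
    za <- qnorm(1 - alpha); zb <- qnorm(power)
    n <- ((za * sqrt(p0 * (1 - p0)) + zb * sqrt(p1 * (1 - p1))) / (p1 - p0))^2
    ceiling(n)                           # required sample size per measure
    N <- 7000
    ceiling(n / (1 + (n - 1) / N))       # with finite population correction

Would this be appropriate for Yes / No / Partly answers, given that I plan to lump "No" and "Partly" together?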
I am estimating a model with exogenous variables using ARIMA from the statsmodels package, but I can't interpret the results: the coefficients are very different from what I expected. Consider the following model: $$ y_t = 1 + 0.5 y_{t-1} + 4 z_t + \varepsilon_t $$ where $z_t$ is a dummy and exogenous variable. I've simulated the model using the following code:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.tsa.arima.model import ARIMA

    np.random.seed(42)
    c = 1
    fi = 0.5
    sig = 1
    d = 4
    y = [2]
    n = 1000
    z = [0, 1] * int(n/2)
    for i in range(n):
        y.append(c + fi*y[i] + d*z[i] + np.random.normal(scale=sig))
    y = pd.Series(y[1:])
    df = pd.DataFrame({'y': y, 'z': z})
    m = ARIMA(df['y'], exog=df[['z']], order=(1, 0, 0)).fit()
    m.summary()

The output is:

    const     4.686270
    z         2.689766
    ar.L1     0.494128
    sigma2    0.960973
    dtype: float64

The estimated value for the coefficient of $y_{t-1}$ is OK. But the values for the constant and for the coefficient of $z_t$ are very different from what I expected. Estimating the model using OLS gives a very different result:

    df['y_lag'] = df['y'].shift(1)
    df = df.dropna()
    X = sm.add_constant(df[['y_lag', 'z']])
    m2 = sm.OLS(df['y'], X).fit()
    m2.params

    const    1.058504
    y_lag    0.492424
    z        4.012447
    dtype: float64

That is, the values for the coefficient of $y_{t-1}$ are very similar, but the values for the constant and the coefficient of $z_t$ are different. Probably ARIMA estimates the model using a different parameterization. My two questions are: In an ARMA(p, q) model with exogenous variables, what parameterization does ARIMA use in the estimation? In a model like $$ y_t = c + \phi_1 y_{t-1} + \cdots + \phi_p y_{t-p} + d_1 z_{1, t} + d_2 z_{2,t} + \cdots + d_k z_{k, t} + \varepsilon_t $$ how can I calculate the $d_j$ parameters from the ARIMA output?
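My current working guess - and part of what I would like confirmed - is that statsmodels treats this as a regression with ARMA errors rather than an ARMAX-style equation, i.e. something like

$$ y_t = \beta_0 + \beta_1 z_t + u_t, \qquad u_t = \phi_1 u_{t-1} + \epsilon_t. $$

If that is right, then substituting $u_t = y_t - \beta_0 - \beta_1 z_t$ gives

$$ y_t = \beta_0(1-\phi_1) + \phi_1 y_{t-1} + \beta_1 z_t - \phi_1 \beta_1 z_{t-1} + \epsilon_t, $$

which is not the same equation as my data-generating process (it has an extra $z_{t-1}$ term), and that would explain why the constant and the $z$ coefficient don't match. Is this the correct reading of the ARIMA parameterization, and if so, is there a clean way to recover the $d_j$ of my original equation?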
I have data that look like this. My goal is to reduce this 3D data to 2D so it might look like this: turning the angle so that the distance between all classes becomes maximal. So I have written the following MATLAB code:

    function [W] = lda(varargin)
      % Check if there is any input
      if(isempty(varargin))
        error('Missing inputs')
      end
      % Get the data matrix
      if(length(varargin) >= 1)
        X = varargin{1};
      else
        error('Missing data X')
      end
      % Get the class IDs
      if(length(varargin) >= 2)
        y = varargin{2};
      else
        error('Missing class ID y');
      end
      % Get the number of components
      if(length(varargin) >= 3)
        c = varargin{3};
      else
        error('Missing amount of components');
      end

      % Get size of X
      [row, column] = size(X);

      % Create average vector
      mu_X = mean(X, 2);

      % Count classes
      amount_of_classes = y(end) + 1;

      % Create scatter matrices Sw and Sb
      Sw = zeros(row, row);
      Sb = zeros(row, row);

      % How many samples of each class
      samples_of_each_class = zeros(1, amount_of_classes);
      for i = 1:column
        samples_of_each_class(y(i) + 1) = samples_of_each_class(y(i) + 1) + 1; % Remove +1 if you are using C
      end

      % Iterate all classes
      shift = 1;
      for i = 1:amount_of_classes
        % Get samples of each class
        samples_of_class = samples_of_each_class(i);

        % Copy a class to Xi from X
        Xi = X(:, shift:shift+samples_of_class - 1);

        % Shift
        shift = shift + samples_of_class;

        % Get average of Xi
        mu_Xi = mean(Xi, 2);

        % Center Xi
        Xi = Xi - mu_Xi;

        % Copy Xi and transpose it to XiT
        XiT = Xi';

        % Create XiXiT = Xi*Xi'
        XiXiT = Xi*XiT;

        % Add to Sw scatter matrix
        Sw = Sw + XiXiT;

        % Calculate difference
        diff = mu_Xi - mu_X;

        % Borrow this matrix and do XiXiT = diff*diff'
        XiXiT = diff*diff';

        % Add to Sb scatter matrix - Important to multiply XiXiT with samples of class
        Sb = Sb + XiXiT*samples_of_class;
      end

      % Use Cholesky decomposition to solve the generalized eigenvalue problem A*v = lambda*B*v
      Sw = Sw + eye(size(Sw));
      L = chol(Sw, 'lower');
      Y = linsolve(L, Sb);
      Z = Y*inv(L');
      [V, D] = eig(Z);

      % Sort eigenvectors descending by eigenvalue
      [D, idx] = sort(diag(D), 1, 'descend');
      V = V(:,idx);

      % Get components W
      W = V(:, 1:c);
    end

And a working example:

    % Data for the first class
    x1 = 2*randn(50, 1);
    y1 = 50 + 5*randn(50, 1);
    z1 = (1:50)';

    % Data for the second class
    x2 = 5*randn(50, 1);
    y2 = -4 + 2*randn(50, 1);
    z2 = (100:-1:51)';

    % Data for the third class
    x3 = 15 + 3*randn(50, 1);
    y3 = 50 + 2*randn(50, 1);
    z3 = (-50:-1)';

    % Create the data matrix
    X = [x1, y1, z1, x2, y2, z2, x3, y3, z3];

    % Create class ID, indexing from zero
    y = [0 0 0 1 1 1 2 2 2];

    % How many dimensions
    c = 2;

    % Plot original data
    close all
    scatter3(X(:, 1), X(:, 2), X(:, 3), 'r')
    hold on
    scatter3(X(:, 4), X(:, 5), X(:, 6), 'g')
    hold on
    scatter3(X(:, 7), X(:, 8), X(:, 9), 'b')

    % Do LDA - Now what?
    W = lda(X, y, c);

The $W$ matrix contains a lot of eigenvectors. What I need to do is multiply $W$ with $X$, but the problem is that it's not possible. I can take the transpose of $W$, but I still don't think that's the right method to use. So how can I project the data with the eigenvectors from LDA?
I have different series from bank balance sheets in monetary terms covering various years. I think it is best to use a price deflator to control for inflation. I'm attaching two graphs: the before and after of applying the deflator to the data. For every data point in the series I'm applying the following formula: row value from the balance sheet * (deflator for the base year / deflator for the year of the row's data). Both deflators are from the World Bank. Is it correct to do what I just mentioned?
I am currently learning about confidence intervals for the population mean. Assume we do not know the variance of the population. Let $\bar{x}$ be the sample mean, $s$ be the sample standard deviation and $n$ the sample size. I learnt the following: Use $\bar{x}\pm t_{\alpha,n-1}\frac{s}{\sqrt{n}}$ if the data are normally distributed, where $t_{\alpha,n-1}$ is the $\alpha$ t-score from the $T$ distribution with $n-1$ degrees of freedom. If the data are not normally distributed, then with a large enough sample we can use the z-interval $\bar{x}\pm z_{\alpha}\frac{s}{\sqrt{n}}$, where $z_{\alpha}$ is a suitable z-score. The reason is that the T-ratio $T=\frac{\bar{x}-\mu}{s/\sqrt{n}}$ becomes approximately normal with a large enough sample. The lecture notes mention that we can consider $n=30$ a large enough sample size, but without providing any explanation. So my question is simple: Why $n=30$?
I've seen similar questions posted, but I wasn't sure about the answers that were provided. Similarly, when I try to look these methods up, I see mostly general, abstract descriptions that are hard to understand. I think I've read that "G-method" is just a general term for a model that estimates the ATE, and so encompasses the G-formula, G-estimation and G-computation, as well as methods such as propensity scores and inverse weighting. If that is the case, what is the method in the G-formula, G-estimation and G-computation? Additionally, I also see structural models and structural nested mean models mentioned quite a lot. Are these the same as G-methods in that they aren't actually methods, just conceptualisations of types of causal models? Any help would be really appreciated, thanks.
I am working with data collected in a community. We know basic information about the community; let's say the racial composition is 60% black and 40% white, in a city with a population of 500,000. I now have a list of incidents (fewer than 200 of them in total). Let's assume people involved in these incidents are either black or white. I want to be able to test whether the probability of a racial group being involved in these incidents is unusually high/low, given the racial composition that we know for this city. We don't have data for people who are NOT involved. My question is: what tests should I use? I recently attended a talk about Bayesian analysis, so I'm leaning in that direction. Given the small number of incidents, perhaps Poisson regression would be more appropriate? Also, these incidents happened in different neighborhoods, maybe 10-15 per neighborhood. It seems we need to rely on some mixed effects modeling. Is this right? Or I may be overthinking. Thank you very much.
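In case it helps to see my starting point, the simplest thing I could think of was an exact binomial test of the observed share against the city-wide composition, ignoring neighborhoods (the counts here are made up for illustration):

    # e.g. 140 of 200 incidents involved a black individual; city-wide share is 60%
    binom.test(x = 140, n = 200, p = 0.60)

I suspect this throws away the neighborhood structure, which is why I am asking whether a Bayesian and/or mixed-effects formulation would be more appropriate.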
I am trying to calculate the variance of a truncated normal distribution, var(X | a < X < b), given the expected value and variance of the unbounded variable X. I believe I found the corresponding formula on Wikipedia (see picture below), but as a psychologist I am not trained in mathematics and cannot read the formula. Could somebody show me how to do the calculations for an exemplary case? Let's say, for example, if a=0, b=1, var(X) = 0.5, E(X) = 0.5, then what is var(X | a < X < b)? I would be super grateful for help. All the best, ajj
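To show where I am stuck, this is my attempt at translating the Wikipedia formula into R (I am not at all sure I have transcribed it correctly, which is exactly what I would like checked):

    # variance of X | a < X < b, where X ~ Normal(mu, sigma2)
    trunc_var <- function(a, b, mu, sigma2) {
      s     <- sqrt(sigma2)
      alpha <- (a - mu) / s
      beta  <- (b - mu) / s
      Z     <- pnorm(beta) - pnorm(alpha)
      term1 <- (alpha * dnorm(alpha) - beta * dnorm(beta)) / Z
      term2 <- ((dnorm(alpha) - dnorm(beta)) / Z)^2
      sigma2 * (1 + term1 - term2)
    }
    trunc_var(a = 0, b = 1, mu = 0.5, sigma2 = 0.5)

Is this the right reading of the formula for my example with a=0, b=1, E(X)=0.5 and var(X)=0.5?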
Chapter 10, Problem 2 from The Analysis of Time Series by Chatfield and Xing: The problem says, "Consider the following special case of the linear growth model:" $$ X_t = \mu_t + n_t $$ $$ \mu_t = \mu_{t-1} + \beta_{t-1} $$ $$ \beta_t = \beta_{t-1} + w_t $$ where $n_t$ and $w_t$ are independent normal with zero means and respective variances $\sigma_n^2$ and $\sigma_w^2$. The problem has the solver show that the initial least squares estimator of the state vector at time 2, i.e., $[\mu_2, \beta_2]^{T}$, is $[X_2, X_2-X_1]$, and I'm good with that. We are also to show that the covariance matrix for this vector is $$P_2 = \begin{bmatrix} \sigma_n^2 & \sigma_n^2 \\ \sigma_n^2 & 2\sigma_n^2 + \sigma_w^2 \end{bmatrix} $$ This latter is tripping me up. The answer in the book seems to suggest that the expected value of $X_2 - X_1$ is $\beta_2$, for it computes $\mathrm{Var}[X_2-X_1]$ as $$E[(X_2 - X_1 - \beta_2)^2] = E[(n_2-n_1+\beta_2-w_2-\beta_2)^2] = 2 \sigma_n^2 + \sigma_w^2 $$ But using the given system, one could either write $X_2 - X_1 = \beta_2 - w_2 + n_2 - n_1$ or as $X_2 - X_1 = \beta_1 + n_2 - n_1$. If I pretend that $\beta_2$ and $\beta_1$ are values that have been fixed by time 2, then the first leads to the expected value of $X_2 - X_1$ being $\beta_2$, but the second leads to it being $\beta_1$. Using $\beta_1$, the same computation as above would lead to $$E[(X_2 - X_1 - \beta_2)^2] = 2\sigma_n^2 $$ It seems wrong to me to pretend the values $\beta_1$ and $\beta_2$ are fixed and non-stochastic, and getting two different values seems to confirm my impression. I guess I am confused about how any of these recursively-defined systems are started off, and the book doesn't seem to say. It only talks about estimating the initial values from the first few values of $X_t$. I would think that the initial value of $\beta_1$ would be $w_1$, and then I am happy with the value $$\mathrm{Var}[X_2-X_1] = \mathrm{Var}[\beta_1 + n_2 - n_1] = 2 \sigma_n^2 + \sigma_w^2 $$ given as the answer. But then I would think that $$ \mathrm{Var}[X_2] = \mathrm{Var}[\mu_1 + w_1 + n_1] $$ I don't know what to consider $\mu_1$ to be for this model, but I have trouble reconciling this thinking the the answer the book gives, that $$ \mathrm{Var}[X_2] = \sigma_n^2 $$ Any guidance here is appreciated.
I've built a Logistic Regression model in Python for the likelihood of an individual doing an action in the next n days. I am not very experienced at this! My data comprises one row per individual. I have >500k individuals. I have five individual-level features, and a sixth time-based feature: median gap between actions. I've built a model with a test F1 score >0.95. My question: is this sufficient to predict individuals doing this action in the next n days -- or am I accidentally classifying them according to whether they'll do this at all?
Can anyone point me in the right direction for doing sample size/power calculations for hierarchical log-linear analysis of nominal (frequencies) data? Would I get in trouble with reviewers for just using the old rule-of-thumb to get 100 cases per cell?
I have daily sales (total volume in dollars) from 200 stores of the same franchise, over two years. I would like to identify any store with anomalies or special patterns, which could be the sign of a fraud or just a different management (good or bad). My problem is that the stores are not of the same size, hence the daily sales have different volumes. The distribution of the sales may need a transformation before comparison. I thought I could just divide the sales by their mean at each given store, and then proceed with comparisons such as the Anderson-Darling test. I am not sure of the statistical consequences of dividing by the mean. Another option is to standardize the distributions, but that loses information on the standard deviations. What preprocessing approach would you recommend for the cleanest and most informative comparison between all these distributions?
Consider the $\{N(\theta,1):\theta \in \Omega\}$ family of distributions where $\Omega=\{-1,0,1\}$. I am trying to show that this is not a complete family. That is, if $X\sim N(\theta,1)$, I need to find a non-zero function $g$ for which $E_{\theta}[g(X)]=0$ for every $\theta \in \mathbb \Omega$. Now, $$E_{\theta}[g(X)]=0,~~\forall\,\theta \iff \int_{-\infty}^\infty g(x)e^{-(x-\theta)^2/2}\,dx=0,~~\forall\, \theta.$$ But any $g$, I can think of depends on $\theta$. I realize $g$ must be chosen in a way such that the elements of $\Omega$ are among the solutions of the equation $E_{\theta}[g(X)]=0$. How does one come up with an appropriate choice of $g$ from the above equation? A hint would be great. The source of the problem is Ex.21 (page 23) of this note where I have modified the parameter space to make things slightly simpler.
Is the following statement correct? The Central Limit Theorem (CLT) states that as the sample size tends to infinity, the standardized sample mean distribution approaches the standard normal distribution. Motivated by this theorem, we can say that for a large sample size (greater than 30), the standardized sample mean is approximately normally distributed. However, we cannot use this theorem to state that the sample mean by itself is approximately normally distributed for large sample sizes. This is because, according to the Strong Law of Large Numbers (SLLN), the sample mean converges almost surely to the population mean.
I know that the covariance of two random variables, such as X and Y, is calculated as follows: $$ Cov(X, Y) = \frac{\sum{(X - \bar{X})(Y-\bar{Y})}}{n} $$ where $n$ is the size of the sample and $\bar{X}$ and $\bar{Y}$ are the means of X and Y, respectively. What I do not understand is how this formula measures the dependence or the correlation between these two variables. In other words, What does this formula have to do with the dependence between X and Y?
I have a few questions concerning what to do next after balancing your population with IPTW. Here is some code just to have an example:

    library(WeightIt)
    W.out <- weightit(treat ~ age + educ + race + married + nodegree + re74 + re75,
                      data = lalonde, estimand = "ATT", method = "ps")

Outcome = re78. First, if I want to calculate the odds ratio (e.g. odds of re78 in treated vs non-treated), I write the following:

    library(survey)
    d.w <- svydesign(~1, weights = W.out$weights, data = lalonde)
    fit <- svyglm(re78 ~ treat, design = d.w)

My questions are:

"treat" itself doesn't have a weight, since patients have been grouped by the presence or absence of treatment. How does the formula use the weights that I have provided?

Why does the result change if I write fit <- svyglm(re78 ~ treat + age + educ + race + married + nodegree + re74 + re75, design = d.w)? Shouldn't these variables already be accounted for if the weights are used when I specify just "treat" (as I said in question 1)?

In some papers I have seen a "weighted rate" for the outcomes stratified by treatment. For example, at the end of the IPTW table there is a weighted rate for death (which is the outcome) in treated and non-treated patients. Of course death has not been balanced, so how is this possible? Where do these weights come from? Does it make sense to present it like this?
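Regarding the third question, the kind of output I was trying to reproduce is something like a weighted outcome mean per treatment arm, which I attempted with (not sure this is what those papers actually do):

    # weighted mean of the outcome within each treatment group
    svyby(~re78, ~treat, design = d.w, svymean)

If this is indeed how the "weighted rates" in those tables are produced, I still don't understand conceptually why it is legitimate to weight the outcome itself when only the baseline covariates were balanced.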
I want to make sure I am choosing the right statistical test. I am working with 7 semesters of exam scores for a particular course. Each semester the course ran with many sections. However, the students, no matter what section they were in and no matter which teacher they had, were given the same exams. Let A = exam 1 and 2 average (two exams were given within the semester and they were averaged). Let B = final exam score (the course ends with a final). Overall, for each semester, I want to compare A with B to see if a significant difference is found. Which non-parametric test should I use? What should my level of significance be? Which test should I use to see if there is a correlation between A and B for each of the semesters? I tested semester 1 (A & B) through semester 7 (A & B) and found that none of the data sets are normal. I used the Shapiro-Wilk test (all p<.01). Note: Just to clarify,...

** A is the average of exams 1 and 2, which are given within the semester before the final exam. The final exam score equals B.
** The data sets are not normally distributed as per the Shapiro-Wilk test for normality as well as histograms and box plots.
** Overall I wish to compare the averages to see whether there are any differences.
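What I had tentatively planned to run, per semester, is a paired comparison and a rank correlation, along these lines (A and B stand for the two vectors of per-student scores within one semester):

    wilcox.test(A, B, paired = TRUE)         # Wilcoxon signed-rank test for A vs B
    cor.test(A, B, method = "spearman")      # Spearman rank correlation between A and B

Is the paired signed-rank test the right non-parametric counterpart here, given that A and B come from the same students?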