Ranger documentation states that if the importance mode is set to 'Impurity', then the estimated measure is "...the variance of the responses for regression...". Could someone expand on this or provide a relevant publication? As a starting point for an answer, I'm assuming it is something like the sum of all the differences in response variance between nodes pre/post split where the feature of interest is used... maybe normalized by the number of trees?
|
So I'm running into an issue I can illustrate as follows. Let's say you have a shipment of fruit of various kinds, and you want to compare across shipments. You, for some reason, decide to Z-score these shipments. In the first shipment, you have 10 tons of apples and 10 tons of oranges. In the second shipment, you have 10 tons of apples and 20 tons of oranges. If you Z-score within shipments, it seems like you'll run into an issue: the score for apples will end up negative, as it is now below the mean, whereas it was 0 in the original case. Despite the fact that the only change was the amount of oranges, you'll get the false impression that apples went down.
Does this problem have a formal name? What are the approaches to preserve these kinds of changes? Are there alternatives to Z-scoring?
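For concreteness, here is a tiny numeric sketch of the effect described above; a hypothetical third fruit (pears, held fixed) is added as an assumption so the within-shipment standard deviation is nonzero:
import numpy as np

# Hypothetical tonnages: [apples, oranges, pears]; pears are an added third kind
# (an assumption, not in the original example) so the within-shipment SD is nonzero.
shipment1 = np.array([10.0, 10.0, 5.0])
shipment2 = np.array([10.0, 20.0, 5.0])  # only the oranges changed

def zscore(x):
    return (x - x.mean()) / x.std()

print(zscore(shipment1)[0])  # apples' z-score in shipment 1 (~ +0.71)
print(zscore(shipment2)[0])  # apples' z-score in shipment 2 (~ -0.27), although apples did not change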
|
Suppose you sample N people with unequal probabilities from some superpopulation. For each person, your sample contains W_sample, the probability with which the person was sampled from the superpopulation, and their outcome Y. For our purposes, let's assume W in the superpopulation is a draw from a Dirichlet distribution, so W is a probability distribution while W_sample is not and need not sum to 1.
When confronted with unequal sample selection probabilities, Bayesians usually advise to condition on the variables generating the sampling weight. This makes sense as it renders the sampling design ignorable. However, it isn't clear what this would mean in this case - what should be conditioned on?
|
Let's say I have a range of formulations, and each formulation contains a different starting rate of water "x", and I want to test how fast the formula dries out over time (i.e. loss of water over time) and put this on a plot. Let's suppose the starting rate of x for each formulation is known, but there is no control over the starting rate of x. In order to make the regressions on the plot for each formulation more comparable, is it appropriate to scale x for each formula to the same starting rate?
For instance: x1: 20, x2: 35, x3: 22
And then, to scale each resulting regression I could just adjust each data point by ~57% for x2, and ~91% for x3; so that they all have a starting rate of x = 20 and scale the remainder of the data points by those same percentages for x2 and x3?
Thank you!
This is also similar to a question asked here (also on Stack Exchange!) in case my post doesn't make sense.
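For reference, a minimal sketch of the proportional rescaling described above, with hypothetical water-content series standing in for the real formulations:
import numpy as np

# Hypothetical water-content series for three formulations (assumed values)
x1 = np.array([20.0, 15.0, 11.0, 8.0])
x2 = np.array([35.0, 27.0, 20.0, 15.0])
x3 = np.array([22.0, 17.0, 13.0, 10.0])

target_start = x1[0]                        # scale everything to a starting rate of 20
x2_scaled = x2 * (target_start / x2[0])     # factor ~0.571 (the ~57% in the question)
x3_scaled = x3 * (target_start / x3[0])     # factor ~0.909 (the ~91% in the question)

print(x2_scaled[0], x3_scaled[0])           # both series now start at 20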
|
Odds ratio, as the term itself suggests, refers to a ratio of odds. Hence, we need 2 events to compute an odds ratio.
But in simple logistic regression, given that what we are interested in estimating is the relative likelihood of event A over the event not-A, why should we call it an odds "ratio" and not just "odds"?
Perhaps it is just because odds with a denominator of 1 are called an odds ratio?
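As a worked illustration of the distinction (standard logistic-regression algebra, not tied to any particular source): with a single predictor $x$,
$$\mathrm{odds}(x) = \frac{P(A \mid x)}{1 - P(A \mid x)} = e^{\beta_0 + \beta_1 x}, \qquad \mathrm{OR} = \frac{\mathrm{odds}(x+1)}{\mathrm{odds}(x)} = e^{\beta_1},$$
so $e^{\beta_0+\beta_1 x}$ is itself an odds, while the exponentiated slope compares the odds at two covariate values and is therefore a ratio of odds.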
|
Some papers I see take the uncertainty estimation of a prediction as simply its softmax/sigmoid output, whereas some papers will use techniques such as MC Dropout and calculate the variance across the predictions.
The softmax function is typically used in machine learning models to convert a set of input values, often called logits or scores, into a set of output probabilities. These output probabilities can be interpreted as the model's confidence, but I have often heard that they cannot be used to assess confidence, so other methods such as MC dropout are used instead. Why is this the case? What causes the softmax to give high confidence even for predictions that are false?
Is it because the softmax can be intuitively thought of as an ensemble of various activations of neurons, and this leads to some noise creeping in and makes the softmax make wrong predictions confidently?
Why wouldn't MC dropout lead to such problems? During inference, the input image is passed through the network, and each neuron in the network computes an activation value based on its weighted inputs. The weights in the network are learned during training, so they are optimized to produce the correct output for a given input image.
So, as input neurons are removed during MC dropout, the pattern of activation in the network will also change, which would lead to varied predictions; technically it should give high variance for all inputs, but that doesn't happen often.
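For what it's worth, a minimal self-contained sketch (a toy two-layer network with made-up weights, not any particular paper's implementation) of how MC-dropout uncertainty is typically obtained: average the softmax over stochastic forward passes and look at the spread.
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected net with made-up weights (assumption: 10 inputs, 32 hidden units, 3 classes)
W1, b1 = rng.normal(size=(10, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 3)), np.zeros(3)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def forward(x, p_drop=0.5):
    h = np.maximum(x @ W1 + b1, 0.0)      # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop   # dropout kept active at test time (MC dropout)
    h = h * mask / (1.0 - p_drop)
    return softmax(h @ W2 + b2)

x = rng.normal(size=10)                              # one hypothetical input
probs = np.stack([forward(x) for _ in range(100)])   # 100 stochastic forward passes

print(probs.mean(axis=0))   # MC-dropout predictive probabilities
print(probs.std(axis=0))    # spread across passes, used as an uncertainty signal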
|
In the Mann-Kendall test there are 3 values: the p-value, the Z value and Kendall's tau value.
What I want to know is the conceptual difference between Z and Kendall's tau, as I read that both of them indicate a positive or negative relation between the variables.
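To make the two quantities concrete, here is a minimal sketch (ignoring tie corrections, which real implementations include) of how S, Kendall's tau and the Z statistic are usually computed for a series:
import numpy as np

def mann_kendall(x):
    # Minimal Mann-Kendall sketch; no correction for ties (an assumption).
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S: number of increasing pairs minus number of decreasing pairs
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    tau = s / (0.5 * n * (n - 1))              # Kendall's tau: S rescaled to [-1, 1]
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:                                  # Z: standardized S (with continuity correction),
        z = (s - 1) / np.sqrt(var_s)           # compared to N(0,1) to obtain the p-value
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    return tau, z

print(mann_kendall([1.2, 1.5, 1.4, 1.9, 2.3, 2.1, 2.8]))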
|
I created a CausalForest (from econml) model to estimate non-binary outcomes (similar to daily sales amount) given covariates and a binary treatment. I evaluate the model using the following procedure:
use the model, denoted $\tau(X, w)$, to estimate the effects for my dataset;
rank and bin the estimated effects of the units/examples from high to low;
for each bin $b$ of size $n_b$, compute its estimated effect as the mean of the model's estimates over all units in that bin, i.e. $\hat{\tau}_b = {1 \over n_b} \sum_{i \in b}{\tau}(X_i, w=1)$;
for each bin $b$, compute its "empirical" effect by subtracting the mean outcomes of the treated from that of the untreated, i.e. $\tau_b = E[Y_b(1)] - E[Y_b(0)]$.
I found that the effects estimated by the CF have rather small values (small mean effect and small variance) in every bin compared to the "empirical" effects of the corresponding bins. However, the estimates seem to be consistent in the sense that bins with higher estimated effects also have higher empirical effects as computed above; only the magnitudes of the estimates are too small. Further, it does not seem to be an overfitting issue, since the same is observed for both the training set and the validation/test sets.
It is said that applying typical ML algorithms to causal effect estimation may bias the estimated effects towards zero. Does the same happen to CausalForest? What would be the remedy?
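For illustration, a minimal sketch of the binning evaluation described above, using hypothetical stand-ins for the model's estimates, the treatment indicator and the outcome:
import numpy as np
import pandas as pd

# Hypothetical stand-ins (assumptions): tau_hat = effects estimated by the causal forest,
# w = binary treatment indicator, y = observed outcome.
rng = np.random.default_rng(0)
n = 5000
tau_hat = rng.normal(1.0, 0.5, n)
w = rng.integers(0, 2, n)
y = 2.0 + tau_hat * w + rng.normal(0.0, 1.0, n)

df = pd.DataFrame({"tau_hat": tau_hat, "w": w, "y": y})
df["bin"] = pd.qcut(df["tau_hat"], q=5, labels=False)          # bin by estimated effect (bin 4 = highest)

estimated = df.groupby("bin")["tau_hat"].mean()                # mean model estimate per bin
arm_means = df.groupby(["bin", "w"])["y"].mean().unstack()     # mean outcome per bin and treatment arm
empirical = arm_means[1] - arm_means[0]                        # "empirical" effect: treated minus untreated
print(pd.DataFrame({"estimated": estimated, "empirical": empirical}))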
|
I am trying to fit an RJAGS zero-inflated negative binomial model. The data I am using has 451 observations and only 12 of them have values different from 0, which means that 97% of my observations are 0. My objective is obtaining the posterior distribution of the probability of the data belonging to the non-structural zero part, $\pi$, the expected value of the negative binomial part, $\mu$, and the size parameter $r$. My data is distributed as:
I have created a model in RJAGS with the following structure:
negativebinom <- "model {
  # Likelihood
  for (i in 1:length(Y)) {
    Y[i] ~ dnegbin(p1[i], r)
    p1[i] <- r / (r + mu1[i])
    mu1[i] <- z[i] * mu
    z[i] ~ dbern(pro)
  }
  log(mu) <- eta
  pro <- 1 - zero.prob
  logit(zero.prob) <- theta
  theta ~ dnorm(2, 1)
  eta ~ dgamma(1.2, 0.7)
  r ~ dnorm(7.280611e+06, 1.430766e+04)
}"
Here $Y$ stands for the count variable, $r$ for the size parameter of the negative binomial, and $p1$ for the probability parameter of each observation, which depends on $\mu_1$. $\mu_1$ is the expected value of each observation and depends on $z$, a Bernoulli variable modelling whether or not we are in the non-structural zero part; $z$ takes value 0 if $Y$ is 0 and 1 otherwise. However, I think I am forcing the model to consider all 0's to be part of the non-structural 0's, not leaving any chance for them to be generated by the negative binomial part. In fact, when computing the posterior distribution, the probability zero.prob, which should be the probability of a non-structural zero, matches the actual proportion of 0's in our data:
What should I modify in my model to make this probability model the non-structural zero probability and not the overall probability of being zero?
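For reference, the standard zero-inflated negative binomial mixture that this kind of model aims for can be written (with $\pi$ the probability of belonging to the negative-binomial component, i.e. pro in the code above) as
$$P(Y_i = 0) = (1-\pi) + \pi\,\mathrm{NB}(0 \mid r, \mu), \qquad P(Y_i = y) = \pi\,\mathrm{NB}(y \mid r, \mu) \quad (y > 0),$$
so zeros can arise either structurally, with probability $1-\pi$, or from the count distribution itself.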
|
I am trying to generate evenly distributed particles in an $n$-dimensional flat torus or a periodic hypercube.
I am not sure if any of this approaches suffices. Can you suggest alternative methods for generating evenly distributed particles in this space or how to correct any of these?
First approach:
Sampling $\varphi = \arccos(1-2u)$ with $u \in U[0,1]$ for the polar angle of the 3D unit sphere $\left( \mathcal{S}^{2} \subset \mathbb{R}^{3} \right)$ prevents accumulation of points near the poles, the azimuthal angle being sampled as $\theta = 2\pi v$ with $v \in U[0,1]$.
I was wondering if "copying" this behaviour by sampling $$\vec{x} = \{ 2 \cdot \arccos(1-2u_{i}): u_{i} \in U[0,1] \}_{i=1}^{n} \in [0,2 \pi]^{n}$$
would be enough to generate $n$ random points in an $n$-dimensional flat torus or hypercube, where each dimension has length $2\pi$.
I have run a little simulation for 2D and 3D and it looks like the corners (which represent the same point) aren't very likely to have many particles, which is reasonable given the functional form of the $\arccos$.
Another approach:
Considering the map into the torus
$$\sigma: (x_1,x_2) \in [0,2\pi]^{2} \mapsto \left[ (1+\cos(x_1))\cdot \cos(x_2),\ (1+\cos(x_1))\cdot \sin(x_2),\ \sin(x_1) \right] \in \mathbb{T}^{2} \subset \mathbb{R}^{3}$$
Obtaining the differential volume element $\| \frac{\partial \sigma}{\partial x_1} \times \frac{\partial \sigma}{\partial x_2} \| = 1+\cos(x_1)$, which also gives the enclosed area $A = \int \int \| \frac{\partial \sigma}{\partial x_1} \times \frac{\partial \sigma}{\partial x_2} \| dx_1 dx_2 = (2\pi)^2$
Then, obtaining the marginal probability density functions $f(\cdot)$ and the cumulative probability functions $F(\cdot)$,
$$f(x_1) =\frac{1}{(2\pi)^2} \int_{0}^{2\pi} 1+\cos(x_1) dx_2 = \frac{1+\cos(x_1)}{2\pi} \Longrightarrow \\ \Longrightarrow F(x_1) = \int_{0}^{x_1} f(x_1) dx_1 = \frac{x_1 + \sin(x_1)}{2\pi}$$
$$f(x_2) =\frac{1}{(2\pi)^2} \int_{0}^{2\pi} 1+\cos(x_1) dx_1 = \frac{1}{2\pi} \Longrightarrow \\ \Longrightarrow F(x_2) = \int_{0}^{x_2} f(x_2) dx_2 = \frac{x_2}{2\pi}$$
I tried to also sample $(x_1,x_2)$ from $u_1,u_2 \in U[0,1]$ by computing $x_1 = F^{-1}(u_1)$ (numerically solved) and $x_2 = F^{-1}(u_2)= 2\pi u_2$.
Note that the inverse function makes sense since $F(\cdot)$ is monotone and bijective (injective and surjective) onto $[0,1]$.
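For what it's worth, a minimal sketch of the numerical inversion described above (sampling $x_1$ from $F(x_1) = (x_1+\sin x_1)/(2\pi)$ with a root finder; the function and variable names here are made up):
import numpy as np
from scipy.optimize import brentq

def sample_x1(n_samples, rng=None):
    # Sample x1 on [0, 2*pi] with density (1 + cos(x1)) / (2*pi) by numerically
    # inverting F(x1) = (x1 + sin(x1)) / (2*pi).
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(n_samples)
    f = lambda x, ui: (x + np.sin(x)) / (2 * np.pi) - ui
    return np.array([brentq(f, 0.0, 2 * np.pi, args=(ui,)) for ui in u])

x1 = sample_x1(10_000)
x2 = 2 * np.pi * np.random.default_rng(1).random(10_000)   # F^{-1}(u2) = 2*pi*u2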
For higher dimensions ($n>2$) the mapping $\sigma = (\sigma_1, \dots, \sigma_n) $ would be modified accordingly:
$$
\begin{array}{ll}
\sigma_1(\vec{x}) &= \left[ 1+\cos(x_1) \right] \cos(x_2) \\
\sigma_2(\vec{x}) &= \left[ 1+\cos(x_1) \right] \sin(x_2) \cos(x_3) \\
\dots & \\
\sigma_{n-2}(\vec{x}) &= \left[ 1+\cos(x_1) \right] \sin(x_2) \cdot \dots \cdot \sin(x_{n-3}) \cos(x_{n-2}) \\
\sigma_{n-1}(\vec{x}) &= \left[ 1+\cos(x_1) \right] \sin(x_2) \cdot \dots \cdot \sin(x_{n-2}) \cos(x_{n-1}) \\
\sigma_{n}(\vec{x}) &= \left[ 1+\cos(x_1) \right] \sin(x_2) \cdot \dots \cdot \sin(x_{n-2}) \sin(x_{n-1}) \\
\end{array}
$$
Note:
I have used as reference http://corysimon.github.io/articles/uniformdistn-on-sphere/
|
Data health for PLS modeling
Hi,
I am working on manufacturing data that is fairly new (only 60 batches produced so far); the dataset size is 60 observations of 150 variables, and I am building a PLS model to predict the final product quantity in kg that meets minimum specifications. After removing intermediate product measurements, redundant variables, and calculated variables to avoid collinearity, I am left with 60 observations of 110 variables. This PLS model has a predictability of only 23%, and more than 70% of the variables have huge variations in their data so far. My thought is that this process data is too early-stage and not sufficient to make a predictive PLS model, but I would like to get some expert opinions on this situation. Can I assume that adding more observations to this data helps the model? Is there any basic data health check I am missing for PLS modeling before I submit the outcomes to my manufacturing team? Thank you
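One basic health check that may help before collecting more batches is to look at cross-validated predictive performance as a function of the number of PLS components; a minimal sketch with placeholder arrays standing in for the 60 x 110 batch data:
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 110))      # placeholder for the 60 batches x 110 variables
y = rng.normal(size=60)             # placeholder for final product quantity (kg)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
for k in range(1, 11):
    q2 = cross_val_score(PLSRegression(n_components=k), X, y, cv=cv, scoring="r2").mean()
    print(k, round(q2, 3))          # cross-validated R^2 ("Q2") versus number of components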
|
I have a business use case around adverse news detection.
We have set up an experiment where we compare the human vs a bot, and we need to test it. The experiment results will come out something like this:
Case #, Human (0,1), Bot(0,1)
0 - No adverse news detected
1- Adverse news detected
Some quirks about our case:
H0 is that the human is better at detection. So there has to be sufficient evidence to disprove it.
We do not have ground truth label. In a way the human is the ground truth
What I need to know:
How to test this hypothesis?
What is the minimum sample size for this test?
|
Suppose I'm running a regression that looks something like
$$\log(price)=\beta_0 + \beta_1\log(population)+\beta_2\log(population)^2.$$
I have found the residuals, grouped them according to the number of sellers in the observation's town, and calculated the mean residual for each group. Suppose the residuals are 0.05 for 1 seller, -0.01 for 2 sellers, -0.02 for 3 sellers.
I want to make a statement about the %markup to the average price for each group.
Since these are logged residuals, can I just interpret the mean residual as the % markup? (e.g. there is a 5% markup from the average price when there is 1 seller, -1% when there are 2, etc.)
Or do I need to use the mean log-price, and then calculate the %change using: $$100 \times \frac{(meanLogPrice+residual)-meanLogPrice}{meanLogPrice} = 100 \times \frac{residual}{meanLogPrice}? $$
|
Screenshot from page 80 of the textbook "Introduction to Linear Regression Analysis" fifth edition by Douglas C. Montgomery
Let $X$ be $n \times p$, $y$ and $\hat{y}$ be $n \times 1$, and $\hat{\beta}$ be a $p \times 1$ matrix in the multiple linear regression model. From the matrix calculation, we can easily find $\hat{\beta} = (X'X)^{-1}X'y$ and $\hat{y}=X\hat{\beta}$. However, in a later part of this chapter, on the estimation of $\sigma^{2}$, while calculating the residual sum of squares the textbook says $X'X\hat{\beta} = X'y$, which seems to indicate $X\hat{\beta}=y$. But I don't understand why it is not $\hat{y}$. Can anyone answer this question?
|
I am using ANOVA and T-Test to compare wheat grain characteristics between states, agroecological zones, soils, etc.
Most of this data is grain mineral concentrations but some of it is ratios describing grain mineral bioavailability. For example, a phosphorus fraction expressed as a percentage of total phosphorus.
My question is, can I compare means if the means are generated from ratios? Or do I have to transform the data first? I'm asking because my supervisor gave me this comment:
I believe the estimates of mineral bioavailability as a molar ratio are just that, a ratio. Data that involves ratios should be transformed accordingly for proper comparison. There is no reporting on data transformation in the statistical analysis.
|
I have a confusion related to the likelihood function. I suppose that a user's waiting time $W$ follows an exponential distribution with rate $\lambda$, and the prior of $\lambda$ follows Gamma($\alpha$, $\beta$). We have the information that after the user has waited for 5 min, he still needs to wait for another 10 min. But I am confused: if I want to use this information to do a Bayesian update on the waiting time, should it be
$$P(\lambda|D)=P(X=5+10|X\geqslant 5) \times P(X\geqslant 5)\times P(\lambda)=P(X=5+10)\times P(\lambda)$$ or $$P(\lambda|D)=P(X=5+10|X\geqslant 5)\times P(\lambda)$$ or other.
|
I have carried out a designed agricultural experiment with two treatments and recorded the effect on the abundance of a pest insect on three dates. The field experiment was divided into four blocks with two plots (replications) per block, resulting in 2 x 2 x 4 = 16 plots. Pest insects were counted per plant on the same 15 plants in a row in each of the replications (I wrote another post asking how to deal with this spatial correlation). The pest insects originated from one or two nearby fields and are therefore not evenly distributed. The data looks like this:
Treatment  Block  Plot  Date        Plant  Insects
A          1      1     2019-06-18  1      0
A          1      1     2019-06-18  2      5
A          1      1     2019-06-18  3      2
...
B          4      16    2019-07-10  15     1
I have three Date levels "2019-06-18", "2019-06-25", "2019-07-10". I have already analysed the data date by date. I used the suggested error structure from Jones, Harden, Crawley (2022) "The R Book", chapter 13, for nested (hierarchical) structures:
single_date_model <- glmmTMB(insect ~ treatment + (1 | block/plot), family="poisson",
data = subset(mydata, DATE == "2019-06-25"))
However, I would like to analyse all dates at once. The above book suggests adding + (time | random) to the model, which in my case should be + (cDATE | BLOCK / PLOT) (I changed DATE to cDATE by mydata$cDATE <- as.integer(mydata$DATE) - 18064 to get a continuous variable starting at 1). As an alternative, they suggest adding time as a fixed effect and comparing the models. These are the models:
t_random_model <- glmmTMB(insect ~ treatment + (fDATE | block / plot), family = "poisson", data = mydata, control=glmmTMBControl(optimizer=optim, optArgs=list(method="BFGS")))
t_fixed_model <- glmmTMB(insect ~ treatment + cDATE + (1 | block / plot), family = "poisson", data = mydata, control=glmmTMBControl(optimizer=optim, optArgs=list(method="BFGS")))
(I added control=glmmTMBControl(optimizer=optim, optArgs=list(method="BFGS")) to both as suggested in the vignette Troubleshooting with glmmTMB in order to avoid convergence problems.)
Comparing the models with anova() shows that the t_random_model is significantly better, with a far lower AIC. However, unlike my data with three irregular time points, the example in the book has five equidistant time points, and I'm not sure that the way I did it is still valid.
I have also tried a model with an Ornstein-Uhlenbeck covariance structure from the vignette Covariance structures with glmmTMB, which is said to be able to handle irregular time points. For the model I prepared the data with mydata$numDATE <- numFactor(mydata$cDATE) and then ran:
t_corr_model <- glmmTMB(insect ~ treatment + ou(numDATE + 0 | block / plot),
data = mydata)
Comparing it with anova() says that it's a worse model. However, I'm not sure if one model is nested within the other and if it's valid to compare them.
Are all of these models valid ways to analyse the data? Is anova() the right way to find out which model is best or should I go another way?
|
I am currently conducting a set of analyses examining the relationship between two predictors and an outcome. For example, the relationship between motivation (predictor 1), revision (predictor 2), and performance in an exam (outcome).
I have reason to believe that predictor 2 (e.g. revision) may mediate the relationship between predictor 1 (motivation) and the outcome (performance on the exam). I have therefore run a mediation model and find evidence of full mediation after controlling for covariates.
I am also interested in whether a model containing the predictor (e.g. motivation) and the mediator (e.g. revision) is more predictive of the outcome than the mediator alone. Can I obtain this from the mediation model, or would I need to conduct additional analyses to examine this (e.g. separate regression analyses including only the mediator (model 1) and then the mediator and the predictor (model 2), and then compare these models)?
|
I have an experiment which comprises a numerical dependent variable, say a feature such as growth_rate, and three independent factor variables: where the samples were collected, i.e. locality; where the collected samples were grown, i.e. medium; and which taxonomical group they belong to, i.e. taxa. Plus, there is a variable that adds some random noise, which will be considered the random effect.
What I need to test is the combined effect of taxa and medium (their interaction) while controlling for locality. A further complication is that I need to test all the possible combinations between taxa and medium.
To solve such a problem I am thinking about a model like:
lme(growth_rate ~ medium * taxa + locality, random = ~ 1 | random, data = data)
but then how to construct the matrix for the multcomp function glht?
I am reading the vignette for the multcomp package and I can understand the two-way ANOVA example and how it works, but I am unable to extend it to what I need. Specifically, I was looking at the Two-Way ANOVA part, but I am still missing how to add the locality variable.
I was also thinking about merging the medium and taxa variables into one, as mentioned in this thread, but I am not sure how to manage the fact that I also have the locality variable.
These are some data, to give an idea of what I am dealing with. I randomised the whole table, so it's not the real data.
structure(list(locality = c("L1", "L1", "L1", "L1", "L1", "L1",
"L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1",
"L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L2", "L2", "L2",
"L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2",
"L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2",
"L2", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3",
"L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3",
"L3", "L3", "L3"), medium = c("M1", "M2", "M3", "M1", "M2", "M3",
"M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2",
"M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1",
"M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3",
"M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2",
"M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1",
"M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3",
"M1", "M2", "M3"), random = c("rnd9", "rnd9", "rnd9", "rnd9",
"rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9",
"rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9",
"rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd7",
"rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd6", "rnd6", "rnd6",
"rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7",
"rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7",
"rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7",
"rnd8", "rnd8", "rnd8", "rnd8", "rnd8", "rnd8", "rnd6", "rnd6",
"rnd6", "rnd6", "rnd6", "rnd6", "rnd6", "rnd6", "rnd6"), taxa = c("g1",
"g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1",
"g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1",
"g2", "g2", "g2", "g2", "g2", "g2", "g2", "g2", "g2", "g2", "g3",
"g3", "g3", "g2", "g2", "g2", "g2", "g2", "g2", "g2", "g2", "g2",
"g2", "g3", "g3", "g3", "g3", "g3", "g3", "g3", "g3", "g3", "g3",
"g3", "g3", "g3", "g3", "g3", "g3", "g2", "g2", "g2", "g2", "g2",
"g2", "g1", "g1", "g1", "g3", "g3", "g2", "g2"), growth_rate = c(7L,
2L, 7L, 4L, 5L, 1L, 6L, 10L, 0L, 5L, 4L, 0L, 10L, 0L, 1L, 3L,
8L, 8L, 0L, 0L, 0L, 5L, 0L, 6L, 5L, 3L, 10L, 1L, 7L, 5L, 0L,
1L, 7L, 10L, 3L, 3L, 6L, 6L, 6L, 2L, 2L, 1L, 10L, 0L, 5L, 7L,
1L, 2L, 8L, 5L, 9L, 1L, 4L, 10L, 0L, 4L, 3L, 3L, 5L, 7L, 3L,
5L, 10L, 5L, 2L, 0L, 10L, 0L, 9L, 9L, 3L, 1L, 10L, 1L, 0L)), class = "data.frame", row.names = c(NA,
-75L))
|
For my Master's thesis I have to estimate a DCC-GARCH model to examine the correlation between real estate house prices and the stock market. I tested the data for normality (both series not normal) and stationarity (both not stationary), and ran a variance ratio test (significant).
I used the log function because of the non-normality and took the first difference. After this, the real estate house prices were still not stationary, so I took the second difference which resulted in stationary data. The stock market data was stationary after taking the first difference, but I think I need to take the second difference as well in order to use it for the DCC-GARCH model.
The code I used for the DCC model is:
#perform DCC
model1=ugarchspec(mean.model = list(armaOrder=c(0,0)),variance.model = list(garchOrder=c(1,1),model="sGARCH"),distribution.model = "norm")
modelspec=dccspec(uspec = multispec(replicate(2,model1)),dccOrder = c(1,1), distribution = "mvnorm")
modelfit=dccfit(modelspec,data=(data.frame(ts_nominal,ts_share)))
modelfit
I'm not sure if I took the right steps to perform this analysis or if my code is even correct. Compared to other papers, I find it strange that only 3 parameters are significant and that alpha1 for stocks and dcca1 are almost equal to 1.
Can anyone help me with this?
Update: Further Research
I have proceeded by taking the log return of both the property price index and stock price index. Then I used the diff() function, resulting in both time series being stationary. The results, however, are barely different than the last results I showed in the post.
Distribution : mvnorm
Model : DCC(1,1)
No. Parameters : 11
[VAR GARCH DCC UncQ] : [0+8+2+1]
No. Series : 2
No. Obs. : 130
Log-Likelihood : 502.7599
Av.Log-Likelihood : 3.87
Optimal Parameters
-----------------------------------
Estimate Std. Error t value Pr(>|t|)
[ts_prop].mu 0.000547 0.003082 0.177488 0.859125
[ts_prop].omega 0.000011 0.000072 0.156417 0.875704
[ts_prop].alpha1 0.349696 0.649448 0.538451 0.590266
[ts_prop].beta1 0.649304 0.387195 1.676944 0.093553
[ts_share].mu 0.000031 0.008641 0.003615 0.997116
[ts_share].omega 0.000004 0.000007 0.561343 0.574564
[ts_share].alpha1 0.000000 0.000673 0.000009 0.999993
[ts_share].beta1 0.999000 0.000882 1132.157095 0.000000
[Joint]dcca1 0.000000 0.000007 0.000353 0.999719
[Joint]dccb1 0.895265 0.119982 7.461638 0.000000
Information Criteria
---------------------
Akaike -7.5655
Bayes -7.3229
Shibata -7.5784
Hannan-Quinn -7.4669
Elapsed time : 0.909425
The next step in our analysis is to apply linear regression to see which determinants (like the long term interest rate) have a significant effect on the dynamic correlation between the property return time series and stock return time series. For this, I extracted the dynamic correlations from the DCC-GARCH model, but as you can see in the graph 'fcor', these correlations all have the same value of 0.133784 with minimal changes.
mod1=lm(fcor~long)
> summary(mod1)
Call:
lm(formula = fcor ~ long)
Residuals:
Min 1Q Median 3Q Max
-0.000000010205 -0.000000003962 -0.000000001022 0.000000004495 0.000000019842
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.1337839573855 0.0000000005038 265555616.311 < 0.0000000000000002 ***
long 0.0000000045540 0.0000000016445 2.769 0.00646 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.000000005664 on 128 degrees of freedom
Multiple R-squared: 0.05652, Adjusted R-squared: 0.04915
F-statistic: 7.669 on 1 and 128 DF, p-value: 0.006455
I also performed regression on other determinants with all having a significant effect on the dynamic correlations. Can someone explain to me why the dynamic correlations barely change and why this has an effect on the linear regression result?
|
I'm trying to predict a financial feature (continuous) and there are two or more good regression models. Is it possible to combine multiple regression models? If so, what is this kind of method called?
|
For example, I am interested in specific brain regions' development over time.
I have data, but it was collected at a single time point; it is not longitudinal.
However, in the data I have participants aged from 0 to 17.
Also, I have the depression score of each participant.
So, my research question is:
whether there is any statistically significant difference in certain brain regions' developmental trajectories between depressed children and healthy children.
But my data is collected at one-time point, and it is not longitudinal.
So I am concerned about whether I can use longitudinal data analysis methods to compare developmental trajectories between the two groups (although the sample size is sufficient, and participants range in age from 0 to 17).
|
Motivating question
I have a high-dimensional state space $\Omega \subseteq \mathbb R^n$ with an admissible subset $S\subseteq \Omega$, which is connected. I would like to draw a uniform random sample from $S$.
In my application, it is easy to verify whether a state $\vec x$ is in $S$, but difficult to find a state in $S$ ad hoc. However, it is known that $\vec 0 \in S$.
Solution idea
I think the problem should be relatively easy to solve with a Metropolis-Hastings algorithm.
Start at $x_0 = \vec 0$. Set $i:=0$.
Set $i:=i+1$.
Randomly generate a close-by state $\vec x_i:= \vec X( \vec x_{i-1})$ based on $\vec x_{i-1}$.
Accept the step if $\vec x_i \in S$, else set $\vec x_i = \vec x_{i-1}$.
Add $\vec x_i$ to the sample.
Repeat the procedure after step 2.
We may need to throw away a large number of steps from the burn-in period.
Question
I am wondering which properties the random variable $\vec X(\vec x)$ needs to have to lead to a uniform sample over the state space. Does the step distribution need to satisfy detailed balance? Why?
Example
Draw a uniform sample from the unit disk in 2D and use the bivariate standard normal distribution centred at $\vec x$ for the steps, i.e., $\vec X(\vec x) \sim \vec N(\vec x, \underline{1})$. I think this should work, but I also think that biasing the steps towards the centre would lead to a different sample.
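For reference, a minimal sketch of this unit-disk example (symmetric Gaussian steps; rejected proposals repeat the current state):
import numpy as np

rng = np.random.default_rng(0)

def in_disk(x):
    return np.dot(x, x) <= 1.0                 # the admissible set S: the unit disk

def sample_disk_mh(n_steps=50_000, step_sd=1.0, burn_in=1_000):
    x = np.zeros(2)                            # known admissible starting point
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.normal(scale=step_sd, size=2)   # symmetric proposal
        if in_disk(proposal):                  # accept only if the proposal is admissible
            x = proposal
        samples.append(x.copy())               # on rejection, the current state is repeated
    return np.array(samples[burn_in:])

s = sample_disk_mh()
print(s.mean(axis=0))                            # ~ (0, 0) for a uniform disk
print((np.linalg.norm(s, axis=1) ** 2).mean())   # ~ 0.5, the uniform-disk value of E[r^2]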
The question is so basic that I would expect to find lecture notes that help me, but so far I was unsuccessful in finding something where the Metropolis algorithm was interpreted in a Bayesian framework with prior and posterior distribution. A corresponding reference might be perfectly fine as an answer.
|
That’s a sequel to my previous question Does Gaussian process functional regression fulfill the consistency condition?
The conclusion was that:
Gaussian process regression with i.i.d. Gaussian noise returns the same posterior Gaussian process for any partition of the data;
... but with completely different calculations/algorithms. In particular, GP regression with full $n-$update (i.e. the trivial partition) has $O\left( {{n^3}} \right)$ generic computational complexity but GP regression with $n$ sequential $1-$updates (i.e. the atomic partition) has exponential computational complexity in $n$. That’s the reason why we never do $n$ sequential $1-$updates but a $(n-1)-$update followed by a $1-$ update in sequential/online learning, see e.g. Using Gaussian Processes to learn a function online.
Now, consider a Bayesian problem with data $D = \left( {{d_1},...,{d_n}} \right)$ and parameters $\Theta $:
$p\left( {\left. \Theta \right|D} \right) \propto p\left( {\left. D \right|\Theta } \right)p\left( \Theta \right)$
Proposition $1$: if the likelihood factorizes
$p\left( {\left. D \right|\Theta } \right) = \prod\limits_{i = 1}^n {p\left( {\left. {{d_i}} \right|\Theta } \right)} $ and $\Theta$ is fixed once and for all
then the posterior calculations are exactly the same for any partition $D = \bigcup\limits_{j = 1}^p {{D_j}} $ of the data and any of its $p!$ permutations.
Proof: we have
$p\left( {\left. \Theta \right|D} \right) \propto p\left( {\left. D \right|\Theta } \right)p\left( \Theta \right) = \left( {p\left( {\left. {{D_p}} \right|\Theta } \right)...\underbrace {\left( {p\left( {\left. {{D_2}} \right|\Theta } \right)\underbrace {\left( {p\left( {\left. {{D_1}} \right|\Theta } \right)p\left( \Theta \right)} \right)}_{ \propto p\left( {\left. \Theta \right|{D_1}} \right)}} \right)}_{ \propto p\left( {\left. \Theta \right|{D_1},{D_2}} \right)}...} \right)$
Therefore, the only difference from one partition to another and from one permutation to another are the parentheses and the order of the products that are totally useless by the associative and commutative properties of the product. QED.
Proposition 1 just says that the likelihood $\prod\limits_{i = 1}^n {p\left( {\left. {{d_i}} \right|\Theta } \right)} $ and the full posterior remain the same regardless of how the data are grouped together and of their order of arrival.
Corollary $1$: GP regression with i.i.d. Gaussian noise is not a Bayesian method.
Proof: We have ${d_i} = \left( {{x_i},{y_i}} \right)$ and for i.i.d. Gaussian noise the likelihood factorizes
$p\left( {\left. D \right|\Theta } \right) = \prod\limits_{i = 1}^n {p\left( {\left. {{x_i},{y_i}} \right|f,\sigma } \right) = } \prod\limits_{i = 1}^n {p\left( {\left. {{y_i}} \right|{x_i},f,\sigma } \right)p\left( {\left. {{x_i}} \right|f,\sigma } \right)} \propto \prod\limits_{i = 1}^n {p\left( {\left. {{y_i}} \right|{x_i},f,\sigma } \right)} \propto {\sigma ^{ - n}}\prod\limits_{i = 1}^n {{e^{ - \frac{{{{\left( {{y_i} - f\left( {{x_i}} \right)} \right)}^2}}}{{2{\sigma ^2}}}}}} $
Moreover, $\Theta$ is fixed once and for all: $\Theta = \left( {f,\sigma ,m,k,{\rm M},{\rm K}} \right)$, see Is Gaussian process functional regression a truly Bayesian method (again)?
But the posterior calculations are not exactly the same from one partition/update scheme to another. QED.
In the same way, we have
Proposition $2$: if the likelihood factorizes and $\Theta$ is fixed once and for all, then Bayesian inference has $O(n)$ computational complexity.
Proof: Computing the prior $p\left( \Theta \right)$ has $O(1)$ computational complexity because it does not depend on $n$. Computing the likelihood $p\left( {\left. D \right|\Theta } \right) = \prod\limits_{i = 1}^n {p\left( {\left. {{d_i}} \right|\Theta } \right)} $ has $O(n)$ computational complexity. Computing the normalization constant $p\left( D \right) = \int {p\left( {\left. D \right|\Theta } \right)p\left( \Theta \right){\text{d}}\Theta } $ has $O(1)$ complexity because that's a $|\Theta|-$ dimensional integral that has nothing to do with $n$ (moreover, we don't need to compute it, it cancels out by Leibniz rule/Feynman trick). Therefore, computing the full posterior $p\left( {\left. \Theta \right|D} \right)$ has $O(n)$ computation complexity. Finally, drawing posterior inferences, taking Bayes estimators and computing credible intervals has $O(1)$ computational complexity because it involves $\left| \Theta \right|-$dimensional integrals whose complexity basically does not depend on $n$ (we just integrate different functions that depend on $n$ but the complexity of those integrals basically does not depend on $n$). All in all, Bayesian inference has $O(n)$ computational complexity. QED.
For one example of such truly Bayesian $O(n)$ functional regression algorithm, see Bayesian interpolation and deconvolution.
Corollary $2$: again, GP regression with i.i.d. Gaussian noise is not a Bayesian method.
Proof: GP regression does not have $O(n)$ computational complexity.
Is that correct please?
|
I ran a null binomial generalized additive model (GAM) using mgcv and it gives a negative deviance explained!
As far as I know, deviance explained is an analogue of R^2, so it should be between 0 and 1. So is this negative deviance explained caused by a package error? If so, then how can I manually estimate deviance explained?
My code is given below
library(mgcv)
x1 = rnorm(100)
x2 = rnorm(100)
y = rbinom(100, 1, 0.5)
Data = data.frame(y, x1, x2)
model = gam(y ~ 1, data=Data, family=binomial)
summary(model)$dev.expl
output:
[1] -2.050785e-16
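For reference, a commonly used definition of deviance explained (whether mgcv reports exactly this quantity is an assumption here) is, by analogy with $R^2$,
$$D^2_{\text{expl}} = \frac{D_{\text{null}} - D_{\text{model}}}{D_{\text{null}}} = 1 - \frac{D_{\text{model}}}{D_{\text{null}}},$$
which for an intercept-only (null) model is zero in exact arithmetic, so a value on the order of $-10^{-16}$ is consistent with floating-point round-off rather than a substantive negative value.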
|
Consider a simple regression model $Y_i = \alpha + \beta X_i + u_i, (i=1,...,n)$ where $(Y_i,X_i)$ is a random sample. Let $\hat{\beta}$ be the OLS estimator of $\beta$ and $\bar{X}$ be the sample mean of $X_i$ given by $\bar{X}=n^{-1} \sum_{i=1}^n X_i$.
I'm trying to derive $Var((\hat{\beta}-\beta)\bar{X})$. If $X_i$ is fixed (non-random), it is easy to derive. But when $X_i$ is random, I have no idea how to derive this variance, since $X_i$ appears both in the numerator and the denominator of $\hat{\beta}$.
Any comments/answers would be appreciated!
|
Suppose I have a likelihood maximisation problem
$$ \hat{\theta} = \arg\max_{\theta} L_n(\theta;y) $$
where $\theta = [\theta_1, \theta_2, ...., \theta_k]^T$.
What if I were instead to solve the maximisation problem leaving out one parameter,
$$ \hat{\theta}_{-k} = \arg\max_{\theta_{-k}} L_n(\theta_{-k};y), $$
but loop over each value of $\theta_k$ and pick the specification with the highest likelihood. Would this be identical to estimating the problem jointly?
|
I am developing a credit risk decisioning model, i.e. a model that assesses the risk of default of an incoming transaction and decides whether to accept it or not. Of course my dataset is imbalanced: the minority class (i.e. defaulted transactions) represents ~5% of my data.
What I care about is discrimination power rather than good probabilities because I will use the model to make decisions given an acceptance rate target (e.g. I want to accept 90% of incoming transactions), not to make financial predictions (in which case well calibrated probabilities would be important).
Because of that, I evaluate my model with ROC AUC (or PR AUC, I am still unsure which would be best). However, I saw that even though I evaluate my model with AUC, I should still keep binary:logistic as the objective function of my XGBClassifier. The reasons are unclear to me, but one difficulty I can foresee with an "AUC objective function" is that it is not possible to define a loss function (let alone a differentiable one) that would give the "AUC loss" of one given sample, as AUC is an aggregate loss rather than an individual one, and I understand that XGB needs an individual loss to compute the losses at the leaf level.
Knowing that, it means that the only way to optimize my model for discrimination power is to use AUC as an evaluation metric in the process of hyperparameter optimization. I find that quite disappointing because, in my experience, hyperparameter optimization is not a real game changer and usually only gains a few basis points of AUC.
Therefore my questions are:
Is it possible to re-define the objective function to optimize XGBClassifier for discrimination power rather than probabilities?
If not, what are other ways to "boost" the discrimination power of my model besides hyperparameter optimization?
Conceptually, the ability to discriminate between 2 samples is close to the task of "learning to rank". Therefore, is there a way to use XGBRanker for standard classification? Have you tried it?
PS: I don't think it is important here, but mentioning it just in case: I actually only care about partial AUC (https://en.wikipedia.org/wiki/Partial_Area_Under_the_ROC_Curve), because regions where the false positive rate is too high (say above 20%) are not applicable in my case.
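For what it's worth, a minimal sketch of keeping binary:logistic for training while evaluating on partial AUC via scikit-learn's roc_auc_score(max_fpr=...); the data here are synthetic stand-ins for the ~5%-positive credit data, and the XGBClassifier settings are illustrative assumptions:
from xgboost import XGBClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical imbalanced data standing in for the credit-risk set (~5% positives)
X, y = make_classification(n_samples=20_000, n_features=20, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Keep the usual probabilistic objective for training ...
clf = XGBClassifier(objective="binary:logistic", n_estimators=300, max_depth=4).fit(X_tr, y_tr)
p = clf.predict_proba(X_te)[:, 1]

# ... and evaluate / select models on (partial) AUC, which only uses the ranking of p
print(roc_auc_score(y_te, p))               # full ROC AUC
print(roc_auc_score(y_te, p, max_fpr=0.2))  # standardized partial AUC over FPR <= 0.2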
|
I conducted a moderation analysis on repeated-measures data using the MEMORE macro for SPSS (https://www.akmontoya.com/spss-and-sas-macros). However, I need standardized effect sizes but I haven't managed to figure it out and it's quite urgent.
So each participant read 2 character descriptions about a healthy (Condition C1) and unhealthy (Condition C2) male (independent variable) and had to judge the likability (outcome). They also had to score their gender system justification beliefs once (moderator).
MEMORE recalculates the outcome by taking a difference score of likability_C1 - likability_C2 at various levels of the moderator and then calculates a t-statistic to check significance.
I got this output, with a mean-centered moderator:
Conditional Effect of 'X' on Y at values of moderator(s)
SystemX Effect SE t p LLCI ULCI
-1,2917 ,7257 ,0877 8,2786 ,0000 ,5535 ,8980
,0000 ,4285 ,0620 6,9172 ,0000 ,3068 ,5503
1,2917 ,1314 ,0877 1,4984 ,1347 -,0409 ,3036
How do I now get effect sizes?
|
In the book Mathematical Methods for Physics and Engineering it is said that the likelihood function tends to a Gaussian (centred on the maximum-likelihood estimate) in the large sample limit. The way it is phrased makes it seem like they are saying this is due to the central limit theorem, but I am struggling to see how it is relevant. It relies on the random variable being a sum of a sequence of other random variables, which I don't think is the case here.
I believe this is often misunderstood, for example in this question, which I have several problems with. The arguments use the central limit theorem to find the distribution of the likelihood and show that it is asymptotically normal. However, we are not interested in its distribution as a random variable; we instead care about its functional form as the parameters are varied for given observed sample values.
As an example of what I mean, suppose we draw $n$ sample values $x_i$ from a distribution $P(x|\tau)=(1/\tau)\exp(-x/\tau)$. The likelihood function is then $$L(\boldsymbol{x};\tau)=P(x_1|\tau)P(x_2|\tau)\dots P(x_n|\tau)=\frac{1}{\tau^n}\exp{\left[-\frac{\sum_i x_i}{\tau}\right]}.$$
Suppose we now evaluate this using the observed values of $x_i$ and consider it as a function of $\tau$. In general this will obviously be different every time, but the book says that in the limit $n\to\infty$, the function tends to a Gaussian with its peak centred on the maximum likelihood estimate $\hat{\tau}$ and width inversely proportional to $\sqrt{n}$. Why should we expect this to be the case?
|
I am looking for ways of estimating or mitigating the risk of applying a classification model (say logistic regression, for simplicity) to a certain population (the inference set) that is known to be different from the training population. We know that the metrics estimated on the test set are not directly applicable to the inference set, since many of the features have different distributions. We are struggling to find ways of measuring how the model will be impacted, whether/how the measured metrics can be translated to this inference set, and which actions we should take before applying the model. One idea was to build a simple distance-based model to first filter cases that are close to the training set, but it ends up filtering out too many cases, so I am open to suggestions :)
@Update
I have tried the following procedure to get a better understanding of my data:
Selected the N most important features of my model
Trained an IsolationForest with the inference set and predicted on the training set
Trained an IsolationForest with the training set and predicted on the inference set
Compared the % rejections between steps 2 and 3. The rejection rate is much lower in step 3, which leads me to believe that the inference set is "contained" by the training set, despite the distributions being different.
In this scenario, it should be safe(ish) to apply my model
Can you please criticize the approach above?
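For illustration, a minimal sketch of the cross-scoring procedure from the update (steps 2-4), with hypothetical feature matrices assumed to be already restricted to the most important features:
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 8))              # placeholder training-set features
X_infer = rng.normal(scale=0.8, size=(3000, 8))   # placeholder inference-set features (assumed narrower)

def rejection_rate(fit_on, score_on):
    # Fit on one population and count how often points from the other are flagged as outliers (-1)
    iso = IsolationForest(random_state=0).fit(fit_on)
    return float(np.mean(iso.predict(score_on) == -1))

print(rejection_rate(X_infer, X_train))   # step 2: fit on the inference set, score the training set
print(rejection_rate(X_train, X_infer))   # step 3: fit on the training set, score the inference set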
|
I have an lme model with a significant categorical variable. I have recently been advised to test for autocorrelation in the residuals of that model separately for each level of that variable. The result is that some levels show autocorrelation and some don't, and I am not able to deal with this autocorrelation by specifying relevant correlation structures.
I am not entirely sure whether it is really necessary to perform these tests independently. For illustration, I have attached a fairly random example using the sp::meuse data set. Any opinions will be much appreciated!
easypackages::libraries("sp", "tidyverse", "gstat")
data("meuse")
mod <- lm(lead ~ soil*elev, data = meuse)
summary(mod)
meuse$E1 <- resid(mod)
coordinates(meuse) <- c("x", "y")
###separate tests
v1 <- variogram(E1 ~ x + y, data = meuse[meuse$soil == "1",]) %>%
mutate(soil = "1")
v1.fit <- fit.variogram(v1, vgm(psill = 8000,
model = "Sph",
range = 1500,
nugget = 2000))
vario_line1 <- variogramLine(v1.fit, maxdist = 1800) %>%
mutate(soil = "1")
v2 <- variogram(E1 ~ x + y, data = meuse[meuse$soil == "2",]) %>%
mutate(soil = "2")
v2.fit <- fit.variogram(v2, vgm(psill = 3000,
model = "Sph",
range = 1200,
nugget = 500))
vario_line2 <- variogramLine(v2.fit, maxdist = 1800) %>%
mutate(soil = "2")
v3 <- variogram(E1 ~ x + y, data = meuse[meuse$soil == "3",]) %>%
mutate(soil = "3")
v3.fit <- fit.variogram(v3, vgm(psill = 300,
model = "Sph",
range = 50,
nugget = 10))
vario_line3 <- variogramLine(v3.fit, maxdist = 1800) %>%
mutate(soil = "3")
mrg_fit <- rbind(v1, v2, v3)
mrg_line <- rbind(vario_line1, vario_line2, vario_line3)
ggplot() +
geom_point(aes(x = dist, y = gamma), data = mrg_fit) +
geom_line(aes(x = dist, y = gamma), data = mrg_line, color = "blue") +
facet_wrap(~soil, scales = "free_y") +
labs(x = "Distance", y = "Semi-variogram")
###combined test
v <- variogram(E1 ~ x + y, data = meuse)
v.fit <- fit.variogram(v, vgm(psill = 8000,
model = "Sph",
range = 1500,
nugget = 2000))
vario_line <- variogramLine(v.fit, maxdist = 1800) %>%
mutate(soil = "1")
ggplot() +
geom_point(aes(x = dist, y = gamma), data = v) +
geom_line(aes(x = dist, y = gamma), data = vario_line, color = "blue") +
labs(x = "Distance", y = "Semi-variogram")
|
So, I need to do some exploratory data analysis and I picked MDS to figure out whether there are trends in the data.
The structure of my data looks like this:
$ Generation: int 2 2 2 2 2 2 2 2 2 2 ...
$ Panel : chr "A" "A" "A" "A" ...
$ Line : int 1 1 1 1 1 1 1 1 1 1 ...
$ Rep : int 2 2 2 2 2 2 2 2 6 6 ...
$ Sex : chr "F" "F" "F" "F" ...
$ Size : num 1662 1720 1721 1778 1565 ...
$ ILD12 : num 1930 1954 1947 1932 1915 ...
$ ILD15 : num 1524 1567 1575 1539 1528 ...
$ ILD18 : num 427 414 420 389 418 ...
$ ILD23 : num 732 706 702 733 749 ...
$ ILD25 : num 1380 1386 1383 1393 1391 ...
$ ILD29 : num 1544 1584 1554 1568 1531 ...
$ ILD37 : num 1586 1546 1575 1568 1611 ...
$ ILD39 : num 2070 2060 2046 2061 2060 ...
$ ILD46 : num 1515 1481 1498 1493 1532 ...
$ ILD49 : num 1970 1973 1953 1971 1962 ...
$ ILD57 : num 673 695 705 691 697 ...
$ ILD58 : num 1117 1166 1172 1164 1127 ...
$ ILD67 : num 192 194 188 196 178 ...
$ ILD69 : num 611 644 623 642 585 ...
$ ILD78 : num 522 552 531 545 497 ...
$ ILD89 : num 97.5 99.2 97.9 99.9 96.9 ...
How would I deal with the categorical data in my dataset if I am using R to analyse the data? I am also using ggplot2 - would I just fit a model first using cmdscale and then plot the x and y coordinates?
So something like this:
ggplot(df, aes(x=x, y=y, color = Panel)) +
geom_point() +
ggtitle("Metric MDS Results") +
labs(x="Coordinate 1", y="Coordinate 2")
theme_bw()
Am I correct to assume that the color parameter in ggplot shows the similarity of the categorical variable Panel?
|
I want to conduct a meta-analysis of single means. However, these means are restricted mean survival times (RMST) of cerebrospinal fluid shunts inserted to treat hydrocephalus.
For this, I have digitized published survival curves with https://apps.automeris.io/wpd/.
Then I have extracted the individual patient data with the R package IPDfromKM.
Finally, I have reconstructed the survival curve and calculated the RMST at 12 months as described here: https://stackoverflow.com/questions/43173044/how-to-compute-the-mean-survival-time.
I now have the following RMST values and associated standard errors.
study <- c("study1", "study2", "study3", "study4", "study5")
n_patients <- c(535, 209, 111, 599, 434)
rmst_12 <- c(10.54759, 11.36175, 10.50244, 10.51183, 8.716552)
se_12 <- c(0.1463532, 0.1439506, 0.3246873, 0.1471398, 0.2374582)
> data
study n_patients rmst_12 se_12
1 study1 535 10.54759 0.1463532
2 study2 209 11.36175 0.1439506
3 study3 111 10.50244 0.3246873
4 study4 599 10.51183 0.1471398
5 study5 434 8.716552 0.2374582
Now I am performing the meta-analysis of my RMST.
# Compute standard deviation (SD) from standard error (SE) ----
data$sd_12 <- data$se_12 * sqrt(data$n_patients)
# Load useful package
require(meta)
# Compute the meta-analysis with the metamean function ----
mm_12 <- metamean(n = n_patients,
                  mean = rmst_12,
                  sd = sd_12,
                  studlab = study,
data = data,
method.mean = "Luo",
method.sd = "Shi",
sm = 'MRAW',
random = TRUE,
warn = TRUE,
prediction = TRUE,
method.tau = "REML")
> mm_12
Number of studies combined: k = 5
Number of observations: o = 1888
mean 95%-CI
Common effect model 10.5757 [10.4247; 10.7268]
Random effects model 10.3373 [ 9.4868; 11.1878]
Prediction interval [ 7.0212; 13.6534]
Quantifying heterogeneity:
tau^2 = 0.8975 [0.2937; 7.7528]; tau = 0.9474 [0.5419; 2.7844]
I^2 = 95.6% [92.3%; 97.5%]; H = 4.78 [3.61; 6.34]
Test of heterogeneity:
Q d.f. p-value
91.39 4 < 0.0001
Details on meta-analytical method:
- Inverse variance method
- Restricted maximum-likelihood estimator for tau^2
- Q-Profile method for confidence interval of tau^2 and tau
- Prediction interval based on t-distribution (df = 3)
- Untransformed (raw) means
Everything works fine, but I am wondering whether this is correct from a methodological point of view.
Thank you in advance for your help.
PS. I am unsure if it is the right place for this post.
Charles
|
I'm working on Cox regression in my PhD research and I would like to know some references about applying the stratified extended Cox regression model to real-life data.
I'm interested in combining the two approaches, stratification and the extended Cox PH model, in a single model and not separately.
|
I have used RandomForestClassifier from Sklearn to solve a multiclass classification problem (12 classes in total). I get my x and y from a pandas dataframe.
label_bin = LabelBinarizer()
unique_classes = np.unique(y)
label_bin.fit_transform(unique_classes)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=1)
classifier_RF = RandomForestClassifier(n_estimators=100,
criterion='entropy',
min_samples_split=2,
min_samples_leaf=1,
random_state=1)
y_train_one_hot = label_bin.transform(y_train)
classifier_RF.fit(x_train, y_train_one_hot)
y_pred = classifier_RF.predict(x_test)
# Converting predictions to original form
y_pred_orig = label_bin.inverse_transform(y_pred)
Here is a normalized Confusion Matrix obtained from y_test and y_pred_orig:
And if I make predictions using x_train, like this:
y_train_pred = classifier_RF.predict(x_train)
y_train_pred = label_bin.inverse_transform(y_train_pred)
and get a confusion matrix using y_train and y_train_pred, this is the result:
Question: By looking at both confusion matrices, can I confirm that my model is overfitting (can't generalize to new unseen data)? If this isn't proof enough, how can I be sure whether overfitting is really happening (or not)?
As additional info, this is the classification_report:
precision recall f1-score support
0 0.22 0.96 0.35 2863
1 0.84 0.17 0.29 1918
2 1.00 0.98 0.99 1987
3 0.97 0.02 0.04 2020
4 0.33 0.00 0.00 1928
5 0.97 0.49 0.65 1995
6 0.84 0.26 0.39 1951
7 0.98 0.99 0.99 1997
8 0.94 0.74 0.83 1987
9 0.99 0.99 0.99 1967
10 0.96 0.71 0.81 1985
11 0.79 0.32 0.46 1916
accuracy 0.57 24514
macro avg 0.82 0.55 0.57 24514
weighted avg 0.80 0.57 0.56 24514
EDIT: Here is how I got the log loss values (not sure if these are the correct steps):
y_train_pred_prob = classifier_RF.predict_proba(x_train)
y_train_pred_probb = np.concatenate([arr[:, 1].reshape(-1, 1) for arr in y_train_pred_prob], axis=1)
log_loss_train = log_loss(y_train, y_train_pred_probb)
y_test_pred_prob = classifier_RF.predict_proba(x_test)
y_test_pred_probb = np.concatenate([arr[:, 1].reshape(-1, 1) for arr in y_test_pred_prob], axis=1)
log_loss_test = log_loss(y_test, y_test_pred_probb)
print('Logloss Train:', log_loss_train)
print('Logloss Test:', log_loss_test)
Logloss Train: 0.20318181358124715
Logloss Test: 0.9682325123617269
|
I got contrary results from my log-rank test (not significant) versus my Cox regression (significant) regarding the effect of my treatment variable, with the aim being hypothesis testing. That's no real wonder, considering that the Cox regression adjusts for 6 covariates other than my treatment. However, I'm confused about the proposition I can make. Without putting too much emphasis on the statistical significance, is the proposition correct that the treatment has a significant effect on survival?
I know that the log-rank test equals a univariate Cox regression (only considering treatment) and that it's criticized by experts (The logrank test statistic is equivalent to the score of a Cox regression. Is there an advantage of using a logrank test over a Cox regression?). Further, the statement made by the log-rank test compares the survival curves, while the Cox regression models the relationship of the variables used to the survival time. But what is the implication if the aim is hypothesis testing?
|
I have a dataset containing a fair amount of continuous and categorical variables. I one-hot encode these variables to be used in various machine learning algorithms.
Let's presume a variable has n categories, which we one-hot encode into n columns. If we work with penalized models, we want to standardize all variables. However, when we standardize a one-hot encoded variable, for one variable, we get n standardized columns. Does this mean we are giving an advantage to categorical variables in terms of regularization, especially if a variable has many categories?
This problem seems especially relevant when using KNN algorithms (not only for prediction but also imputation). Without standardization, the distances would be biased towards high-valued variables. However, the distances seem to become biased towards categorical data when we standardize, especially if the variable has many categories. If, say, we have a binary categorical variable with an equal number of samples in each category, after standardization 0's would be replaced with -1's, and 1's would remain 1's. Then, the Euclidean distance between two samples with a different categorical value ([-1, 1] and [1, -1]) would be 2*sqrt(2), whereas the distance between a standardized continuous variable would likely be closer to one or two standard deviations (1-2), presuming it's normally distributed (is that an important assumption in this case?).
Following the above logic, a simple solution that comes to mind is dividing each one-hot encoded column by the number of categories. So, in the above example, the distance between two samples with a different binary category would be sqrt(2) ([-0.5, 0.5] and [0.5, -0.5]). That way, the total distance between samples seems to be more evenly distributed between variables, with less bias towards categorical variables with a large number of categories. Another solution that comes to mind is, instead of standardizing categorical variables, simply replacing the 0's with -1/n, and the 1's with 1/n.
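To illustrate the distance argument numerically (a sketch with simulated values, not tied to any particular dataset):
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Balanced binary category, one-hot encoded into 2 columns, then standardized (0/1 -> roughly -1/+1)
cat = rng.integers(0, 2, n)
onehot = np.column_stack([cat == 0, cat == 1]).astype(float)
onehot_std = (onehot - onehot.mean(0)) / onehot.std(0)

cont = rng.standard_normal(n)            # a continuous feature that is already ~standardized

pairs = rng.integers(0, n, size=(20_000, 2))
d_cat = np.linalg.norm(onehot_std[pairs[:, 0]] - onehot_std[pairs[:, 1]], axis=1)
d_cont = np.abs(cont[pairs[:, 0]] - cont[pairs[:, 1]])

print(d_cat.mean())        # ~1.41: 2*sqrt(2) for roughly half the pairs, 0 for the rest
print(d_cont.mean())       # ~1.13: mean absolute difference of two standard normals
print((d_cat / 2).mean())  # ~0.71: the "divide by the number of categories" adjustment from above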
Naturally, simply treating all of these ideas as hyperparameters would likely be the "best" solution when trying to get the best model, but I'm interested if there's any literature on the subject. Has anyone had any experience with this?
Thanks.
|
Suppose $x$ is an isotropic random variable in $\mathbb{R}^d$ with $E[\|x\|^2]=d$ and $v$ is some vector. It appears that $\sum_i x_i^2 v_i \approx \sum_i v_i$ as $d \to \infty$.
What is an easy way of showing it?
The hard way is to follow this answer or this paper.
|
I am currently running a mixed log-linear model which is in this form:
$\log y_{it} = X_{it} + X_{it}^2 + (1 \mid individu)$
I suspect multicollinearity (cor($X_{it}$, $X_{it}^2$) is close to 1). Do you think it makes sense to scale the explanatory variable $X_{it}$? Or is there another alternative? If yes, how should I interpret the result (1 SD?)?
I already tried to linearise ($\log y_{it} = \log X_{it} + (\log X_{it})^2 + (1 \mid individu)$), but the result is not better.
Thanks
|
Suppose I'm running a regression that looks something like
$\log(price)=\beta_0 + \beta_1\log(X)+\beta_2\log(X)^2$.
I have found the residuals, grouped them according to the number of sellers in the observation's town, and calculated the mean residual for each group. Suppose the residuals are 0.05 for 1 seller, -0.01 for 2 sellers, -0.02 for 3 sellers.
I want to make a statement about the %markup to the average price for each group.
Since these are logged residuals, can I just interpret the mean residual as the %markup from the average price? (eg. there is a 5% markup from the average price when there is 1 seller, -1% when there are 2, etc.)
Or will I need to calculate the %change by comparing the average+markup to the average? Which would take the form of: $100×\frac{residual}{meanLogPrice}?$
|
We know that for a sample (assume it's a data set that has two variables $x$ and $y$ of size $n$),
$$R = \frac1{n-1}\sum_{i=1}^n\left(\frac{x_i-\overline{x}}{s_x}\right)\left(\frac{y_i-\overline{y}}{s_y}\right)$$
Say we add in a data point $(\overline{x}, \overline{y})$ to the sample, which lies on the linear regression trendline of the sample (?).
We can mathematically see this actually decreases the $R$ value (the sum portion for this data point is $0$, but $n$ increases by $1$ so the denominator increases). However, I cannot intuitively understand why.
Is there an intuitive explanation for this?
Thanks!
|
After logistic regression on the cross-sectional data sets, the link test _hatsq is insignificant. However, when I pool the same two data sets, the link test for the regression using the same set of variables gives a significant _hatsq, which indicates specification error in the pooled data regression. Why is this happening?
|
Apologies if some of my terminology is incorrect, I only have relatively basic stats knowledge.
I have two groups of patients where we've done some experiments on the electrical conduction within the heart. Within each group, I've got a measurement of the degree of electricity abnormality (fractionation) during different conditions.
Condition A, B and C
For each condition, I've measured abnormal electricity (% of signals that were fractionated) and because of the way we've measured, we cannot get specific values, only stratified into a range. The range is 0-24%, 25-49%, 50-74%, or 75-100%. I want to show that the degree of fractionation during condition A correlates or doesn't correlate to degree of fractionation during condition B and/or C.
I can't quite figure out which test to use in order to do this. Could someone help?
I'm using GraphPad Prism as my stats software. I'm guessing very few/no one here uses that, so I just need the name of the test/an explanation of how to test for it, and I'll figure it out within the software.
|
Let $X_1,\ldots,X_n$ be i.i.d. log-normal random variables such that $$\log(X_i)\sim N(\mu,\sigma^2)\ \ \forall i=1,\ldots,n$$
Now let $Y$ be equal to the $\min(X_1,\ldots,X_n)$. It is quite easy to obtain the relation between the corresponding CDFs: $$P(Y<y)=F_Y(y)=1-[1-F_X(y)]^n$$
Is there a closed form to calculate parameters $\mu_y,\sigma_y$ of a log-normal variable $Y$ given parameters $\mu, \sigma$ of a log-normal variable $X$? And vice versa (find $\mu,\sigma$ given $\mu_y,\sigma_y$)?
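A quick way to check any candidate formula numerically would be something like this (a sketch; the parameter values are arbitrary):
set.seed(1)
n <- 5; mu <- 0; sigma <- 1
# Simulate the minimum of n log-normals many times
y <- replicate(1e5, min(rlnorm(n, meanlog = mu, sdlog = sigma)))
# Inspect how close log(Y) is to normal, and what its moments look like
qqnorm(log(y)); qqline(log(y))
c(mean(log(y)), sd(log(y)))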
|
I am confused about how to estimate the variance of a classifier. Currently, I have split my data into training and test sets and used the training data with a k-fold cross-validation strategy to get the best model.
Then, I could use the whole training set to train the selected model and then evaluate it on the test set, but I don't find that very informative, because it gives no variance estimate of the model!?
So, how is the variance usually estimated? Maybe another k-fold cross-validation on the whole data set? But in that case, it seems to me that the original test set from the original split was just never used and is therefore useless.
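For reference, this is the kind of repeated k-fold estimate I am contemplating (a rough sketch, assuming a data frame dat with a 0/1 outcome y; a logistic regression stands in for my actual classifier):
set.seed(1)
K <- 5; R <- 10
scores <- replicate(R, {
  folds <- sample(rep(1:K, length.out = nrow(dat)))
  sapply(1:K, function(k) {
    fit <- glm(y ~ ., family = binomial, data = dat[folds != k, ])
    p   <- predict(fit, dat[folds == k, ], type = "response")
    mean((p > 0.5) == dat$y[folds == k])   # accuracy on the held-out fold
  })
})
mean(scores); sd(scores)   # spread across folds/repeats as a variance estimate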
|
The calculation of the MAD (Mean Absolute Deviation)/Mean ratio is this, according to the title:
$$
\frac{\overline{\left | Forecast - Demand \right |}}{\overline{Demand}}$$
However, the calculation is often shown as this, because of the derivation from wMAPE (weighted Mean Absolute Percentage Error), with sum not average:
$$
\frac{1}{\sum Demand}\sum Demand\,\frac{\left | Forecast - Demand \right |}{Demand}=\frac{\sum\left | Forecast - Demand \right |}{\sum Demand}
$$
I understand why it's shown like that, because of the derivation, but that means that the title (mean) and the calculation (sum) don't match up. Are there pros and cons to using the mean or the sum? The key ones that I can think of for using the average are:
Fair comparison between numerators and denominators of different populations, because sum is affected by number of observations
Average is less affected by missing data than sum is, when you apply that aggregation across a column
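To make the comparison concrete, both versions can be computed side by side (a toy sketch with made-up numbers):
demand   <- c(100, 120,  80, 150)
forecast <- c( 90, 130,  85, 140)
# "mean" version: MAD / mean demand
mean(abs(forecast - demand)) / mean(demand)
# "sum" version, as derived from wMAPE
sum(abs(forecast - demand)) / sum(demand)
# With complete data over the same observations the two coincide (the 1/n cancels);
# they only differ once observations are missing or aggregated over different counts.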
|
Generally, my default practice in regression for nominal categorical variables, including race, is to use dummy coding, with the majority/plurality level as reference. Interpretation of the model coefficients using this scheme is straightforward.
Additionally, I typically view comparisons to the majority/plurality group as most relevant, and for this coding scheme those comparisons are simply evident in the estimated coefficients and it's straightforward to test only these comparisons. They are also the best-sampled pairwise comparisons (i.e., the tests with the most power for given effect size). For a sample in the US population, that usually means white race is the reference category.
Recently, a colleague of mine received some criticism for this approach, arguing that using "white" as a default category propagates bias that "white" is normal/typical, and that it's better to look at difference between each group to "all" as a reference, or perhaps to choose the measured maximum/minimum category as a reference (whichever is preferable with respect to the dependent variable).
I appreciate the sentiment behind this, but the interpretation seems flawed to me. For an outcome where disparities are expected due to racism or bias, a comparison of one race category to the mean across all categories (weighted or not) seems to dilute the size of any disparities that are present across more than one non-majority race. Planning contrasts only with the "best" point estimate could mean the comparison group is likely to be undersampled and introduces selection bias. Unfortunately, I wasn't present and was unable to follow up with the person raising an objection as to what specifically they are proposing.
Am I missing some alternative? I'd be interested in any supported proposals of best-practices for handling these types of variables. I understand the use of "race" as a variable is unfamiliar/unusual to many people outside the US and would prefer not to relitigate those issues here: from my perspective, perceived race is not useful as a biological variable, but is nonetheless important because it impacts how people are treated by others in society and therefore affects health and healthcare.
A colleague suggested the criticism may have been motivated by papers like https://journals.sagepub.com/doi/abs/10.1177/0081175020982632 that suggests use of mean contrasts or binary contrasts. That would help answer the alternative I'm missing, but I'm still a bit uncertain with these suggestions, as they still seem to bring other problems with interpretation.
|
I have read that the Vuong test is no longer considered appropriate for testing whether a ZINB fits better than a negative binomial model, because the comparison is neither strictly non-nested nor partially non-nested. I'm modeling in R (glmmTMB) and there is no Vuong (nested) option that I can find. If the NB is nested in the ZINB (when ziformula=~0), is it ok to use the chi-square goodness of fit test? My AIC is considerably lower for the ZINB vs the NB, but what is the correct test for the two models? I do have theoretical reasons for why there would be structural zeros.
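For concreteness, the kind of nested comparison I am considering is a likelihood-ratio test via anova() (a sketch with placeholder names count, x1, x2, and data frame dat; I am unsure whether the likelihood-ratio p-value is fully valid here, since the no-zero-inflation model sits on the boundary of the parameter space):
library(glmmTMB)
m_nb   <- glmmTMB(count ~ x1 + x2, ziformula = ~0, family = nbinom2, data = dat)  # plain NB
m_zinb <- glmmTMB(count ~ x1 + x2, ziformula = ~1, family = nbinom2, data = dat)  # ZINB
anova(m_nb, m_zinb)   # likelihood-ratio comparison of the nested models
AIC(m_nb, m_zinb)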
|
I have started reading this paper, and do not understand a line in its first paragraph in the introduction. He says:
Indeed, as far as the mean square error is concerned, Gaussian distributions represent already the worst case, so that in the framework of a minimax mean least square analysis, no need is felt to improve estimators for non-Gaussian sample distributions.
So far I know that in the non-parametric setting, the sample mean is a minimax estimator of the expected value of a random variable when using the squared loss, for the class of distributions with finite variance (see Bickel and Doksum, 2015, Example 3.3.4).
But, I do not understand what he means by "Gaussian distributions already represent the worst case".
|
Let $A_t$ and $B_t$ be $I(1)$ processes and assume that they are co-integrated, i.e. there exists $\beta$ such that $A_t - \beta B_t$ is $I(0)$. Wooldridge's Introductory Econometrics text (5th edition, page 646) claims that if we fix $A_t$'s coefficient at 1, then $\beta$ is unique.
A naive proof would be to assume to the contrary that there exists $\delta \ne \beta$ such that $A_t - \delta B_t$ is $I(0)$, then take the difference of the two $I(0)$ processes to get $(\beta - \delta)B_t$. If the difference of two $I(0)$ processes is also $I(0)$, then we reach a contradiction, because $(\beta - \delta)B_t$, which is a scalar multiple of $B_t$, cannot be $I(0)$.
The issue is that it is generally not true that the difference of two $I(0)$ processes is also $I(0)$, unless they are jointly stationary. See here.
Is there something that I am missing?
Edit: Rephrased the question, and as pointed out by @Richard Hardy, stationarity is not necessarily the same as $I(0)$, and the post I linked to deals with stationarity.
|
I am using hierarchical GAMs to examine the effect of weather covariates (n=21) on annual bird counts. The hierarchical part is to account for nested observations with s(site, bs="re") and temporal autocorrelation with s(year, by=site, m=2). None of my covariates have significant smooths in the global model, so I have fit them all as parametric terms. My model looks as follows:
m = gam(nests ~ x1 + x2 + x3 + .... + x21 + s(site, bs="re") + s(year, by=site, m=2), data=data, family=poisson, method="REML")
My question is:
How do I perform selection on GAMs if all covariates of interest are parametric terms? I can't treat the parametric terms as low-degree smooths and use select=TRUE (as recommended here) due to concurvity issues, and I'm apprehensive to use the paraPen argument to penalize the parametric terms because it makes approximate p values unreliable.
Any help is greatly appreciated!
|
We have a Linear Hierarchical Model where
$$Y_i | \theta_i \sim N(\theta_i,1)$$
$$\theta_i | A \sim N(0,A)$$
with
$$Y_i |A \sim N(0,A+1)$$
where $ i = 1,2,\ldots,k.$
I found the likelihood function for $A$ by using the marginal distribution of the $Y_i$'s, where
$$L(A+1) = \prod_{i=1}^k f_{Y_i}(y_i | A+1)$$
$$ = [2\pi(A+1)]^\frac{-k}{2} e^{-\frac{1}{2(A+1)}\sum_{i=1}^k y_i^2}I(A \geq 0) $$
From this, I found the MLE for $A$, which is
$$\hat{A} = \max \left(\frac{\sum_{i=1}^k y_i^2}{k} - 1,\ 0 \right) $$
where the maximum with $0$ is taken because $A$ is a variance and must satisfy $A \geq 0$.
However, I do not know how to proceed with a generalized likelihood ratio to find the test statistic. I don't want to use a large sample approximation by taking the log to get a chi squared distribution. Apparently the Generalized likelihood ratio is supposed to evaluate to a known distribution. I want to test the hypothesis that
$$H_0: A = 0$$
$$H_1: A > 0$$
and I'm not sure how to proceed.
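To be explicit, the statistic I am trying to evaluate is the generalized likelihood ratio
$$\lambda(y)=\frac{\sup_{A=0}L(A)}{\sup_{A\geq 0}L(A)}=\frac{L(0)}{L(\hat{A})},$$
and I would like its exact distribution under $H_0$ (presumably something expressible through $\sum_{i=1}^k Y_i^2\sim\chi^2_k$ when $A=0$), rather than the asymptotic $\chi^2$ approximation for $-2\log\lambda$.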
|
Suppose $X_1$ is a random variable. I want to test the hypothesis $$ H_0:X_1\sim \delta_0 \textrm{ vs } H_1:X_1\not \sim \delta_0.$$
where $\delta_0$ is the Dirac measure at $0$. As test statistic I take $T(x)=x$. If I get the sample $x_1=0$, then I would calculate the p-value through
$$2\cdot\min\{P(T(x_1)\geq T(X_1)|H_0),P(T(x_1)\leq T(X_1)|H_0)\}
=2\cdot\min\{P(0\geq 0|H_0),P(0\leq 0|H_0)\}=2.$$
I don't understand how I can get a p-value over 1, if the p-value is supposed to represent the probability of an "even more extreme event regarding T" under the null hypothesis. Maybe if the distribution of the test statistic under the null hypothesis is discrete, the two-sided p-value has to be calculated through
$$ \min \{2\min\{P(T(x_1)\geq T(X_1)|H_0),P(T(x_1)\leq T(X_1)|H_0)\},1\}.$$
In this situation a p-value of 1 would make sense, but I have not found anything about handling p-values of discrete distributions differently. This example is obviously made up, but I had a similar problem with the hypergeometric distribution, where suddenly the left- and right-sided p-values were both greater than 0.5, so the p-value was greater than 1. I would appreciate some guidance on which mistake I made.
|
I have a dataset with symptoms and their severity (integer: 0-10, over 20 variables), and columns with the chosen treatments (0/1, over 10 variables).
The dataset includes over 1000 records.
A correlation plot seems inconvenient to me.
I am looking for a statistical test and/or visualization for analysing associations between the declared symptoms and the treatments.
I am coding in R.
|
Does anyone know where there is information on fit tests or something that can be done to help choose the best fitting family and link for a glm model? I have panel data that I am trying to run a regression with using a binary dependent variable (0/1). I am trying to decide between a glm with a poisson distribution and log link or a binomial distribution with logit link. Or is it best to just use a logit model?
If anyone knows where there is more information on how to correctly make these choices, either online, through fit tests, or in a specific text book, I would greatly appreciate the help!
|
I perform an LSA with textmodel_lsa from the quanteda.textmodels package in R, but I have little idea how to interpret the results.
A minimal example taken from here
txt <- c(d1 = "Shipment of gold damaged in a fire",
d2 = "Delivery of silver arrived in a silver truck",
d3 = "Shipment of gold arrived in a truck" )
mydfm <- dfm(txt)
mylsa <- textmodel_lsa(mydfm)
Now my questions to the return values:
$sk are the singular values; is there a general rule for what counts as a high/significant singular value?
Is it possible to choose something like "the dimensions that explain 80% of the variability"?
Are the matrices $S$, $V$, $D$ of a decomposition $X = SVD$ represented in the return value?
How can I find 2 or 3 dimensions such that patterns of documents, like clustering, emerge?
same for dimensions. Is it possible to assign words/tokens to the dimensions to give them a topic?
Last one, how can I interpret for example the $feature matrix(rounded):
shipment -0.26 0.38 0.15
of -0.42 0.07 -0.05
gold -0.26 0.38 0.15
damaged -0.12 0.27 -0.45
in -0.42 0.07 -0.05
a -0.42 0.07 -0.05
fire -0.12 0.27 -0.45
delivery -0.16 -0.30 -0.20
silver -0.32 -0.61 -0.40
arrived -0.30 -0.20 0.41
truck -0.30 -0.20 0.41
Does this mean, that "truck" and "arrived" are most associated with dimension 3 (highest value)? What is the difference between negative and positive values?
I tried to learn it from here btu was left with all these questions.
|
I am comparing the difference of medians between two groups of sample sizes $n1$ and $n2$. I would like to confirm that my bootstrap approach for a finite population size, without pooling sample data, correctly provides a distribution function of the differences between samples. Below, I provide examples of the approaches that I've looked at. Approach 1 is provided as a reference (assuming a large population). I would like to confirm that approach 3 is sound, while better understanding how to interpret the differences in results between approaches 2 and 3.
Assuming a large population, I can compute the distribution of medians for each group using bootstrapping with replacement. To check if the observed difference is due to random error, use the following approach:
Approach 1, assume large population
pool the samples from two groups together into a list of length $n1 + n2$,
shuffle the pool,
split the pool into "simulated" groups--cutting the shuffled list into new lists of sizes $n1$ and $n2$,
compute the medians of each simulated pool,
compute the differences of the medians in each pool,
repeat steps 2-5 many times to calculate a set of medians, and
use the resulting cumulative distribution function of set of medians to understand the probability of observing various effect sizes due to chance (i.e., bin and count the results, divide the counts by the total number of resamples).
A similar example of this approach is in A.B. Downey's Think Stats (pg 105).
Now, for a finite population size, A.C. Davison and D.V. Hinkley's "Bootstrap Methods and Their Application" provides methods to modify the resample size when bootstrapping statistics that estimate a population quantity, where the population has a known, finite size (pg 92). For example, given a finite population size, we can adjust the resample size upwards to $n'$ where $n'=(n-1)/(1-n/N)$. Here, $N$ is the population size. (As the sample size approaches the population size, we will have more certainty in the estimate. By adjusting the resample size upwards as $n$ approaches $N$, we tighten the test statistic's distribution to reflect this increased certainty.)
I think that my above steps for shuffling a pool break down, because I'm now working with an $n1'$ and $n2'$ sample size. So I went with the following approach:
Approach 2, fixed population
compute $n1'$ and $n2'$
bootstrap the median test statistics for group 1 and group 2 many times
calculate the difference in medians between the groups (calculated in step 2)
use the empirical/cumulative distribution function of the resulting differences to explore probabilities of observing given differences between the medians.
Is approach 2 correct? (It is similar to Bootstrap sampling for ratio of means with uneven sample sizes) This second approach feels different than the first since I'm not pooling data together. My understanding is that by pooling, I'm testing to see if the two samples could have been generated by the same underlying population. Approach 2 doesn't seem to be accomplishing this since I'm not mixing the data before distributing the data between the two samples.
Approach 3
My intuition is to do somewhat of a hybrid:
pool groups 1 and 2 and then
resample from that pooled group two groups of size $n1'$ and $n2'$, and then
use steps 4 through 7 of approach 1.
If I wasn't adjusting the group sizes for the finite population, I would shuffle the pooled data into new groups (without replacement) as in Approach 1. By resampling with replacement, how should I interpret the results? Is it still correct to think about the fig_bsed_pool_deltas as the probability of observing the delta due to random error? Or is this a misapplication of the technique? One thing that bothers me is that I pool the data, but then use the original group size rather than setting the populations of each group to the sum of population_size_1 and population_size_2.
For reference, here is a toy example with Python code implementing approach 3. Suppose I'm at a middle school where I give the same lecture to both class 1 and class 2, with respective class sizes of 15 and 20 students. I suspect that class 2 likes the course better since I teach that class after I have had my coffee. To assess attitude between the classes, I survey 5 students in class 1 and 10 students in class 2. The responses from class 1 are {1,2,3,4,5}. The responses from class 2 are {2,3,4,5,6,7,2,3,4,5}. I want to know if the attitudes in the two classes taught by this teacher differ, say by more than a certain value x. (In this example, I happen to have ordered categorical responses--say a survey response from 1 to 7).
Set up and Define the inputs:
import numpy as np
import plotly.graph_objects as go
responses_1 = [1,2,3,4,5] #median is 3
responses_2 = [2,3,4,5,6,7,2,3,4,5] #median is 4
population_size_1 = 15
population_size_2 = 20
sam_pop_ratio_1 = len(responses_1)/population_size_1
sam_pop_ratio_2 = len(responses_2)/population_size_2
Approach 3:
def bootstrap_medians_pooled_approach(input_array_1, len_input_array_1, sam_pop_ratio_1,
                                      input_array_2, len_input_array_2, sam_pop_ratio_2,
                                      n_resamples):
    #sample 1: adjust the resample size upwards for the finite population
    adjusted_n_1 = (len_input_array_1 - 1)/(1 - sam_pop_ratio_1)
    ##handle a non-integer adjusted_n_1 by splitting it into base + fraction
    base_adjusted_n_1 = int(adjusted_n_1)
    fraction_adjusted_n_1 = adjusted_n_1 - base_adjusted_n_1
    #create an array of sample 1 resample sizes
    ##alternate sizes at random to account for the fractional part of the adjustment
    adjusted_n_array_1 = [base_adjusted_n_1 +
                          int(np.random.choice([0, 1], size = 1,
                              p = [1 - fraction_adjusted_n_1, fraction_adjusted_n_1]))
                          for x in range(n_resamples)]
    #sample 2 (same setup as above for sample 1)
    adjusted_n_2 = (len_input_array_2 - 1)/(1 - sam_pop_ratio_2)
    base_adjusted_n_2 = int(adjusted_n_2)
    fraction_adjusted_n_2 = adjusted_n_2 - base_adjusted_n_2
    adjusted_n_array_2 = [base_adjusted_n_2 +
                          int(np.random.choice([0, 1], size = 1,
                              p = [1 - fraction_adjusted_n_2, fraction_adjusted_n_2]))
                          for x in range(n_resamples)]
    pooled_array = input_array_1 + input_array_2
    #lists of resampled medians for group 1 and group 2 (with replacement from the pool)
    medians_1 = [np.median(np.random.choice(pooled_array, size = x, replace = True))
                 for x in adjusted_n_array_1]
    medians_2 = [np.median(np.random.choice(pooled_array, size = x, replace = True))
                 for x in adjusted_n_array_2]
    #difference of medians for each resample
    return [m2 - m1 for m1, m2 in zip(medians_1, medians_2)]
n_resamples = 10000
bs_pool_delta = bootstrap_medians_pooled_approach(responses_1, len(responses_1), sam_pop_ratio_1,
                                                  responses_2, len(responses_2), sam_pop_ratio_2,
                                                  n_resamples)
#visualize the distribution of the resampled deltas
fig_bsed_pool_deltas = go.Figure()
fig_bsed_pool_deltas.add_trace(go.Histogram(x = bs_pool_delta))
#explore the chance that a delta of a given size might be observed by random chance
deltas = [0.25 * x for x in range(-28, 28)]
#proportion of resampled deltas at least as extreme as each candidate delta
bsed_p_values_pool = [np.mean(np.abs(bs_pool_delta) >= abs(d)) for d in deltas]
fig_ps_bs = go.Figure()
fig_ps_bs.add_trace(go.Scatter(x = deltas, y = bsed_p_values_pool))
|
Steps followed for testing :
Derive parameter estimates using the fitdistr() function from the MASS package (and fitdist() from fitdistrplus) for the dataset dt, consisting of discrete values varying from 0 to 50.
library(MASS); library(fitdistrplus)
df <- dt[[1]]
nbfit <- fitdistr(df, 'negative binomial')
pfit <- fitdist(df, "pois")
Use parameter estimates for creating reference distribution
fitnb <- dnbinom(0:50, size=0.4, mu=3.9)
fitp <- dpois(0:50, lambda=3.9)
I found that the lambda value is the same as mu.
Get frequencies
t <- table(df)
D <- as.data.frame(t)
observed_freq <- D$Freq
perform the chi-squared test
chisq.test(observed_freq, fitnb, simulate.p.value = TRUE)
chisq.test(observed_freq, fitp , simulate.p.value = TRUE)
Results
for NB
Pearson's Chi-squared test with simulated p-value (based on 2000 replicates)
data: observed_freq and fitnb
X-squared = 476, df = NA, p-value = 0.989
for Poisson
Pearson's Chi-squared test with simulated p-value (based on 2000 replicates)
data: observed_freq and fitp
X-squared = 476, df = NA, p-value = 0.9875
Question:
What can we say about which distribution fits better? Clearly, the X-squared value is the same for both, and we fail to reject the null hypothesis.
Can I perform some other test?
Edit
I also used the lrtest() function (from the lmtest package) and got the following result:
fit_poi <- fitdistr(df,"poisson")
fit_nbin <- fitdistr(df,"negative binomial")
lrtest(fit_poi,fit_nbin)
Result
Model 1: fit_poi
Model 2: fit_nbin
#Df LogLik Df Chisq Pr(>Chisq)
1 1 -2537.7
2 2 -1159.3 1 2756.6 < 2.2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
With this, can we say the negative binomial performs better?
Histogram of actual data used in the test (Note: Codes differ a little based on this dataset)
Actual data
value count
0 205
1 65
2 40
3 40
4 32
5 18
6 15
7 11
8 8
9 9
10 7
11 6
12 8
13 3
14 1
15 8
16 2
18 2
19 2
20 5
21 1
23 2
24 1
25 1
29 1
31 1
32 2
35 1
36 1
42 1
43 1
45 1
50 1
53 1
|
I'm writing my thesis on private equity funds' financial performance and ESG integration. The data is gathered from funds at a single point in time and is therefore cross-sectional data.
I want to run an OLS regression on the below dataset. I was a bit in doubt if I should use a logit model, however, as I understand it, a logit model is only applicable if the dependent variable is binary, which mine is not. I have therefore chosen to use a linear multiple regression model. I am running the models in STATA.
My dataset includes 2,300 funds. My variables are:
Dependent variable: IRR (financial performance) as a continuous variable.
Independent variable: Nominal categorical variable (1: Traditional funds, 2: ESG funds, 3: high ESG funds)
Control variables:
Maturity in years: Integer
Fund size: Integer
Industry: 6 binary variables
Geography: 7 binary variables
Strategy: 6 binary variables
Is it possible to run a fixed effects model (maturity, fund size, and industry fixed effects) on this model? Or is this not possible with cross-sectional data? I have a difficult time figuring out whether it's possible and have really looked through everything. I hope someone can help!
|
I have the following model. Let $BP$ be a continuous variable such that the true relationship is $BP=144+0.5\,age+4\,sex+3\,gene+\epsilon$. Suppose that for subjects with $BP>160$ there is a probability of 0.5 of getting some treatment, with an effect following a truncated normal distribution with mean -15 and standard deviation $sd=2$, where the treatment effect is defined only on the $<0$ part of the real axis.
In the data, the treatment will distort the true relationship, so I want to correct for the treatment effect. I was initially thinking of a Bayesian approach. However, I encountered convergence issues with the truncated normal density. I will write $N(\mu,s^2)$ for the normal distribution with mean $\mu$ and variance $s^2$, and the same notation will also denote its density function. In particular, for a treated subject's observed $BP_o$, the density is proportional to $\int^\infty_{BP_o}N(\mu,s^2)(y)dy$, where $\mu$ follows some $N(\beta_0+\beta_1age+\beta_2sex+\beta_3gene,\sigma^2)$, $s$ follows a half-t distribution with 3 df, and the $\beta$'s follow some joint normal distribution centered at $0$. The Stan code will not converge in this case because $y-\mu$ is too large in comparison to $s$.
Thus I want to estimate the treatment-effect standard deviation ($sd=2$) so I can give a reasonably informative prior, which might give the model a chance to converge.
I have considered the following.
If I can match treated and untreated people with $BP>160$ by propensity score and apply linear regression to the matched data, then I should be able to extract the mean treatment effect. That also gives $s_1^2=\sigma^2+sd^2$, the combined variance of the treatment effect and the inherent $\epsilon$ variation. The variance $\sigma^2$ of $\epsilon$ can be preliminarily estimated by a standard linear model on the whole observed data. By taking the square root of the difference of the two variances, $\sqrt{s_1^2-\sigma^2}$, I would hope to more or less recover $sd$. This does not seem to be the case.
library(dplyr); library(MatchIt); library(EnvStats)  # rnormTrunc() comes from EnvStats
nsim=1200
age=runif(nsim,min=55,max=74)
sex=rbinom(nsim,size=1,prob=0.5)
gene=rbinom(nsim,size=1,prob=0.51)
sig_err=21
err=rnorm(nsim,mean=0,sd=sig_err)
beta0=144;beta1=0.5;beta2=4;beta3=3
Z=beta0+beta1*age+beta2*sex+beta3*gene+err
betas=c(beta0,beta1,beta2,beta3)
list_hyp=which(Z>160)
num_hyp=length(list_hyp)
treat=rbinom(num_hyp,1,prob=0.5)
old_Z=Z
Z[list_hyp]=Z[list_hyp]+treat*rnormTrunc(num_hyp,mean=-15,sd=2,max=0)
hyp=data.frame(id=1:nsim,age=age,sex=sex,gene=gene,BP=Z,true_BP=old_Z)
hyp$censor=1
hyp$censor[list_hyp[as.logical(treat)]]=0
data=list(N=dim(hyp)[1],
X=as.matrix(hyp[,c('age','sex','gene')],ncol=3),
censor=hyp$censor,
BP=hyp$BP,
hyp_diag=160
)
hyp_t=hyp %>% filter(censor==0)
hyp_u=hyp %>% filter(BP>data$hyp_diag&censor!=0)
hyp_dat=rbind(hyp_t,hyp_u)
m.out=MatchIt::matchit(censor~age+gene+sex+BP,data=hyp_dat,
method='full',distance='glm',link='logit',estimand = 'ATC')
m.dat=match.data(m.out)
z2=summary(lm(BP~censor+age+gene+sex,weights=weights,data=m.dat))
z3=summary(lm(BP~age+gene+sex,data=hyp %>% filter(censor!=0&BP<160)))
sqrt(z2$sigma^2-z3$sigma^2) ##trying to estimate treatment sd
$Q1:$ How do I give a reasonable prior for treatment effect variance? Or how do I estimate treatment effect variance beforehand?
$Q2:$ How do I give a reasonable prior for the treatment effect mean? It can be checked from the above code that the coefficient of the censor term represents the treatment effect. However, this estimated treatment effect is way off from the true treatment effect.
|
In a GLM, we assume that $\mathbb{E}[Y|X]=\mu(\beta^\top X)$ and $Y|X$ follows an exponential family distribution. I am going to assume that the probability of success in the Bernoulli distribution is modeled as $\beta^\top X$. However, the issue is that because $\beta^\top X$ can take values less than 0 or greater than 1, I will use the map $\mu(\beta^\top X)=0$ if $\beta^\top X<0$ and $\mu(\beta^\top X)=1$ if $\beta^\top X>1$. I am wondering whether, if I use this modeling, I still have a GLM. Is the link function $g=\infty$ for $\beta^\top X<0$, $g=\text{identity}$ for $0\le\beta^\top X\le 1$, and $g=1$ for $\beta^\top X>1$?
If I use this definition, can I estimate parameters through OLS accurately?
|
I have the following project in front of me:
Clients are going to get a bill informing them of a price increase (let's call them group A). However clients are getting the bill at different points of time.
I want to learn which individual characteristics, e.g. age, will make individuals quit.
Then I want to use these learnings to predict the behaviour of other clients (let's call them group B).
I have thought that I should create a panel for clients of group A.
Quitted_it = (age_i + other ind characteristics_i) * days since they got the bill_it
Then I can predict the probability of quitting for individuals of group B, given their characteristics and a number of days since they got the bill (e.g. 30).
Does it make sense? How could I compute it in R?
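To make it concrete, this is roughly what I am picturing in R (a rough sketch; panelA, panelB, and the column names are placeholders for my real data):
# Fit on group A's panel: probability of having quit as a function of
# individual characteristics interacted with days since receiving the bill
fit <- glm(quitted ~ (age + other_characteristic) * days_since_bill,
           family = binomial, data = panelA)
# Predict for group B at, say, 30 days after the bill
panelB$days_since_bill <- 30
panelB$p_quit <- predict(fit, newdata = panelB, type = "response")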
|
Really a pretty basic question about generative models, but I'm trying to map my (limited) understanding of NNs generally to what's going on when I invoke an OpenAI API:
When the OpenAI API docs state a limit to the prompt size for a model ("max tokens", which I assume is the same as what's also referred to in the documentation as "context length"), does that correspond to the size of the input layer of the actual network implementing the model?
Does each token generated by invocation of the API correspond to an execution of the underlying model, presumably truncating the front of the original supplied prompt if prompt plus output has exceeded max tokens?
|
I estimated a variable for three different species and have a posterior distribution of 4000 estimates for each species. Now I want to know if the distributions differ significantly between species. How do I do this?
I have displayed the mean and 95% credible interval for each of the three species and would like to indicate whether differences are significant.
|
Suppose we are using the matchit function from the MatchIt package in R, as in the following example given in the package,
m.out1 <- matchit(treat ~ age + educ + race + nodegree + married + re74 + re75, data = lalonde, distance = "glm", method = "nearest", replace=FALSE, caliper = NULL, ratio = 1)
Suppose now, you want to match only those treated patients that are within a specified caliper = 0.1 (in terms of propensity scores) of the control, and otherwise leave them unmatched, I would do,
m.out2 <- matchit(treat ~ age + educ + race + nodegree + married + re74 + re75, data = lalonde, distance = "glm", method = "nearest", replace = FALSE, caliper = 0.1, ratio = 1)
In both cases I get the same matching result in terms of sample size. I am confused by this, as even those outside the caliper are being matched. Note that this was working the way I understood it before, but it is giving different results now. Please let me know if there have been any updates that I am missing.
Thank you!
############################################################################
Adding to Noah's comment below,
m.out1 <- matchit(treat ~ age + educ + race + nodegree + married + re74 + re75, data = lalonde, distance = "glm", method = 'nearest', replace = FALSE, ratio = 1)
summary(m.out1)$nn
m.out2 <- matchit(treat ~ age + educ + race + nodegree + married + re74 + re75, data = lalonde, distance = "glm", method = "nearest", replace = FALSE, caliper = 0.1, ratio = 1)
summary(m.out2)$nn
|
My model estimates the percentage of living-in agricultural workers (that is, the ones provided with board & accommodation) in the labor force. The unit of observation is the % of those workers in a district:
Y = a0 + a1 * cattle + a2 * horses + a3 * popden + a4 * altitude + e
Cattle and horses (more specifically, their densities, i.e. the number divided by acreage) affect demand for labor. Population density is a labor supply factor, and altitude affects both labor supply (via population density) and demand (via animal density).
The results generally make sense, but I am concerned that I combine supply and demand factors in one equation. Is there a way to check if this concern is justified? If it is, what would be the best approach to resolving this issue?
|
I want to run an A/B test to examine the effect of a marketing campaign on revenue. I’m using a synthetic control setup, hence I have to compare the revenue generated by treated units to the counterfactual revenue given by the synthetic control.
One way of assessing if there is a non-random effect of the campaign is to compare the observed revenue under treatment to the distribution of all possible counterfactual revenues (as derived by my counterfactual model). If the observed treatment revenue was in the tail of such distribution (i.e., I get a low p-value), then I would conclude the campaign had an effect. This is the inferential approach adopted, for example, by Google’s library CausalImpact.
I wonder if this is enough, however. The observed revenue in the treated units could have been different from the value actually measured, due to random noise. The above approach does not take into account this uncertainty in observed treatment, but this uncertainty feels like something that needs to be accounted for - as we do, for example, in classic A/B test where we have both treated and control outcomes and we run a statistical test on the difference between them based on both their distribution.
So I wonder if I should somehow try to measure the uncertainty in my observed treatment revenue and compare its distribution to the counterfactual distribution (effectively running a test for the difference in means between treatment and synthetic control). Any suggestions on whether this is a better approach indeed?
|
I am modelling change in the mass of an animal over time with a GAM. These animals are caught at one of two ages (1 or 2, but 1's never become 2's in the subsequent year). I also have a large number of sites in my analysis.
The most simple model I am considering is:
mm <- gamm(MASS ~ s(YEAR.s, SITE, bs = "fs", by = AGE, k = 5), data = df)
Where YEAR.s is a continuous variable that has been centered and scaled, and SITE and AGE are factors.
(Partial) autocorrelation plots show a lot of autocorrelation in the time series.
acf(resid(mm$lme, type = "normalized"))
pacf(resid(mm$lme, type = "normalized"))
I know there are three different correlation structure classes (corAR1, corARMA, corCAR1) that I could add to my model, but I'm not certain which is the best one to use in this situation.
Also, I'm assuming that I would include both SITE and AGE as grouping factors, but I'm not sure if it will be an issue that not all AGEs are found at all SITEs (e.g., SITE A might have AGE 1 & 2, but SITE B only has AGE 1 and SITE C only has AGE 2). Between the two factors, making sure that I've included SITE is probably more important than AGE.
Finally, not all sites are monitored in each year of the timeseries (min. 20 years of data over a ~50 year timeseries).
|
The Forecasting Model
I have developed a proprietary forecasting model (call it AMA) that forecasts an expected range of price movement for a particular security. For example, it might give an expected price range movement of \$10 for a \$100 stock within the next hour (or any given time interval or period).
For an overview using the above example, we'll ignore the possibility of the stock's price falling and focus only on price rising to avoid over-complicating things.
In the model, a stock's current high price $H_{t_i}$ is forecasted to print a price high by adding the Differential High $DH_t$ to the previous period's closing price, $C_{(t-1)}$, giving us $E(H_{t})$.
Let
$t_i=$ $\mathrm{1\,hour\,period}$, divided into 10 sub intervals where $i=$ $\mathrm{1,2,3,...,10}$
$C_{t_i}$ = $\mathrm{Current\,price\,in\,period}$ $t$
$H_{t_i}$ = $\mathrm{Current\,high\,price\,in\,period}$ $t$
$DH_t$ = $\mathrm{Differential\,High}$
$C_{(t-1)}$ = $\mathrm{Previous\,closing\,price}$
$E(H_{t})$ = $C_{(t-1)}+DH_t$; $\mathrm{AMA\,/\,Expected\,high\,price\,of\,current\,period}$
Assume $C_{t_i}$= $H_t$ for this problem.
The Problem
However, I'm having trouble calculating the probability that the price reaches $100\%$ of the AMA before the time interval ends, and deciding whether to use Bayes' theorem or a Poisson model. I have a basic understanding of probability, so Poisson is a bit out of my domain.
Intuitively though, the probability of reaching $E(H_{t})$ should diminish/reduce over time the closer we get to an hour's time given that $H_t=c$, where c is a fixed constant price throughout the period (stagnant). Conversely, this would also imply that the probability of reaching $E(H_{t})$ increases when $H_t$ moves closer to the expected high price.
Test Solutions
The way I tried to solve the question is by constructing a counting table to compute the probability. However, I am not confident in my probability calculations given the time-decay nature of the problem (which sounds a lot like a Poisson process), and the calculations I did are prone to error, even though the thought process makes sense intuitively, as mentioned above.
A snippet of my calculations in the picture below is constructed behind the premise of basic probability where $$P(A)=\frac{number\,of\,favourable\,outcomes}{total\,number\,of\,outcomes}$$
$$A = \mathrm{Event\,of\,reaching\,100\%\,AMA\,occuring}$$
Test Solution #1: "Modified" Basic Probability
The confirmation bias is strong with this solution as it's not mathematically sound, i.e. I manipulated the formula to make it display an increasing probability of event $A$ occurring when $H_{t_i}$ increases towards $100\%$ AMA within a time interval, and a decreasing probability as $H_{t_i}$ remains the same throughout the whole period $t_{1,2,...,10}$. Regardless, I'm describing the formula anyway.
$P(A)=1-\frac{number\,of\,favourable\,outcomes}{total\,number\,of\,outcomes}$
Test Solution #2: Basic Probability
This one is mathematically sound and doesn't involve any manipulation to fit my presumptions; the formula is the basic probability calculation described earlier. However, intuitively this doesn't make any sense, as it assumes that the probability of event $A$ occurring is the same given AMA $=10\%, 20\%, \ldots, 100\%$ for $t_{1,2,3,\ldots,10}$.
Test Solution #3: Hybrid 1D Random Walk
Note the Random Walk model in this one is only possible with movements north or east outwards along y- and x-axis respectively due to the time decay nature of the problem. In this one, it assumes that since there are only 2 possible outcomes for each node, the probability is $50\%$ chance for the price to either reach a new AMA high along the y-axis or exhausting the time interval to move on to the next node along the x-axis where $H_{t_i}=c$.
Final Notes
Given all the Test Solutions I've presented, how do I determine the correct one to use (as a starting point) so as to develop a working probability model for this problem?
TLDR version: I need to find a viable solution for calculating the probability that the price reaches the AMA (Expected High) before the time interval ends. Three test solutions are given; which one (if any) is the correct one to build this probability model on?
|
I'm working on a machine learning classification project and I've run into some difficulties: all of my features are distributed like this:
I'm not sure what I should do. Should I use any scalers or other preprocessing methods? All features are counts of some events for a specific person. What problems can this distribution cause after fitting the models? Also, is there a name for this kind of distribution?
|
I have a set of $k$ random variables,
$y_i(x) = f_i(x) + \epsilon_i, y \in \mathbb{R}$
where,
$\epsilon_i \sim \mathcal{N}(0, \sigma_n^2)$ (a noise term)
$x \sim \mathcal{U}(-\infty,\infty; -\infty,\infty)$ (a 2D uniform variable)
$f_i(x) = \varphi(x,\mu_i,\Sigma_s),\ \mu_i \in \mathbb{R}^2,\ \Sigma_s = \begin{bmatrix}\sigma_s^2 & 0 \\ 0 & \sigma_s^2\end{bmatrix}$ (i.e., the 2D multivariate normal P.D.F.)
I am interested in the joint posterior probability, $P(x|y)$, and so I am attempting to use Bayes' theorem,
$P(x|y) = \frac{P(y|x) P(x)}{P(y)}$
Since $P(x)$ is uniform, it only has the effect of scaling the result. However, I am unsure of how to arrive at the other terms, or even if I can expect a closed-form solution. I can easily simulate these results for small k, but I would like an analytic solution when k is large.
One observation is that $P(y_i | f_i(x)) = P(\epsilon_i = y_i - f_i(x)) = \varphi(y_i, f_i(x), \sigma_n^2)$. Thus, given a realization of $f_i(x)$ I can estimate the probability of $y_i$. I also know that $P(x|f_i(x))\neq0$ defines a circle in $\mathbb{R}^2$ (or more generally an ellipsoid), so I was thinking that I need some generalization of the delta function. In this way I think I can go from $P(x) \rightarrow P(f_i(x)) \rightarrow P(y_i|f_i(x)) \rightarrow P(y_i|x)$. However, I am not sure how to arrive at $P(y|x)$ from $P(y_i|x)$, since the $y_i$ are not conditionally independent.
I am even less sure of how to approach $P(y)$. The range of $f_i(x)$ is strictly positive, but I don't know its probability distribution, much less $P(y_i)$ or the joint $P(y)$. However I imagine since I am working with the normal distribution that if there is a solution it has been studied before.
Is there a solution to this problem? Are there similar problems that do have solutions (e.g., adding a scale factor or other transformation to $f_i$, discretizing $x$)?
|
This is probably a very crude question and I've been thinking about it for a while. Is the odds ratio the same for generalized additive models (GAMs) as it is with generalized linear models (GLMs)? If no, then how can I interpret them in a logistic GAM? I found many links about odds ratios for GAMMs but not GAMs.
|
People sometimes include the interactions between their instruments and region/year or the interactions between different instruments in the first stage of a 2SLS regression. I wonder if the effect hierarchy principle applies to the instruments in 2SLS.
I'm asking this question because this paper selects instruments using LASSO without considering the effect hierarchy principle and ends up selecting a few interaction terms without the main effects. Does this make sense?
|
A colleague wants to analyze two outcomes (X1, X2). They believe the two outcomes measure the same construct or something similar. They decide to Z-score both outcomes in separate datasets and combine them (X3). They assume that because they are Z-scored they are now on the same scale, and because they are both measuring the same outcome, they can be combined.
A real-world example could be that a person has depressive symptoms measured by the Center for Epidemiological Studies Depression Scale and the Beck Depression Inventory, in two different datasets and on two different scales. So the "construct" is the same but the scales and measurements are different. They want to combine these, but the measures are on different scales. They decide to Z-score and concatenate. But this is inappropriate, no?
In other words, X1 exists only in dataframe 1 and X2 exists only in dataframe 2. The mean, principal component, or latent factor cannot be estimated, as the variables live in separate datasets. The person just Z-scores and concatenates the variables into X3 and assumes they are the same variable.
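To make the setup concrete, this is essentially the procedure in question (a sketch with simulated data; the data frames and column names are made up):
set.seed(42)
# Each instrument exists only in its own dataset
df1 <- data.frame(cesd = rnorm(200, mean = 16, sd = 10))  # e.g. CES-D in dataset 1
df2 <- data.frame(bdi  = rnorm(300, mean = 12, sd =  8))  # e.g. BDI in dataset 2
# Z-score within each dataset, then concatenate into "X3"
x3 <- c(scale(df1$cesd), scale(df2$bdi))
source_df <- rep(c("dataset1", "dataset2"), times = c(nrow(df1), nrow(df2)))
# By construction each dataset now contributes mean 0 and SD 1, so any true
# difference in the construct's level between the two samples is erased.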
I am aware that simply standardizing scores does not necessarily make variables interchangeable or equivalent. However, I am unclear if there is any simulation-based evidence or literature that can make this point clear. I want to make clear that two variables that you assume to measure the same construct on different metrics/scales cannot simply be combined because they have been Z-scored. What is the reason for this? What is a good way to communicate this point that X1 and X2 both being Z-scored does not mean they can be combined?
|
Suppose I have three variables (x, y, z) and I am calculating their regression coefficients.
How do I calculate the Z-score between two variables (Z-score_xy) if the Z-scores of those two variables with a third variable are known (Z-score_xz, Z-score_yz)?
If you could show the calculation with the betas, that would also be helpful.
Give reference please.
|
I am trying to develop a strategy to deal with outliers.
Below is the residual boxplot that I have generated for the test data, train data and both the test and train data.
My question is, what is the best way to deal with these outliers? Is it ok to just leave them if I have looked at them closely and there is no evidence to show that they are outliers due to a mistake? What is the best way to go about this sort of problem?
Thank you!
|
Question
A drug maker wants to design a study in which two medications are compared. The first group has seen an improvement in 40% of cases. Researchers want to see if the newer drug will improve this by 10%, so they design a study, assuming a power of 80% and a two-sided significance level of 5%. How many participants are required to detect a difference of 10% between the two groups?
My attempt and understanding
From the information gathered above there are two critical values, $Z_{1-\alpha/2} = 1.96$ and $Z_{\beta}=0.84$, given the significance level of $\alpha = 0.05$. From the proportions, $p_{0} = 0.4$, and with the proposed new drug the second group should have $p_{1} = 0.5$. The difference is $\theta = p_{1}-p_{0} = 0.1$. The hypotheses are
\begin{align}
H_{0}:\theta &= 0 \\
H_{1}: \theta &\neq 0
\end{align}
However, I am unsure what formula has been used to calculate this. I have run command in R which has given me $\approx 388$ per group.
> power.prop.test(p1 = 0.4, p2 = 0.5, alternative = "two.sided",
+ sig.level = 0.05,
+ power = 0.80)
Two-sample comparison of proportions power calculation
n = 387.3385
p1 = 0.4
p2 = 0.5
sig.level = 0.05
power = 0.8
alternative = two.sided
NOTE: n is number in *each* group
However, I am unsure what formula has been used to calculate the sample size. From my understanding this is a comparison of two binomial proportions. If I could get any assistance in understanding how this was calculated, it would be greatly appreciated!
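For reference, my best guess is the usual two-sample proportion formula with a pooled variance under $H_0$, where $\bar{p}=(p_0+p_1)/2$; plugging in $p_0=0.4$, $p_1=0.5$, $Z_{1-\alpha/2}=1.96$ and $Z_{\beta}\approx 0.84$ does give roughly $n \approx 387$ per group, but I would like confirmation that this is what is being used:
$$n = \frac{\left( Z_{1-\alpha/2}\sqrt{2\bar{p}(1-\bar{p})} + Z_{\beta}\sqrt{p_{0}(1-p_{0})+p_{1}(1-p_{1})} \right)^{2}}{(p_{1}-p_{0})^{2}}$$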
|
I am considering the following log log model :
$$
log(y_t) = log(\beta_0) + \sum_{i=1}^{K}\beta_ilog(x_{i,t}) + \sum_{j=K+1}^{L}\beta_jx_{j,t} + log(\epsilon_t)
$$
to explain the sales of a company given several factors. After estimating the model, I would like to "decompose" the dependent variable (the sales) by computing the contribution of each explanatory variable. Ideally, what I have in mind would be a percentage contribution, the sum of which would be 1, but I don't know how to go about it since I have some variables in log and others not; I don't even know if it's possible. So I would like to have your help on this please.
Thanks a lot!
|
I'm trying to understand the proofs on the The information bottleneck method paper by Tishby, Pereira and Bialek without luck. In particular, the second term in the functional derivative of the Mutual Information, i.e.,
$$
\frac{\partial}{\partial p(\tilde{x} | x)} I(X, \tilde{X}) = \frac{\partial}{\partial p(\tilde{x} | x)} \left[ \sum_{x, \tilde{x}} p(\tilde{x} | x) p(x) \big( \log p(\tilde{x} | x) - \log p(\tilde{x}) \big) \right].
$$
In the equations 10 and 25, when they take the functional derivative $\frac{\partial}{\partial p(\tilde{x} | x)} \sum_{x, \tilde{x}} p(\tilde{x} | x) p(x) \log p(\tilde{x})$, that is the mutual information $I(X, \tilde{X})$ functional derivative that corresponds to the denominator in the $\log$, they have a different way of finding the derivative that I cannot understand.
In (10),
$$\frac{\partial}{\partial p(\tilde{x} | x)} \sum_{x, \tilde{x}} p(\tilde{x} | x) p(x) \log p(\tilde{x}) = p(x) \left[ \log p(\tilde{x}) + \frac{1}{p(\tilde{x})} \sum_{x'} p(x') p(\tilde{x} | x') \right].$$
But shouldn't the second term $\sum_{x'} p(x') p(\tilde{x} | x')$ be the functional derivative of this w.r.t. $p(\tilde{x} | x)$ due to the chain rule? Additionally, I think it is missing the $p(\tilde{x} | x)$ from the first product rule. (See my attempt at this derivation below.) In (25), they do something similar (I guess), since the term accompanying $\log p(\tilde{x})$ is $1$.
My attempt is
$$\begin{align}
\frac{\partial}{\partial p(\tilde{x} | x)} \sum_{x, \tilde{x}} p(\tilde{x} | x) p(x) \log p(\tilde{x}) &= p(x) \left[ \log p(\tilde{x}) + \frac{p(\tilde{x} | x)}{p(\tilde{x})} \frac{\partial p(\tilde{x})}{\partial p(\tilde{x} | x)} \right] \\
&= p(x) \left[ \log p(\tilde{x}) + \frac{p(\tilde{x} | x)}{p(\tilde{x})} p(x) \right] \\
&= p(x) \left[ \log p(\tilde{x}) + p(x | \tilde{x}) \right]
\end{align}$$
So, I was wondering if somehow $p(x | \tilde{x})=1$ given the $x$ and $\tilde{x}$ after the functional derivative, due to $\tilde{x}$ being considered as the quantized version of $x$ (the codeword in the codebook), and if this is the case, why?
What am I doing wrong?
|
I have data with a large proportion of zeros that we are committed to analysing with ITS [to fit in with other parts of a project], and I have been advised that the best approach would be to convert to a binary outcome. I've done so and fit a binomial GLM to the data, but [with experience of more standard count/rate-based ITS analyses] I am not too sure whether the outcome should be reported as the coefficients of the model as normal. I understand that such a coefficient would be an odds ratio, but wondered whether the coefficients were also less relevant?
|
I have run a binomial generalised linear mixed model (GLMM) via the lme4 package in R. The optimal model of mine is from this syntax: fm1 <- glmer (answer~ (1|subj) + (1|item) + seeconversationmask, data=analysis1, family=binomial, control=glmerControl(optimizer="bobyqa",optCtrl=list(maxfun=2e5)))
I have three independent variables. They're all categorical. The first variable has two categories; the second one has two categories and the last one has three categories. The optimal model is the model with three-way interaction which means all variables are needed to explain the finding.
And this is the example of output I've got from emmeans package for pairwise comparison to see if there are significant differences in each contrast:
An example of my interpretations is 'The participants received a significantly lower score in the ao, clear, dm context than in the ao, con, dm context (b = -4.74, SE = 0.60, p < 0.01).'
However, after running the syntax for the odds ratio, it gives me this:
So we can see that the results from the odds ratio formula don't fit the results from the pairwise comparison. Hence, I cannot use the odds ratio from this formula, as it doesn't give me an odds ratio for each pair of contrasts. Do you have any other solution? Is there another way to get an odds ratio for each pair of contrasts for three independent variables? Sorry for posting this question again; the first time it wasn't answered.
|
Can a Mann-Whitney test be done on different sample sizes, one sample about 18 and the other about 32? The problem is looking at the difference in views of tourists and locals about the environmental quality of a location.
|
I have a general population of, let's say, 100,000 people, 10,000 of whom are on a register following a diagnosis of kidney disease (this involves looking at a blood result and coding a diagnosis). However, I hypothesize there may be more undiagnosed people in the 100,000 general population (blood results indicate disease but no diagnosis code). I wish to test whether, if there are undiagnosed people, their number is statistically significant. My null hypothesis is that the kidney disease register is robust and there is a statistically minimal number of undiagnosed people. I'm at a loss as to what statistical test to use. Any help would be much appreciated. Thanks
|
I am trying to identify a list of features that are significantly over represented in one population when compared to the other. Experiment is designed such that I have two groups of individuals A and B. Each group has 20 individuals. Over 2 million features have been analyzed in these two groups and for each feature its presence/absence has been counted. As a result I have a table that looks like this:
A | B
yes/no | yes/no
8/12 | 19/1
2/18 | 3/17
...
The problem is, if I apply Fisher's exact test to each row and correct the obtained p-values using Bonferroni correction, then because of the small within-row counts my p-values are not low enough for the Bonferroni correction to adequately address multiple testing. All corrected p-values after the lowest one quickly become 1. How do I properly correct for multiple testing given the above-described scenario?
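For reference, my current procedure is roughly the following (a sketch; the data frame counts and its column names are placeholders for my real table of per-feature yes/no counts):
# counts: one row per feature, columns a_yes, a_no, b_yes, b_no
p_raw <- apply(counts, 1, function(r) {
  fisher.test(matrix(r[c("a_yes", "a_no", "b_yes", "b_no")],
                     nrow = 2, byrow = TRUE))$p.value
})
p_bonf <- p.adjust(p_raw, method = "bonferroni")
sum(p_bonf < 0.05)   # with ~2 million tests almost nothing survives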
|
Is it safe to run a logistic regression with time as an independent variable? For example, I want to test whether a certain outcome (say, blue or red) changes over time. I know that, in general, one should be careful with time series because of autocorrelation.
EDIT: the dataset I have contains hundreds of observations per month, with a decrease in the outcome blue in favour of red towards the end of the time span. Note that the data might not be completely independent, as I have multiple observations for each month coming from the same individual, but the individuals are different for each month (this is not a consequence of poor experimental design; it is impossible to get an observation from the same individual in different months).
|
Why do some definitions of the Kullback-Leibler divergence include extra terms $-p_i + q_i$? For example, kl_div() (in the Python scipy.special module) defines the Kullback-Leibler divergence as
$$
\sum_i p_i \ln\frac{p_i}{q_i} - p_i + q_i.
$$
The documentation says:
The origin of this function is in convex programming; see [1] for details. This is why the function contains the extra terms over what might be expected from the Kullback-Leibler divergence.
I don't have the referenced book at hand. What is the justification or motivation for the additional $-p_i + q_i$ terms?
Anti-closing note: This is not a question about software, but about the concept behind it.
|
Assume we have $n$ non-iid standard normal random variables $X_i$. I'm interested in the distribution of $Z=\sum X_i^2$. It is clear to me, that the sum of independent $n$ squared standard normal variables will follow a chi-squared distribution with n degrees of freedom. However, as said above, we assume the $X_i$ to be pairwise correlated. To be more precise:
Assume we have $n$ non-iid standard normal random variables $X_i\ (i=1,\ldots,n)$ with mean vector $\vec{\mu}= \vec 0$ and covariance matrix $\Sigma$. Is it possible to calculate the distribution of $Z$ as a function of the $\sigma_{ij}$, or of the pairwise correlation coefficients $\rho_{ij}$?
|
I have an experiment where I take a set of samples, and test them. If I then take a second set of samples and repeat the experiment, how can I test if there is a significant difference between the sample sets?
More specifically, I take 25 rods of a material and subject them to a breakage test at five different strain rates, with five replicates per strain rate. I can plot breakage stress vs strain rate as a scatter plot and calculate the standard errors, etc.
Now if I take different 25 rods of a treated material and repeat the experiment under the same conditions and same strain rates I get a second scatter plot.
I know from other work that the breakage stress at a each strain rate is (almost!) normally distributed.
How can I tell if there is a difference due to the treatment?
I could take each strain rate separately and do a (paired?) t-test, giving 5 p-values, or just put all 25 results together for untreated and treated and then do a two-sample t-test, but either of these seems to ignore the structure of the data.
|
I have run into a problem interpreting z values in my GLMM and its post-hoc tests. Since what I am modelling is counts, I used the Poisson family in glmer, and I got a z value of 2.278. But after I put the model into a post-hoc analysis ("Dunn" adjustment) with emmeans, the z value became -3.350. Why would the z value change? I'm concerned about the reliability of the results. If the two are different, which value should I trust?
The code for the model runs like this: summary(m1 <- glmer(y ~ fixed1*fixed2+(1|random1)+(1|random2), family="poisson", data=data, control=glmerControl(optimizer = "bobyqa"))). Fixed1 represents fixed factor one, fixed2 fixed factor two, random1 random factor one, and random2 random factor two. For the post-hoc analysis, I used pairs(emmeans(m1, pairwise~fixed1), adjust="Dunn") in order to adjust the p values. But the z values from the two differ. So how should I interpret the results? Thanks!
|
I'm new to Bayesian statistics. I'm running a linear mixed effects model in R (using lmer), and I want to report Bayes Factors using these results, as demonstrated in Silver,Dienes & Wonnacott (2021). This paper uses a R script version of Dienes' Bayes Factor calculator found here: http://www.lifesci.sussex.ac.uk/home/Zoltan_Dienes/inference/Bayes.htm
I have a question about specifying the model of the alternative hypothesis. In this paper, the authors state that this can be defined with a normal distribution with mean (effect size) 0 and that this makes 'a more stringent test' as it is then harder to discriminate H1 and H0. However, surely an effect size of 0 is impossible in the model of the alternative hypothesis?
|
I have an AR(3)-GJR-GARCH(2,2,2) model. How can I test the presence of ‘leverage effects’ (i.e. asymmetric responses of the conditional variance to the positive and negative shocks) with 5% significance level?
Below is my code for the model:
import numpy as np
import yfinance as yf
from arch import arch_model

startdate = '2009-01-01'
enddate = '2021-12-31'
data = yf.download('GD', start=startdate, end=enddate)
data.rename(columns={"Adj Close": "price"}, inplace=True)
log_returns = np.log(data['price'] / data['price'].shift(1)) * 100  # log return in %
log_returns.dropna(inplace=True)

# In-sample window used for estimation
startdate = '2010-01-01'
enddate = '2018-12-31'
in_sample_return = log_returns.loc[startdate:enddate]

# AR(3) mean, GJR-GARCH(2,2,2) variance (o=2 gives the two asymmetry/leverage terms), Student-t errors
gjr_garch = arch_model(in_sample_return, mean='AR', lags=3, vol='GARCH', p=2, o=2, q=2, dist='t').fit(update_freq=5)
What do I do next to check whether 'leverage effects' are present at the 5% significance level?
As far as I know, the gamma parameters capture the leverage effect, and a non-zero gamma means the model has a leverage effect; the problem is that in this model I have two gamma parameters.
I thought checking the gamma coefficients would be enough, but since a 5% significance level is mentioned, I believe p-values need to be calculated, and I'm not sure how to do that.
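My current understanding of how the test would be set up (this is my own sketch of the standard Wald-type test, not anything specific to the arch package): for each leverage parameter individually, $z_j = \hat\gamma_j / \widehat{\operatorname{se}}(\hat\gamma_j)$, and $H_0\colon \gamma_j = 0$ is rejected at the 5% level if $|z_j| > 1.96$ (equivalently, if its reported p-value is below 0.05). For the joint hypothesis $H_0\colon \gamma_1 = \gamma_2 = 0$, the Wald statistic
$$W = \hat\gamma^\top \,\widehat{\operatorname{Var}}(\hat\gamma)^{-1}\, \hat\gamma$$
is asymptotically $\chi^2_2$ under $H_0$, so $H_0$ is rejected at the 5% level if $W > \chi^2_{2,\,0.95} \approx 5.99$. Is that the right approach here?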
|
I am trying to model the following and would be happy for input on whether it is correct.
I want to compute a model that checks for differences in Status between the data sets plotted on the graph:
The question is: are there significantly more poor / moderate / good statuses in one data origin, in comparison to the other origins?
my data format is the following:
> head(data$Nutritional.Status)
[1] Moderate Moderate Moderate Poor-very poor Moderate Moderate
Levels: Good Moderate Poor-very poor
> head(data$Nutritional.Status.olr)
[1] 2 2 2 3 2 2
Levels: 1 2 3
> head(data$Data.origin)
[1] IR.recent IR.recent IR.recent IR.recent IR.recent IR.recent
Levels: IR.historic IR.recent UK
I have tried to compute an ordinal logistic regression from here:
> data$Data.origin <- as.factor(data$Data.origin)
> m <- polr(data$Nutritional.Status.olr ~ Data.origin, data = data, Hess=TRUE)
> summary(m)
Call:
polr(formula = data$Nutritional.Status.olr ~ Data.origin, data = data,
Hess = TRUE)
Coefficients:
Value Std. Error t value
Data.originIR.recent 1.2417 0.4716 2.633
Data.originUK -0.8151 0.4362 -1.869
Intercepts:
Value Std. Error t value
1|2 -0.2478 0.4251 -0.5831
2|3 1.4957 0.4339 3.4475
Residual Deviance: 895.0317
AIC: 903.0317
(428 observations deleted due to missingness)
> ctable <- coef(summary(m))
> ctable
Value Std. Error t value
Data.originIR.recent 1.2417349 0.4716366 2.6328213
Data.originUK -0.8150987 0.4361647 -1.8687866
1|2 -0.2478473 0.4250569 -0.5830922
2|3 1.4957495 0.4338635 3.4475116
> p <- pnorm(abs(ctable[, "t value"]), lower.tail = FALSE) * 2
> ctable <- cbind(ctable, "p value" = p)
> ctable
Value Std. Error t value p value
Data.originIR.recent 1.2417349 0.4716366 2.6328213 0.008467888
Data.originUK -0.8150987 0.4361647 -1.8687866 0.061652505
1|2 -0.2478473 0.4250569 -0.5830922 0.559831226
2|3 1.4957495 0.4338635 3.4475116 0.000565776
> ci <- confint(m)
Waiting for profiling to be done...
> ci
2.5 % 97.5 %
Data.originIR.recent 0.3241307 2.18429517
Data.originUK -1.6670721 0.05759485
> exp(coef(m))
Data.originIR.recent Data.originUK
3.4616139 0.4425956
> exp(cbind(OR = coef(m), ci))
OR 2.5 % 97.5 %
Data.originIR.recent 3.4616139 1.382828 8.884384
Data.originUK 0.4425956 0.188799 1.059286
I am not sure why IR.historic is not mentioned anywhere in the outputs. The IR.historic data has far fewer points overall due to the nature of the sample collection; maybe it was excluded because of that?
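In case the reference level is the issue, here is a minimal sketch of how I think I could refit with UK as the explicit baseline (my own attempt, not something I have validated):
library(MASS)  # for polr
# Make UK the reference level, so the coefficients compare IR.historic and
# IR.recent against UK rather than against the alphabetically first level.
data$Data.origin <- relevel(data$Data.origin, ref = "UK")
m2 <- polr(Nutritional.Status.olr ~ Data.origin, data = data, Hess = TRUE)
summary(m2)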
see here my data summarized:
> counts2 <- table(data$Nutritional.Status, data$Data.origin)
> counts2
IR.historic IR.recent UK
Good 12 12 251
Moderate 4 37 106
Poor-very poor 7 34 36
Is my final output (ctable) telling me that the IR.recent origin has significantly more 3s than 1s & 2s in comparison to UK? That is the comparison I'm after.
|
Despite my searching, I cannot find a clear answer to the question:
Do dynamic logistic regression models require data points to be temporally independent?
An answer, a hint and/or a reference would be appreciated, if anyone knows. Thank you.
|
Research Problem
We are trying to help universities recruit as many students as possible for different projects for a good cause. To be eligible for any project offered by their university, students can self-register online via a brief form that asks for contact information and some other data. Some questions are mandatory, others aren't. We see that a lot of students drop out while filling in the form. (BTW this is not my real research problem, but it will serve as a good analogy.)
So our hypothesis is:
The number (and type) of questions influences the drop-out rate above and beyond other predictors. The more (mandatory) fields, the lower the probability of a student submitting the form.
Data Structure
Our research question seems quite simple, but the structure of our data has some pitfalls:
Each row is a student who opened an application form, together with their university-level variables (e.g. city), project-level variables (e.g. project category) and individual-level variables (e.g. sex, the device they opened the form with)
The two predictor variables of interest are on yet another level, the application-form level (see details below). These are the only numeric predictors; all confounding variables are categorical.
The outcome is whether the student submitted the application form or not (Boolean)
The data is hierarchical:
We have a total of ~80 universities with a total of ~4,000 different projects and ~30,000 rows (students).
All distributions are skewed: Some universities have hundreds of projects, some have only two. Some projects have hundreds of student applicants, some have less than 10.
We know from exploratory data analysis that submission rates within each university and project share a high proportion of variance (i.e. they cluster strongly), so we want to account for that.
The application forms do not vary a lot between universities / projects and if they do, they are mostly the same within any university:
Each university can have their own application form, but many use the same default form that contains only 4 mandatory fields
Most universities do use one form for all student recruitment, but some use different form types for different project categories
This means that also in the predictor variables, we tend to have unbalanced classes / little variance in the no. of fields
So overall, we have a lot of multicollinearity in the data if we use it row-wise.
Question / Current Approach
Our question is IF AND HOW we can robustly measure the effect of the number of fields on the outcome (student submits the form or not) in such a way that it does not only reflect the differences between the universities (or projects).
We were thinking about either including university and project IDs as confounding variables into our logistic regression model, or hierarchical modelling. But is this even possible with so many different universities and projects?
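To make this concrete, here is a minimal sketch of the mixed-effects option we had in mind (the data frame and all column names are hypothetical):
library(lme4)
# Hypothetical data frame `applications`, one row per student: submitted (0/1),
# n_mandatory_fields and n_fields (form-level predictors), sex and device
# (individual-level), and university_id / project_id for the hierarchy.
m <- glmer(
  submitted ~ n_mandatory_fields + n_fields + sex + device +
    (1 | university_id / project_id),            # projects nested in universities
  data   = applications,
  family = binomial
)
summary(m)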
Alternatively, we could aggregate the data at the project level and compute aggregated versions of the individual-level variables, e.g. "ratio_male" = 0.27. We could then use the submission rate per project as the outcome variable for a (weighted) linear regression. But wouldn't that make us lose a lot of information? And in any case, the data would still be hierarchical (projects nested in universities)…
Thanks so much in advance for any input!
|
The agent has two actions, a0 and a1, whose effects in each state σ0; . . . ; σ3 are described in Figure 1. The edges from actions are labeled with the probability that this transition occurs. For example, Pr[st+1 = σ2 | st = σ0; at = a1] = 1; similarly, Pr[st+1 = σ0 | st = σ1, at = a0] = 1-p. If there is no edge from a state to
an action, that action is not allowed in that state. Thus, choosing either a0 or a1 in σ3 is not allowed, and σ3 is a sink state; similarly, action a1 cannot be taken in state σ1. The rewards in each state are action-independent, and are r(σ0) = r(σ2) = 0, r(σ1) = 1, r(σ3) = 10.
Q1) How many possible (deterministic) policies are there for this MDP? When counting, ignore
"degenerate" actions, i.e. ones that are not allowed in a given state.
My doubt: is the policy that chooses a0 at σ0 and a1 at σ2 a deterministic policy? If yes, how do I write it mathematically, given that the transitions from these states are probabilistic? Also, how do I write the value function for this policy?
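My current attempt at the notation (this is my own sketch, so I may be misusing symbols): a deterministic policy is just a map $\pi$ from states to allowed actions, e.g. $\pi(\sigma_0) = a_0$, $\pi(\sigma_2) = a_1$, and the transition probabilities only enter through the value function,
$$V^\pi(s) \;=\; r(s) + \gamma \sum_{s'} \Pr[s_{t+1} = s' \mid s_t = s,\ a_t = \pi(s)]\, V^\pi(s'),$$
where $\gamma$ is the discount factor. Is that the right way to write it?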
|
$x_t = (1+\theta L)\,\varepsilon_t, \quad \varepsilon_t \sim \text{iid } N(0, \sigma^2)$
What is the long-run variance of x_t? Does it depend on θ? If so, how?
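For reference, the definition of the long-run variance I am working with (please correct me if a different convention is intended):
$$\operatorname{LRV}(x_t) \;=\; \sum_{j=-\infty}^{\infty} \gamma_x(j), \qquad \gamma_x(j) = \operatorname{Cov}(x_t,\, x_{t-j}).$$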
|
I have been studying linear models recently and I'm confused about why $cov(Y) = cov(\epsilon)$ holds for $Y = X\beta + \epsilon$. This was just assumed in my course notes without justification.
I was looking at this section specifically
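My current guess at the missing step, assuming $X$ and $\beta$ are treated as fixed (non-random):
$$cov(Y) = cov(X\beta + \epsilon) = cov(\epsilon),$$
since adding the constant vector $X\beta$ shifts the mean of $Y$ but does not change its covariance. Is that the intended justification?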
|
I'm learning about recommendation systems and I'm trying to build one using the item-based collaborative filtering approach. I have a dataset in which the rows correspond to the items and the columns to the users. The matrix is filled with item ratings on a [1, 5] scale. I chose the sklearn.neighbors.NearestNeighbors class to find the cosine similarities between the items.
If I'm right, this dataset doesn't have labels; in other words, it is an unsupervised task (clustering). I would like to know whether I have to do a train/test split in this case. I'm asking because in all the code videos I've watched on YouTube nobody does this, but in some theory videos they say this step of the pipeline should be done.
Thanks!
|
I am thinking about a problem in which features come in batches, at different moments in time and from different environments.
My current problem setting is the following:
Let's say you have n datasets, each with an arbitrary number of features. Is it possible to set up the boosting optimization so that it uses only the features from the first dataset in the first k estimators, and then tell it to use all the features from datasets 1 and 2 for estimators k to k+p?
|