The 'fundamental' idea of statistics for estimating parameters is maximum likelihood. I am wondering what is the corresponding idea in machine learning.
Qn 1. Would it be fair to say that the 'fundamental' idea in machine learning for estimating parameters is: 'Loss Functions'?
[Note: It is my impression that machine learning algorithms often optimize a loss function and hence the above question.]
Qn 2: Is there any literature that attempts to bridge the gap between statistics and machine learning?
[Note: Perhaps, by way of relating loss functions to maximum likelihood. (e.g., OLS is equivalent to maximum likelihood for normally distributed errors etc)]
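To make the note concrete, the standard one-line derivation for the OLS example (assuming i.i.d. normal errors with fixed variance) is:
$$\hat\beta_{ML} = \arg\max_\beta \prod_{i=1}^n \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(y_i - x_i^\top\beta)^2}{2\sigma^2}\right) = \arg\min_\beta \sum_{i=1}^n (y_i - x_i^\top\beta)^2 = \hat\beta_{OLS},$$
so in this case the loss-function view and the maximum-likelihood view coincide (the negative log-likelihood is the loss, up to constants).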
|
Suppose there is a very big (infinite?) population of normally distributed values with unknown mean and variance.
Suppose also that we have a sample, S, of n values from the entire population. We can calculate mean and standard deviation for this sample (we use n-1 for stdev calculation).
The first and most important question is how is stdev(S) related to the standard deviation of the entire population?
An illustration for this issue is the second question:
Suppose we have an additional number, x, and we would like to test whether it is an outlier vis-à-vis the general population. My intuitive approach is to calculate Z as follows:
$Z = \frac{x - mean(S)}{stdev(S)}$
and then test it against the standard normal distribution if n>30, or against the t-distribution if n<30.
However, this approach doesn't account for n, the size of the sample. What is the right way to solve this question provided there is only single sample S?
|
I am struggling a little bit at the moment with a question related to logistic regression. I have a model that predicts the occurrence of animal based on land cover with reference to forest. I am not grasping the concept of a reference class and struggle to extrapolate the model onto a new area. Any explanations or guidance towards papers, lecture notes etc would be highly appreciated.
|
What is the difference between offline and online learning? Is it just a matter of learning over the entire dataset (offline) vs. learning incrementally (one instance at a time)? What are examples of algorithms used in both?
|
I am interested in tools/techniques that can be used for analysis of streaming data in "real-time"*, where latency is an issue. The most common example of this is probably price data from a financial market, although it also occurs in other fields (e.g. finding trends on Twitter or in Google searches).
In my experience, the most common software category for this is "complex event processing". This includes commercial software such as Streambase and Aleri or open-source ones such as Esper or Telegraph (which was the basis for Truviso).
Many existing models are not suited to this kind of analysis because they're too computationally expensive. Are any models** specifically designed to deal with real-time data? What tools can be used for this?
* By "real-time", I mean "analysis on data as it is created". So I do not mean "data that has a time-based relevance" (as in this talk by Hilary Mason).
** By "model", I mean a mathematical abstraction that describe the behavior of an object of study (e.g. in terms of random variables and their associated probability distributions), either for description or forecasting. This could be a machine learning or statistical model.
|
I'm trying to separate two groups of values from a single data set. I can assume that one of the populations is normally distributed and is at least half the size of the sample. The values of the second one are both lower or higher than the values from the first one (distribution is unknown). What I'm trying to do is to find the upper and lower limits that would enclose the normally-distributed population from the other.
My assumption provides me with a starting point:
all points within the interquartile range of the sample are from the normally-distributed population.
I'm testing points from the rest of the sample, adding them until they no longer fit within 3 standard deviations of the normally-distributed population. This is not ideal, but it seems to produce a reasonable enough result.
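For concreteness, a minimal R sketch of the procedure I describe above (the data here are simulated placeholders, not my real sample):
set.seed(42)
x <- c(rnorm(200), runif(60, 5, 9))   # illustrative mixture: normal core plus other values
core <- x[x >= quantile(x, 0.25) & x <= quantile(x, 0.75)]   # start from the interquartile range
for (iter in 1:100) {
  lims <- mean(core) + c(-3, 3) * sd(core)      # 3-standard-deviation limits of the current core
  new_core <- x[x >= lims[1] & x <= lims[2]]    # points of the sample that fit within the limits
  if (setequal(new_core, core)) break           # stop when the core no longer changes
  core <- new_core
}
lims   # the lower and upper limits enclosing the "normal" population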
Is my assumption statistically sound? What would be a better way to go about this?
p.s. please fix the tags someone.
|
Comparing two variables, I came up with the following chart. The x, y pairs represent independent observations of data collected in the field. I've run a Pearson correlation on them and found a coefficient of 0.6.
My end goal is to establish a relationship between y and x such that y = f(x).
What analysis would you recommend to obtain some form of a relationship between the two variables?
|
I completed a Monte Carlo simulation that consisted of one million ($10^6$) individual simulations. The simulation returns a variable, $p$, that can be either 1 or 0. I then weight the simulations based on predefined criteria and calculate the probability of $p$. I also calculate a risk ratio using $p$:
$$\text{Risk ratio} = P(p|\text{test case}) / P(p|\text{control case})$$
I had eight Monte Carlo runs, which consist of one control case and seven test cases.
I need to know if the probabilities of $p$ are statistically different compared to the other cases. I know I can use a multiple comparison test or nonparametric ANOVA to test individual variables, but how do I do this for probabilities?
For example are these two probabilities statistically different?:
Probabilities:
$P(p|\text{test #3}) = 4.08 \times 10^{-5}$
$P(p|\text{test #4}) = 6.10 \times 10^{-5}$
Risk Ratios:
$\text{Risk Ratio}(\text{test #3}) = 0.089$
$\text{Risk Ratio}(\text{test #4}) = 0.119$
|
For 1,000,000 observations, I observed a discrete event, X, 3 times for the control group and 10 times for the test group. How do I determine for a large number of observations (1,000,000), if three is statistically different than ten?
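For illustration, a hedged sketch of one way such rare-event counts are sometimes compared in R, treating each count as Poisson over the same exposure of 1,000,000 observations (this framing is my assumption, not a given):
# 3 events in the control group vs 10 in the test group, equal exposures
poisson.test(c(3, 10), T = c(1e6, 1e6))   # exact test for the ratio of the two rates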
|
What are some podcasts related to statistical analysis? I've found some audio recordings of college lectures on iTunes U, but I'm not aware of any statistics podcasts. The closest thing I'm aware of is an operations research podcast, The Science of Better. It touches on statistical issues, but it's not specifically a statistics show.
|
This one is bothering me for a while, and a great dispute was held around it. In psychology (as well as in other social sciences), we deal with different ways of dealing with numbers :-) i.e. the levels of measurement. It's also common practice in psychology to standardize some questionnaire, hence transform the data into percentile scores (in order to assess a respondent's position within the representative sample).
Long story short, if you have a variable that holds the data expressed in percentile scores, how should you treat it? As an ordinal, interval, or even ratio variable?!
It's not ratio, because there is no real 0 (the 0th percentile doesn't imply absence of the measured property, merely the variable's smallest value). I advocate the view that percentile scores are ordinal, since P70 - P50 is not equal to P50 - P30, while the other side says it's interval.
Please gentlemen, cut the cord. Ordinal or interval?
|
How comprehensive is the following book? What interpretations are missing?
Interpretations of Probability, Andrei Khrennikov, 2009, de Gruyter, ISBN 978-3-11-020748-4
http://www.degruyter.com/cont/fb/ma/detailEn.cfm?isbn=9783110207484&sel=pi
Contents: http://www.degruyter.com/files/pdf/9783110207484Contents.pdf
|
Is the Yates' correction for continuity used only for 2X2 matrices?
|
I've been beginning to work my way through Statistical Data Mining Tutorials by Andrew Moore (highly recommended for anyone else first venturing into this field). I started by reading this extremely interesting PDF entitled "Introductory overview of time-series-based anomaly detection algorithms", in which Moore traces through many of the techniques used in the creation of an algorithm to detect disease outbreaks. Halfway through the slides, on page 27, he lists a number of other "state of the art methods" used to detect outbreaks. The first one listed is wavelets. Wikipedia describes a wavelet as "a wave-like oscillation with an amplitude that starts out at zero, increases, and then decreases back to zero. It can typically be visualized as a 'brief oscillation'", but does not describe their application to statistics, and my Google searches yield highly academic papers that assume a knowledge of how wavelets relate to statistics, or full books on the subject.
I would like a basic understanding of how wavelets are applied to time-series anomaly detection, much in the way Moore illustrates the other techniques in his tutorial. Can someone provide an explanation of how detection methods using wavelets work or a link to an understandable article on the matter?
|
When I type a left paren or any quote in the R console, it automatically creates a matching one to the right of my cursor. I guess the idea is that I can just type the expression I want inside without having to worry about matching, but I find it annoying, and would rather just type it myself. How can I disable this feature?
I am using R 2.8.0 on OSX 10.5.8.
|
New to the site. I am just getting started with R, and want to replicate a feature that is available in SPSS.
Simply, I build a "Custom Table" in SPSS with a single categorical variable in the column and many continuous/scale variables in the rows (no interactions, just stacked on top of each other).
The table reports the means and valid Ns for each column (the summary statistics are in the rows), and I select the option to generate significance tests for column means (each column against the others) using alpha = .05 and adjusting for unequal variances.
Here is my question.
How can I replicate this in R? What is my best option to build this table and what tests are available that will get me to the same spot? Since I am getting used to R, I am still trying to navigate around what is available.
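To make the target concrete, here is a hedged base-R sketch of the two pieces of that SPSS table (group means plus pairwise tests of column means without assuming equal variances); dat, group, and y1/y2/y3 are placeholder names, and I don't claim this matches SPSS's adjustment exactly:
# means of each continuous variable by level of the categorical column variable
aggregate(cbind(y1, y2, y3) ~ group, data = dat, FUN = mean)

# pairwise comparisons of column means for one row variable,
# with no pooled SD (i.e. not assuming equal variances)
pairwise.t.test(dat$y1, dat$group, pool.sd = FALSE)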
|
Take $x \in \{0,1\}^d$ and $y \in \{0,1\}$ and suppose we model the task of predicting y given x using logistic regression. When can logistic regression coefficients be written in closed form?
One example is when we use a saturated model.
That is, define $P(y|x) \propto \exp(\sum_i w_i f_i(x_i))$, where $i$ indexes sets in the power-set of $\{x_1,\ldots,x_d\}$, and $f_i$ returns 1 if all variables in the $i$'th set are 1, and 0 otherwise. Then you can express each $w_i$ in this logistic regression model as a logarithm of a rational function of statistics of the data.
Are there other interesting examples when closed form exists?
|
What is the relationship between a nonhomogeneous Poisson process and a process that has a heavy-tailed distribution for its inter-arrival times?
Any pointer to a resource that can shed some light on this question would be hugely appreciated.
|
Context
I have a survey of 16 questions, each with four possible responses. The purpose of the survey is to measure the respondent's propensity towards four categories (which we will denote A, B, C, D). Each of the four responses per question are representative of an aspect of the four categories, A, B, C, D.
The respondent rank orders each of the four responses (we will denote the first response by "4", the second by "3", etc).
To score the categories, we add the responses up based on the coding above. There are 16 x (4 + 3 + 2 + 1) = 160 total points. The sums for each category are computed, and the maximum score is deemed the respondent's dominant category.
Therefore each survey looks like the following (in CSV format)
question_num, A, B, C, D
1, 4, 3, 1, 2
2, 3, 4, 1, 2
3, 3, 4, 2, 1
4, 4, 3, 1, 2
5, 4, 3, 1, 2
6, 4, 3, 2, 1
7, 4, 3, 1, 2
.
.
.
16, 3, 4, 1, 2
sums, 64, 48, 24, 24
I have about 325 surveys completed.
Aim
I want to remove possible redundant items in the survey so I can reduce the burden on future respondents.
Questions
My first strategy was to do a multi-logistic regression with the response as the dominant category (described above). Is this a good idea?
Would PCA be helpful?
Are there any other strategies for identifying redundant items?
|
Just wondering: is there any data analysis / statistics / data mining work available on a freelance basis?
This could be subjective and argumentative, which is why I put it as CW.
|
I have a dataset made up of elements from three groups, let's call them G1, G2, and G3.
I analysed certain characteristics of these elements and divided them into 3 types of "behaviour" T1, T2, and T3 (I used cluster analysis to do that).
So, now I have a 3 x 3 contingency table like this with the counts of elements in the three groups divided by type:
| T1 | T2 | T3 |
------+---------+---------+---------+---
G1 | 18 | 15 | 65 |
------+---------+---------+---------+---
G2 | 20 | 10 | 70 |
------+---------+---------+---------+---
G3 | 15 | 55 | 30 |
Now, I can run a Fisher test on these data in R
data <- matrix(c(18, 20, 15, 15, 10, 55, 65, 70, 30), nrow=3)
fisher.test(data)
and I get
Fisher's Exact Test for Count Data
data: data
p-value = 9.028e-13
alternative hypothesis: two.sided
So my questions are:
is it correct to use Fisher test this way?
how do I know who is different from whom? Is there a post-hoc test I can use? Looking at the data, I would say the 3rd group has a different behaviour from the first two; how do I show that statistically?
someone pointed me to logit models: are they a viable option for this type of analysis?
any other option to analyse this type of data?
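Regarding the second question, here is a hedged sketch of one possible follow-up in R (pairwise Fisher tests on the three 2 x 3 sub-tables, with a Holm adjustment); I am not claiming this is the canonical post-hoc procedure:
pairs <- combn(3, 2)                  # the three pairs of groups (one pair per column)
pvals <- apply(pairs, 2, function(ix) fisher.test(data[ix, ])$p.value)
names(pvals) <- apply(pairs, 2, paste, collapse = " vs ")
p.adjust(pvals, method = "holm")      # adjusted p-values for the pairwise comparisons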
|
So in R, for instance, this would be:
my_ts_logged_diffed = diff(log(some_ts_object))
plot(my_ts_logged_diffed)
This seems to be part of every experienced analyst's/forecaster's analytical workflow--in particular, a visual examination of the plotted data. What are they looking for--i.e., what useful information does this transformation help reveal?
Similarly, I have a pretty good selection of time series textbooks, tutorials, and the like; nearly all of them mention this analytical step, but none of them say why it's done (I am sure there's a good reason, and one that's apparently too obvious to even mention).
(I do indeed routinely rely on this transformation, but only for the limited purpose of testing for a normal distribution (I think the test is called Shapiro-Wilk). The application of the test just involves (assuming I am applying it correctly) comparing a couple of parameters (a 'W' parameter and the p-value) against a baseline--the test doesn't appear to require plotting the data.)
|
What freely available data sets are there for classification with more than 1000 features (or sample points, if the data consist of curves)?
There is already a community wiki about free data sets:
Locating freely available data samples
But here, it would be nice to have a more focused list that can be used more conveniently, also I propose the following rules:
One post per dataset
No links to collections of datasets
each data set must be associated with
a name (to figure out what it is about) and a link to the dataset (R datasets can be named with the package name)
the number of features (let's say it is p), the size of the dataset (let's say it is n), and the number of labels/classes (let's say it is k)
a typical error rate, either from your experience (state the algorithm used in two words) or from the literature (in that case, link the paper)
|
When we are monitoring movements of structures we normally install monitoring points onto the structure before we do any work which might cause movement. This gives us chance to take a few readings before we start doing the work to 'baseline' the readings.
Quite often the data are quite variable (the variations in the readings can easily be between 10 and 20% of the final movement). The measurements are also often affected by the environment in which they are taken, so one set of measurements taken on one project may not have the same accuracy as measurements on another project.
Is there any statistical method, or rule of thumb, that can be applied to say how many baseline readings need to be taken to give a certain accuracy before the first reading is taken? Are there any rules of thumb that can be applied to this situation?
|
I haven't studied statistics for over 10 years (and then just a basic course), so maybe my question is a bit hard to understand.
Anyway, what I want to do is reduce the number of data points in a series. The x-axis is number of milliseconds since start of measurement and the y-axis is the reading for that point.
Often there are thousands of data points, but I might only need a few hundred. So my question is: how do I accurately reduce the number of data points?
What is the process called? (So I can google it)
Are there any preferred algorithms? (I will implement it in C#.)
Hope you got some clues. Sorry for my lack of proper terminology.
Edit: More details comes here:
The raw data I got is heart rate data, and in the form of number of milliseconds since last beat. Before plotting the data I calculate number of milliseconds from first sample, and the bpm (beats per minute) at each data point (60000/timesincelastbeat).
I want to visualize the data, i.e. plot it in a line graph. I want to reduce the number of points in the graph from thousands to some hundreds.
One option would be to calculate the average bpm for every second in the series, or maybe every 5 seconds or so. That would have been quite easy if I knew I would have at least one sample for each of those periods (seconds or 5-second intervals).
|
I have distributions from two different data sets and I would like to measure how similar their distributions (in terms of their bin frequencies) are. In other words, I am not interested in the correlation of the data point sequences but rather in their distributional properties with respect to similarity. Currently I can only observe a similarity by eye-balling, which is not enough. I don't want to assume causality and I don't want to predict at this point. So, I assume that correlation is the way to go.
Spearman's correlation coefficient is used to compare non-normal data, and since I don't know anything about the real underlying distribution of my data, I think it would be a safe bet. I wonder if this measure can also be used to compare distributional data rather than the data points that are summarized in a distribution. Here is the example code in R that exemplifies what I would like to check:
aNorm <- rnorm(1000000)
bNorm <- rnorm(1000000)
cUni <- runif(1000000)
ha <- hist(aNorm)
hb <- hist(bNorm)
hc <- hist(cUni)
print(ha$counts)
print(hb$counts)
print(hc$counts)
# relatively similar
n <- min(c(NROW(ha$counts),NROW(hb$counts)))
cor.test(ha$counts[1:n], hb$counts[1:n], method="spearman")
# quite different
n <- min(c(NROW(ha$counts),NROW(hc$counts)))
cor.test(ha$counts[1:n], hc$counts[1:n], method="spearman")
Does this make sense or am I violating some assumptions of the coefficient?
Thanks,
R.
|
A colleague wants to compare models that use either a Gaussian distribution or a uniform distribution and, for other reasons, needs the standard deviation of these two distributions to be equal. In R I can do a simulation...
sd(runif(100000000))
sd(runif(100000000,min=0,max=2))
and see that the calculated standard deviation is likely to be ~.2887 * the range of the uniform distribution. However, I was wondering if there was an equation that could yield the exact value, and if so, what that formula was.
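For reference, the exact value follows from a standard fact: a uniform distribution on $[a, b]$ has variance $(b-a)^2/12$, so its standard deviation is $(b-a)/\sqrt{12} \approx 0.2887\,(b-a)$, which matches the simulated factor above. A quick check in R:
(2 - 0) / sqrt(12)                # exact sd of a uniform on [0, 2]: about 0.5774
sd(runif(1e6, min = 0, max = 2))  # simulated value, close to the exact one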
|
I am trying to calculate the reliability in an elicitation exercise by analysing some test-retest questions given to the experts. The experts elicited a series of probability distributions which were then compared with the true value (found at a later date) by computing the standardized quadratic scores. These scores are the values that I am using to calculate the reliability between the test-retest results.
Which reliability method would be appropriate here? I was looking mostly at Pearson's correlation and Cronbach's alpha (and got some negative values using both methods), but I am not sure this is the right approach.
UPDATE:
Background information
The data were collected from a number of students who were asked to predict their own actual exam mark in four chosen modules by giving a probability distribution of the marks. One module was then repeated at a later date (hence the test-retest exercise).
Once the exam was taken, and the real results were available, the standardized quadratic scores were computed. These scores are proper scoring rules used to compare assessed probability distributions with the observed data which might be known at a later stage.
The probability score Q is defined as:
[Image: quadratic score formula] http://img717.imageshack.us/img717/9424/chart2j.png
where k is the total number of elicited probabilities and j is the true outcome.
My question is: which reliability method would be more appropriate when it comes to assessing the reliability between the scores of the repeated modules? I calculated Pearson's correlation and Cronbach's alpha (and got some negative values using both methods), but there might be a better approach.
|
I've got a linear regression model with the sample and variable observations and I want to know:
Whether a specific variable is significant enough to remain included in the model.
Whether another variable (with observations) ought to be included in the model.
Which statistics can help me out? How can I get them most efficiently?
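To anchor the question, a hedged base-R sketch of the usual starting points for both parts, assuming a fitted model fit <- lm(y ~ x1 + x2, data = dat) and a candidate variable x3 in the same data frame (all names here are placeholders of mine):
summary(fit)                              # t statistics and p-values for each included variable
drop1(fit, test = "F")                    # partial F test for dropping each current variable
add1(fit, scope = ~ . + x3, test = "F")   # F test for adding the candidate variable
anova(fit, update(fit, . ~ . + x3))       # equivalent nested-model comparison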
|
Introductory, advanced, and even obscure, please.
Mostly to test myself. I like to make sure I know what the heck I'm talking about :)
Thanks
|
I am comparing two distributions with KL divergence which returns me a non-standardized number that, according to what I read about this measure, is the amount of information that is required to transform one hypothesis into the other. I have two questions:
a) Is there a way to quantify a KL divergence so that it has a more meaningful interpretation, e.g. like an effect size or a R^2? Any form of standardization?
b) In R, when using KLdiv (flexmix package) one can set the 'eps' value (default eps=1e-4), which sets all points smaller than eps to some standard value in order to provide numerical stability. I have been playing with different eps values and, for my data set, I am getting an increasingly larger KL divergence the smaller the number I pick. What is going on? I would expect that the smaller the eps, the more reliable the results should be, since more 'real values' become part of the statistic. No? I have to change the eps since otherwise the statistic is not calculated and simply shows up as NA in the result table...
|
Consider the following model
$Y_i = f(X_i) + e_i$
from which we observe $n$ iid data points $\left( X_i, Y_i \right)_{i=1}^n$. Suppose that $X_i \in \mathbb{R}^d$ is a $d$-dimensional feature vector, and suppose that an ordinary least squares estimate is fit to the data, that is,
$\hat \beta = {\rm arg} \min_{\beta \in \mathbb{R}^d} \sum_i (Y_i - \sum_j X_{ij} \beta_j)^2$
Since a wrong model is estimated, what is the interpretation for the confidence interval around estimated coefficients?
More generally, does it make sense to estimate confidence intervals around parameters in a misspecified model? And what does the confidence interval tell us in such a case?
|
The wiki article on credible intervals has the following statement:
credible intervals and confidence intervals treat nuisance parameters in radically different ways.
What is the radical difference that the wiki talks about?
Credible intervals are based on the posterior distribution of the parameter and confidence interval is based on the maximum likelihood associated with the data generating process. It seems to me that how credible and confidence intervals are computed is not dependent on whether the parameters are nuisance or not. So, I am a bit puzzled by this statement.
PS: I am aware of alternative approaches to dealing with nuisance parameters under frequentist inference but I think they are less common than standard maximum likelihood. (See this question on the difference between partial, profile and marginal likelihoods.)
|
I'm comparing a sample and checking whether it is distributed as some discrete distribution. However, I'm not entirely sure that Kolmogorov-Smirnov applies. Wikipedia seems to imply it does not. If it does not, how can I test the sample's distribution?
|
My question particularly applies to network reconstruction.
|
I am looking for a good book/tutorial to learn about survival analysis. I am also interested in references on doing survival analysis in R.
|
What is the equivalent command in R for the stcox command in Stata?
|
I'll use an example so that you can reproduce the results
# mortality
mort = ts(scan("http://www.stat.pitt.edu/stoffer/tsa2/data/cmort.dat"),start=1970, frequency=52)
# temperature
temp = ts(scan("http://www.stat.pitt.edu/stoffer/tsa2/data/temp.dat"), start=1970, frequency=52)
#pollutant particulates
part = ts(scan("http://www.stat.pitt.edu/stoffer/tsa2/data/part.dat"), start=1970, frequency=52)
temp = temp-mean(temp)
temp2 = temp^2
trend = time(mort)
Now, fit a model for mortality data
fit = lm(mort ~ trend + temp + temp2 + part, na.action=NULL)
What I want now is to reproduce the result of the AIC command
AIC(fit)
[1] 3332.282
According to R's help file for AIC, AIC = -2 * log.likelihood + 2 * npar.
If I'm correct I think that log.likelihood is given using the following formula:
n = length(mort)
RSS = anova(fit)[length(anova(fit)[,2]),2] # there must be better ways to get this, anyway
(log.likelihood <- -n/2*(log(2*pi)+log(RSS/n)+1))
[1] -1660.135
This is approximately equal to
logLik(fit)
'log Lik.' -1660.141 (df=6)
As far as I can tell, the number of parameters in the model is 5 (how can I get this number programmatically?). So AIC should be given by:
-2 * log.likelihood + 2 * 5
[1] 3330.271
Ooops, it seems like I should have used 6 instead of 5 as the number of parameters. What is wrong with those calculations?
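A possibly relevant, standard point about logLik for lm objects: R counts the estimated residual variance as an extra parameter, so the parameter count is the 5 regression coefficients (intercept, trend, temp, temp2, part) plus 1 for $\sigma^2$, i.e. 6. Programmatically:
length(coef(fit)) + 1                     # 5 coefficients plus the residual variance = 6
attr(logLik(fit), "df")                   # the df that AIC() actually uses
-2 * as.numeric(logLik(fit)) + 2 * attr(logLik(fit), "df")   # reproduces AIC(fit)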
|
In general inference, why are orthogonal parameters useful, and why is it worth trying to find a new parametrization that makes the parameters orthogonal?
I have seen some textbook examples, not so many, and would be interested in more concrete examples and/or motivation.
|
My stats knowledge is self-taught, but a lot of the material I read points to a dataset having mean 0 and a standard deviation of 1.
If that is the case then:
Why is mean 0 and SD 1 a nice property to have?
Why does a random variable drawn from this sample equal 0.5? The chance of drawing 0.001 is the same as that of drawing 0.5, so this should be a flat distribution...
When people talk about Z Scores what do they actually mean here?
|
My question is actually quite short, but I'll have to start by describing the context since I am not sure how to directly ask it.
Consider the following "game":
We have a segment of length n ("large segment") and m integers ("lengths"), all considerably smaller than n. For each of the m lengths we draw a random sub-segment of that length on the large segment. For example, if the large segment is of size 1000 (i.e. 1..1000) and we are given lengths 20, 10, 50, then a possible solution would be: 31..50, 35..44, 921..970 (sub-segments of lengths 20, 10 and 50 respectively).
Notes:
1. This is just a toy example. We usually have many more lengths so there are many overlaps and each position in the large segment is covered by multiple sub-segments.
2. Remember that the lengths are given; only their mapping to the large segment is random.
3. Drawing a sub-segment of length k is done by simply drawing a number from a uniform distribution over 1..n-k (a sub-segment of size k can start at position 1, 2, ... n-k).
Now, we conduct many simulations of the process and record the data. We finally examine, for each position, the distribution of the number of sub-segments covering that position. If we look at positions that are relatively far from the edges of the large segment, the distribution at each such position is normal, and all the distributions look the same.
The "problem" is that the positions at the ends do not look normal at all. This is not surprising, since, for example, if we are now drawing a sub-segment of length 10, the only way the very first position in the large segment will be covered is if we draw 1, whereas, for example, the 10th position will be covered if we draw 1,2,3,..10.
What I am trying to figure out is what kind of distribution we see in the "edge" positions (it's not normal, but I think it usually looks like a normal distribution with its tail cut off in one direction), and also how I can approximate this distribution's density function from my simulations. For the "center" positions, I just estimate the mean and standard deviation, and since I believe the distributions there are normal, I can use the normal density function. This also makes me wonder whether I really need to treat the positions in a categorical way ("near the edges" and "not near the edges"), or whether they are actually the same in some sense (some generalization of the normal distribution?).
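For reference, a minimal R sketch of the simulation as described above (the sizes and lengths are just the toy values from the example, repeated so that positions are covered by multiple sub-segments, cf. note 1):
set.seed(1)
n <- 1000                            # length of the large segment
lengths <- rep(c(20, 10, 50), 30)    # the given sub-segment lengths
n_sims <- 5000                       # number of simulated mappings

coverage <- matrix(0L, nrow = n_sims, ncol = n)
for (s in 1:n_sims) {
  for (k in lengths) {
    start <- sample(1:(n - k), 1)            # uniform start position, as described
    idx <- start:(start + k - 1)
    coverage[s, idx] <- coverage[s, idx] + 1L
  }
}

hist(coverage[, n %/% 2], main = "coverage at a central position")
hist(coverage[, 1],       main = "coverage at the first (edge) position")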
Thank you, and sorry again for the length of the post.
|
I have been reading Zuur, Ieno and Smith (2007) Analyzing ecological data, and on page 262 they try to explain how the nMDS (non-metric multidimensional scaling) algorithm works. As my background is in biology and not math or statistics per se, I'm having a hard time understanding a few points and would ask you if you could elaborate on them. I'm reproducing the entire algorithm list for clarity, and I hope I'm not breaking any laws by doing so.
Choose a measure of association and calculate the distance matrix D.
Specify m, the number of axes.
Construct a starting configuration E. This can be done using PCoA.
Regress the configuration on D: $D_{ij} = \alpha + \beta E_{ij} + \epsilon_{ij}$.
Measure the relationship between the m-dimensional configuration and the real distances by fitting a non-parametric (monotonic) regression curve in the Shepard diagram. A monotonic regression is constrained to increase. If a parametric regression line is used, we obtain PCoA.
The discrepancy from the fitted curve is called STRESS.
Using non-linear optimization routines, obtain a new estimation of E and repeat from step 4 until convergence.
Questions:
In 4., we regress the configuration on D. Where do we use the estimated parameters $\alpha$, $\beta$ and $\epsilon$? Are these used to measure the distance from the regression (Shepard diagram) in this new configuration?
In regard to number 7, can you talk a little about non-linear optimisation routines? My internet search came up pretty much empty in terms of a layman's explanation. I'm interested in knowing what this routine tries to achieve (in nMDS). And I guess the next question depends on knowing these routines: what represents convergence? What converges to where?
Can someone add "nmds" tag? I can't create new tags yet...
|
I am trying to assess the significance of the obtained MI matrix. The initial input was an array of 3000 genes by 45 timepoints. MI was computed, resulting in an array of 3600 by 3600. I am thus comparing my results to a shuffled matrix with the same dimensions. I permute the columns 100 times, and thus have 100 results for each element in the matrix. At this stage, shall I take the mean for each value in the cell and then the overall mean of the matrix MI values to estimate the threshold cutoff? Is taking the mean plus 3 SD sensible? Ideally, comparison of the probability density functions between my model and the random one should show a large discrepancy.
|
I'm trying to visualize a set of data that represents human body mass over time, taken from (usually) daily weighings.
Because body mass tends to fluctuate +/- 3 pounds based on hydration I would like to draw a strongly smoothed line graph to minimize the fluctuation.
Any help on what the equation would look like is much appreciated, or even just some names/links to send me in the right direction.
EDIT:
I need to code the visualization in Javascript, so I need understanding of the math involved, rather than a library that will do it for me.
|
I'm looking for a distribution to model a vector of $k$ binary random variables, $X_1, \ldots, X_k$. Suppose I have observed that $\sum_i X_i = n$. In this case I do not want to treat them as independent Bernoulli random variables. Instead, I would like something like the multinomial:
$P(X_1=x_1, \ldots, X_k=x_k) = f(x_1, \ldots, x_k; n, p_1, \ldots, p_k) = \frac{n!}{x_1! \cdots x_k!} \prod_{i=1}^k p_i^{x_i}$
but instead of the $x_i$ being nonnegative integers, I want them restricted to be either 0 or 1. I have been trying to see if the multivariate hypergeometric is appropriate, but I'm not sure.
Thanks in advance for any advice.
|
This question concerns an implementation of the topmoumoute natural gradient (tonga) algorithm as described in page 5 in the paper Le Roux et al 2007 http://research.microsoft.com/pubs/64644/tonga.pdf.
I understand that the basic idea is to augment stochastic gradient ascent with the covariance of the stochastic gradient estimates. Basically, the natural gradient approach multiplies a stochastic gradient with the inverse of the covariance of the gradient estimates in order to weight each component of the gradient by the variance of this component. We prefer moving into directions that show less variance during the stochastic gradient estimates:
$ng \propto C^{-1} g$
Since updating and inverting the covariance in an online optimisation setting is costly, the authors describe a ``cheap'' approximate update algorithm as described on page 5 as:
$C_t = \gamma \hat{C}_{t-1} + g_t g_t^T$, where $\hat{C}_{t-1}$ is the low-rank approximation at time step $t-1$. Writing $\hat{C}_{t} = X_t X_t^T$ with $X_t = [\sqrt{\gamma} X_{t-1}\ \ g_t]$, they use an iterative update rule for the Gram matrix $G_t = X_t^T X_t$:
$$G_t = \begin{pmatrix} \gamma G_{t-1} & \sqrt{\gamma}\, X^T_{t-1} g_t \\ \sqrt{\gamma}\, g^T_t X_{t-1} & g_t^T g_t \end{pmatrix}$$
They then state ``To keep a low-rank estimate of $\hat{C}_{t} = X_tX_t^T$, we can compute its eigendecomposition and keep only the first k eigenvectors. This can be made low cost using its relation to that of the Gram matrix
$G_t = X_t^T X_t$:
$G_t = VDV^T$
$C_t = (X_tVD^{-\frac12})D(X_tVD^{-\frac12})^T$''
Because it's cheaper than updating and decomposing G at every step, they then suggest that you should update X for several steps using
$C_{t+b} = X_{t+b}X_{t+b}^T$ with $X_{t+b} = \left[\gamma U_t, \ \gamma^{\frac{b-1}{2}} g_{t+1}, \ldots, \ \gamma^{-\frac12} g_{t+b-1}, \ \gamma^{\frac{t+b}{2}} g_{t+b}\right]$
I can see why you can get $C_t$ from $G_t$ using the eigendecomposition. But I'm unsure about their update rule for X. The authors don't explain where U is coming from. I assume (by notation) this is the first k eigenvectors of $C_t$, correct? But if so, why would the formula for $X_t$ be a good approximation for $X_t$? When I implement this update rule, $X_t$ does not seem to be a good approximation of the ''real'' $X_t$ (that you would get from $X_t = [\sqrt{\gamma} X_{t-1}\ \ g_t]$) at all. So why should I then be able to get a good approximation of $C^{-1}$ from this (I don't)? The authors are also not quite clear about how they keep $G_t$ from growing (the size of the matrix $G_t$ increases at each iterative update). I assume they replace $G_t$ by $\hat V\hat D\hat V^T$, with $\hat V$ and $\hat D$ being the first k components of the eigendecomposition?
So in summary:
I tried implementing this update rule, but I'm not getting good results and am unsure my implementation is correct
why should the update rule for $X_t$ be reasonable? Is $U_t$ really the first k eigenvectors of $C_t$? (Clearly I cannot let $X_t$ and $G_t$ grow for each observed gradient $g_t$)
This is far fetched, but has anyone implemented this low rank approximate update of the covariance before and has some code to share so I can compare it to my implementation?
As a simple example if I simulate having 15 gradients in matlab like:
X = rand(15, 5);
c = cov(X);
e = eig(c);   % note: with a single output, eig returns only the eigenvalues
% if the eigenvectors of C gave a good approximation of the original,
% this should give an approximation of c, no?:
c_r = e'*e; % no resemblance to c
So I'm quite certainly doing it wrong, I guess U might actually not be the eigenvectors of C, but then what is U?
Any suggestions or references would be most welcome!
(Sorry about the terrible layout; it looks like only a subset of LaTeX is supported, there are no arrays for matrices, and embedded LaTeX doesn't look particularly good -- all the formulas are much more readable on page 5 of the referenced paper :)
Also, is this considered off-topic here? It's really more related to optimisation and machine learning...)
|
I am working on disease infection data, and I am puzzled about whether to handle the data as "categorical" or "continuous".
"Infection Count"
the number of infection cases found in a specific period of time, the count
is generated from categorical data (i.e. no. of patient tagged as "infected")
"Patient Bed Days"
sum of total number of day stay in the ward by all patients in that ward, again, the count is generated from categorical data (i.e. no. of patient tagged as "staying in that particular ward")
"infection per patient bed days"
"infection count" / "patient bed days"
both were originally count data, but now becomes a rate
Question:
Can I use Chi-Square here to assess whether the difference in "infections per patient bed days" is statistically significant or not?
Updates
I have found that I can compare the incidence rate (or call it the infection rate) by doing something like an "incidence rate difference" (IRD) or an "incidence rate ratio" (IRR). (I found it from here.)
What is the difference between IRD and t-test?
Is there any statistical test complementary for IRR?
|
I want to represent a variable as a number between 0 and 1. The variable is a non-negative integer with no inherent bound. I map 0 to 0 but what can I map to 1 or numbers between 0 and 1?
I could use the history of that variable to provide the limits. This would mean I have to restate old statistics if the maximum increases. Do I have to do this or are there other tricks I should know about?
|
I have computed percentage change from time1 to time2 for several variables.
Can I predict the percentage change in earnings from the percentage change in quantity produced and the percentage change in price?
When I ran a model with actual data and dummy coded time (time1=1, time2=0), the dummy variable was not statistically significant. But there are large changes.
|
In many papers I see data representing a rate of success (i.e a number between 0 and 1) modeled as a gaussian. This is clearly a sin (the range of variation of the gaussian is all of R),
but how bad is that sin? Under what assumptions would you say it is tolerable?
|
Joshua Epstein wrote a paper titled "Why Model?" available at http://www.santafe.edu/media/workingpapers/08-09-040.pdf in which gives 16 reasons:
Explain (very distinct from predict)
Guide data collection
Illuminate core dynamics
Suggest dynamical analogies
Discover new questions
Promote a scientific habit of mind
Bound (bracket) outcomes to plausible ranges
Illuminate core uncertainties.
Offer crisis options in near-real time
Demonstrate tradeoffs / suggest efficiencies
Challenge the robustness of prevailing theory through perturbations
Expose prevailing wisdom as incompatible with available data
Train practitioners
Discipline the policy dialogue
Educate the general public
Reveal the apparently simple (complex) to be complex (simple)
(Epstein elaborates on many of the reasons in more detail in his paper.)
I would like to ask the community:
are there are additional reasons that Epstein did not list?
is there a more elegant way to conceptualize (a different grouping perhaps) these reasons?
are any of Epstein's reasons flawed or incomplete?
are there clearer elaborations of these reasons?
|
I have data for about 1 year: 100 observations, with multiple observations per subject. Transactions occur on a weekly basis, with 6-12 subjects per week; there is no order to this. There is a policy change in the latter half of the year, and I want to model the change in the dependent variable due to the policy change as a dummy variable: time1=0, time2=1.
Is this a case for fixed effects estimation?
The number of weeks per subject varies a lot, and the number of weeks in time1 is greater than in time2. I computed means for time1 and time2 and the percent change (a large change in the dependent variable), and estimated a linear model:
pay = X1 + X2 + Time (dummy). The dummy variable is not statistically significant.
Any suggestions as to how to model this?
Can I treat it as panel data?
|
I have cross classified data in a 2 x 2 x 6 table. Let's call the dimensions response, A and B. I fit a logistic regression to the data with the model response ~ A * B. An analysis of deviance of that model says that both terms and their interaction are significant.
However, looking at the proportions of the data, it looks like only 2 or so levels of B are responsible for these significant effects. I would like to test to see which levels are the culprits. Right now, my approach is to perform 6 chi-squared tests on 2 x 2 tables of response ~ A, and then to adjust the p-values from those tests for multiple comparisons (using the Holm adjustment).
My question is whether there is a better approach to this problem. Is there a more principled modeling approach, or multiple chi-squared test comparison approach?
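For concreteness, a minimal sketch of the current approach in R (here dat is assumed to be a data frame with factor columns response, A and B; the name is mine):
pvals <- sapply(levels(dat$B), function(b)
  chisq.test(table(dat$response[dat$B == b], dat$A[dat$B == b]))$p.value)
p.adjust(pvals, method = "holm")   # Holm-adjusted p-values, one per level of B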
|
I am working with a large amount of time series. These time series are basically network measurements coming every 10 minutes, and some of them are periodic (i.e. the bandwidth), while some other aren't (i.e. the amount of routing traffic).
I would like a simple algorithm for doing an online "outlier detection". Basically, I want to keep in memory (or on disk) the whole historical data for each time series, and I want to detect any outlier in a live scenario (each time a new sample is captured). What is the best way to achieve these results?
I'm currently using a moving average in order to remove some noise, but then what next? Simple things like standard deviation, MAD, ... against the whole data set don't work well (I can't assume the time series are stationary), and I would like something more "accurate", ideally a black box like:
double outlier_detection(double* vector, double value);
where vector is the array of doubles containing the historical data, and the return value is the anomaly score for the new sample "value".
|
The wiki discusses the problems that arise when multicollinearity is an issue in linear regression. The basic problem is multicollinearity results in unstable parameter estimates which makes it very difficult to assess the effect of independent variables on dependent variables.
I understand the technical reasons behind the problems (may not be able to invert $X' X$, ill-conditioned $X' X$ etc) but I am searching for a more intuitive (perhaps geometric?) explanation for this issue.
Is there a geometric or perhaps some other form of easily understandable explanation as to why multicollinearity is problematic in the context of linear regression?
|
R allows us to put code to run in the beginning/end of a session.
What codes would you suggest putting there?
I know of three interesting examples (although I don't have "how to do them" under my fingers here):
Saving the session history when closing R.
Running a fortune() at the beginning of an R session.
I was thinking of having an automated saving of the workspace, but I haven't yet solved the issue of managing space (so that there would always be X amount of space used for that backup).
Any more ideas? (or how you implement the above ideas)
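As a concrete starting point for the first two ideas, here is a minimal .Rprofile sketch (the history path is just an illustration):
.First <- function() {
  # print a fortune() at the beginning of each interactive session
  if (interactive() && requireNamespace("fortunes", quietly = TRUE))
    print(fortunes::fortune())
}
.Last <- function() {
  # save the session history when closing R
  if (interactive())
    try(savehistory("~/.Rhistory"))
}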
p.s: I am not sure if to put this here or on stackoverflow. But I feel the people here are the right ones to ask.
|
When solving business problems using data, it's common that at least one key assumption that under-pins classical statistics is invalid. Most of the time, no one bothers to check those assumptions so you never actually know.
For instance, that so many of the common web metrics are "long-tailed" (relative to the normal distribution) is, by now, so well documented that we take it for granted. Another example: online communities--even in communities with thousands of members, it's well-documented that by far the largest share of contribution to/participation in many of these communities is attributable to a minuscule group of 'super-contributors.' (E.g., a few months ago, just after the SO API was made available in beta, a StackOverflow member published a brief analysis from data he collected through the API; his conclusion--less than one percent of the SO members accounted for most of the activity on SO (presumably asking questions, and answering them), another 1-2% accounted for the rest, and the overwhelming majority of the members do nothing).
Distributions of that sort--again more often the rule rather than the exception--are often best modeled with a power law density function. For these type of distributions, even the central limit theorem is problematic to apply.
So given the abundance of populations like this of interest to analysts, and given that classical models perform demonstrably poorly on these data, and given that robust and resistant methods have been around for a while (at least 20 years, I believe)--why are they not used more often? (I am also wondering why I don't use them more often, but that's not really a question for CrossValidated.)
Yes I know that there are textbook chapters devoted entirely to robust statistics and I know there are (a few) R Packages (robustbase is the one I am familiar with and use), etc.
And yet given the obvious advantages of these techniques, they are often clearly the better tools for the job--why are they not used much more often? Shouldn't we expect to see robust (and resistant) statistics used far more often (perhaps even presumptively) compared with the classical analogs?
The only substantive (i.e., technical) explanation I have heard is that robust techniques (likewise for resistant methods) lack the power/sensitivity of classical techniques. I don't know if this is indeed true in some cases, but I do know it is not true in many cases.
A final word of preemption: yes, I know this question does not have a single demonstrably correct answer; very few questions on this site do. Moreover, this question is a genuine inquiry; it's not a pretext to advance a point of view--I don't have a point of view here, just a question for which I am hoping for some insightful answers.
|
I'm looking to check my logic here.
Say you measure a quantity in group A, and find the mean is 2 and your 95% confidence interval ranges from 1 to 3. Then you measure the same quantity in group B and find a mean of 4 with a 95% confidence interval that ranges from 3.5 to 4.5. Assuming that A & B are independent, what is the 95% confidence interval for the difference between the groups? Presumably you can compute this using standard t-statistics, but I'd like to know if it's also possible to compute an estimate based on the CI's alone.
I reason that the lower bound of the CI of the difference should be the minimum credible difference between A & B; that is, the lower bound of the interval for B (3.5) minus the upper bound of the interval for A (3), which yields a lower bound for the difference of 0.5. Similarly, the upper bound of the CI of the difference should be the maximum credible difference between A & B; that is, the upper bound of the interval for B (4.5) minus the lower bound of the interval for A (1), which yields an upper bound for the difference of 3.5. This reasoning thus yields a confidence interval for the difference that ranges from 0.5 to 3.5.
Does that make sense, or is this a case where logic and statistics diverge?
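For comparison, a hedged sketch of the textbook alternative, assuming both intervals are symmetric large-sample (normal) intervals so that each half-width is roughly 1.96 standard errors:
se_A <- (3 - 1) / (2 * 1.96)           # recover the approximate standard error of A's mean
se_B <- (4.5 - 3.5) / (2 * 1.96)       # and of B's mean
se_diff <- sqrt(se_A^2 + se_B^2)       # SEs add in quadrature for independent groups
(4 - 2) + c(-1, 1) * 1.96 * se_diff    # approximate 95% CI for the difference: about (0.9, 3.1)
Under those assumptions the interval is noticeably narrower than the 0.5 to 3.5 obtained by subtracting endpoints.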
|
In my area of research, a popular way of displaying data is to use a combination of a bar chart with "handle-bars". For example,
The "handle-bars" alternate between standard errors and standard deviations depending on the author. Typically, the sample sizes for each "bar" are fairly small - around six.
These plots seem to be particularly popular in biological sciences - see the first few papers of BMC Biology, vol 3 for examples.
So how would you present this data?
Why I dislike these plots
Personally I don't like these plots.
When the sample size is small, why not just display the individual data points?
Is it the sd or the se that is being displayed? No-one agrees which to use.
Why use bars at all? The data don't (usually) start at 0, but a first pass at the graph suggests they do.
The graphs don't give an idea about range or sample size of the data.
R script
This is the R code I used to generate the plot. That way you can (if you want) use the same data.
#Generate the data
set.seed(1)
names = c("A1", "A2", "A3", "B1", "B2", "B3", "C1", "C2", "C3")
prevs = c(38, 37, 31, 31, 29, 26, 40, 32, 39)
n=6; se = numeric(length(prevs))
for(i in 1:length(prevs))
se[i] = sd(rnorm(n, prevs[i], 15))/sqrt(n)  # standard error for group i
#Basic plot
par(fin=c(6,6), pin=c(6,6), mai=c(0.8,1.0,0.0,0.125), cex.axis=0.8)
barplot(prevs,space=c(0,0,0,3,0,0, 3,0,0), names.arg=NULL, horiz=FALSE,
axes=FALSE, ylab="Percent", col=c(2,3,4), width=5, ylim=range(0,50))
#Add in the CIs
xx = c(2.5, 7.5, 12.5, 32.5, 37.5, 42.5, 62.5, 67.5, 72.5)
for (i in 1:length(prevs)) {
lines(rep(xx[i], 2), c(prevs[i], prevs[i]+se[i]))
lines(c(xx[i]+1/2, xx[i]-1/2), rep(prevs[i]+se[i], 2))
}
#Add the axis
axis(2, tick=TRUE, xaxp=c(0, 50, 5))
axis(1, at=xx+0.1, labels=names, font=1,
tck=0, tcl=0, las=1, padj=0, col=0, cex=0.1)
|
I know of normality tests, but how do I test for "Poisson-ness"?
I have a sample of ~1000 non-negative integers, which I suspect are taken from a Poisson distribution, and I would like to test that.
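One hedged sketch of how such a check might look in R, assuming x holds the ~1000 counts (the vcd package's goodfit fits a Poisson and reports a goodness-of-fit test; treat this as a pointer rather than the definitive method):
library(vcd)
gf <- goodfit(x, type = "poisson")   # fit a Poisson by ML and compare observed vs expected counts
summary(gf)                          # goodness-of-fit test
plot(gf)                             # rootogram of observed vs fitted frequencies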
|
Could you recommend an introductory reference to index decomposition analysis, including
different methods (e.g. methods linked to the Laspeyres index and methods linked to the Divisia index)
properties of decomposition methods which can be used to compare the different methods
implementations of methods, e.g. in R?
|
Back in April, I attended a talk at the UMD (University of Maryland) Math Department Statistics group seminar series called "To Explain or To Predict?". The talk was given by Prof. Galit Shmueli who teaches at UMD's Smith Business School. Her talk was based on research she did for a paper titled "Predictive vs. Explanatory Modeling in IS Research", and a follow up working paper titled "To Explain or To Predict?".
Dr. Shmueli's argument is that the terms predictive and explanatory in a statistical modeling context have become conflated, and that the statistical literature lacks a thorough discussion of the differences. In the paper, she contrasts both and talks about their practical implications. I encourage you to read the papers.
The questions I'd like to pose to the practitioner community are:
How do you define a predictive exercise vs. an explanatory/descriptive one? It would be useful if you could talk about the specific application.
Have you ever fallen into the trap of using one when meaning to use the other? I certainly have. How do you know which one to use?
|
Many income surveys (especially older ones) truncate key variables, such as household income, at some arbitrary point, to protect confidentiality. This point changes over time. This reduces inequality measures associated with the variable. I am interested in fitting a Pareto tail to the truncated distribution, replacing truncated values with imputed values to mimic the actual distribution. What's the best way to do this?
|
The question in short: What methods can be used to quantify distributional relationships between data when the distribution is unknown?
Now the longer story: I have a list of distributions and would like to rank them based on their similarity to a given base-line distribution. Correlation jumps into my mind in such a case and the Spearman correlation coefficient in particular given that it does not make any distributional assumptions. However, I would actually need to create the coefficient based on binned data (as this is done for histograms or densities) rather than the raw data, and I don't know if this is actually a valid step or if I am just manufacturing data.
In other words, if I have a 10000-point data set for each distribution, I would first create a binned distribution for each, where each bin is of equal width and contains the frequency of how many points fall in that bin, just the way this is done for density plots or histograms. Each bin is on a discrete scale. The data are actually computer screen coordinates, and values are between 1 and 1024. Each pixel position could represent a bin (but larger bins are possible, e.g. every 5 pixels being one bin). I would then compare the sequences of bins with each other rather than the raw data. The data set would look like this.
bins: 1 2 3 4 .... 1024
dist#base: 1 2 2 3 ..... 3
dist#1: 1 4 5 5 3
dist#2: 2 2 3 5 6
...
dist#1000: 1 2 4 6 6
Does this make sense? Are there better ways of doing this? Are there better statistical methods? The goal of all this is, first, to test how close the distributions from measure A are to those from measure B, and second, to see whether I can predict one if the other is missing.
|
I want to perform a two-sample t-test to test for a difference between two independent samples, where each sample abides by the assumptions of the t-test (each distribution can be assumed to be independent and identically distributed as normal with equal variance). The only complication beyond the basic two-sample t-test is that the data are weighted. I am using weighted means and standard deviations, but weighted Ns will artificially inflate the size of the sample and hence bias the result. Is it simply a case of replacing the weighted Ns with the unweighted Ns?
|
This post is the continuation of another post related to a generic method for outlier detection in time series.
Basically, at this point I'm interested in a robust way to discover the periodicity/seasonality of a generic time series affected by a lot of noise.
From a developer point of view, I would like a simple interface such as:
unsigned int discover_period(vector<double> v);
Where v is the array containing the samples, and the return value is the period of the signal.
The main point is that, again, I can't make any assumption regarding the analyzed signal.
I already tried an approach based on the signal's autocorrelation (detecting the peaks of a correlogram), but it's not as robust as I would like.
|
I have a list of sold items by size. Shoes in this case
Size Qty
35 2
36 1
37 4
38 4
39 32
40 17
41 23
42 57
43 95
44 90
45 98
46 33
47 16
48 4
total: 476
I have to tell the owner how many of each size to buy. The problem is, I can't just tell him:
"You should buy 95 shoes of size 43 for every one of size 36..."
The usual practice is to buy the whole size curve and buy extras for the most selling sizes.
This is about a year worth of data.
How should I present this information in an easy to understand way?
What I want to present is a general rule. Something like "for every size curve, you should buy x additional shoes of size x".
The idea would be to later apply this approach to other clothing items.
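One simple way to turn the table into such a rule is to express each size as a share of total sales and scale it to whatever order quantity he has in mind; a minimal R sketch with the numbers above (the order of 600 pairs is only an example):
size  <- 35:48
qty   <- c(2, 1, 4, 4, 32, 17, 23, 57, 95, 90, 98, 33, 16, 4)
share <- qty / sum(qty)          # proportion of sales in each size
round(100 * share, 1)            # percent of sales by size
round(600 * share)               # suggested split of an order of 600 pairs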
|
I'm looking for some robust techniques to remove outliers and errors (whatever the cause) from financial time-series data (i.e. tickdata).
Tick-by-tick financial time-series data is very messy. It contains huge (time) gaps when the exchange is closed, and makes huge jumps when the exchange opens again. When the exchange is open, all kinds of factors introduce trades at price levels that are wrong (they did not occur) and/or not representative of the market (a spike because of an incorrectly entered bid or ask price, for example). This paper by tickdata.com (PDF) does a good job of outlining the problem, but offers few concrete solutions.
Most papers I can find online that mention this problem either ignore it (the tickdata is assumed filtered) or include the filtering as part of some huge trading model which hides any useful filtering steps.
Is anybody aware of more in-depth work in this area?
Update: this question seems similar on the surface, but:
Financial time series is (at least at the tick level) non-periodic.
The opening effect is a big issue because you can't simply use the last day's data as initialisation even though you'd really like to (because otherwise you have nothing). External events might cause the new day's opening to differ dramatically both in absolute level, and in volatility from the previous day.
Wildly irregular frequency of incoming data. Near open and close of the day the amount of datapoints/second can be 10 times higher than the average during the day. The other question deals with regularly sampled data.
The "outliers" in financial data exhibit some specific patterns that could be detected with specific techniques not applicable in other domains and I'm -in part- looking for those specific techniques.
In more extreme cases (e.g. the flash crash) the outliers might amount to more than 75% of the data over longer intervals (> 10 minutes). In addition, the (high) frequency of incoming data contains some information about the outlier aspect of the situation.
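For illustration only, here is a generic rolling-median/MAD spike filter in R (using the zoo package); the window width and threshold are arbitrary choices, and this is deliberately not one of the tick-specific techniques the question is asking about:
library(zoo)
# x: vector of tick prices (hypothetical input)
k <- 51                                           # odd window width, arbitrary choice
med <- rollapply(x, k, median, fill = NA)
dev <- rollapply(x, k, mad, fill = NA)
outlier <- !is.na(med) & abs(x - med) > 5 * dev   # 5 is an arbitrary threshold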
|
With data from two centres, I want to account for potential heterogeneity or confounding between the centres. So the analysis will initially be stratified by clinical centre, and a chi-square test performed with one degree of freedom. Is this appropriate with just two centres? Or is there an alternative?
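One common way to carry out exactly this kind of stratified 1-df test is the Cochran-Mantel-Haenszel test; a minimal R sketch with entirely hypothetical counts:
# outcome x group x centre, hypothetical counts
tab <- array(c(20, 30, 15, 35,    # centre 1
               25, 25, 10, 40),   # centre 2
             dim = c(2, 2, 2))
mantelhaen.test(tab)              # CMH test on 1 df, stratified by centre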
|
I am using a control chart to try to work on some infection data, and will raise an alert if the infection is considered "out of control".
Problems arise when I come to a data set where most of the time points have zero infections, with only a few occasions of one or two infections, but these already exceed the control limit of the chart and raise an alert.
How should I work on the control chart if the data set is having very few positive infection counts?
|
When would you tend to use ROC curves over some other tests to determine the predictive ability of some measurement on an outcome?
When dealing with discrete outcomes (alive/dead, present/absent), what makes ROC curves more or less powerful than something like a chi-square?
|
How to find a non-trivial upper bound on $E[\exp(Z^2)]$ when $Z \sim {\rm Bin}(n, n^{-\beta})$ with $\beta \in (0,1)$? A trivial bound is obtained by substituting $Z$ with $n$.
A background on this question. In the paper by Baraud, 2002 -- Non-asymptotic minimax rates of testing in signal detection, if one is to substitute the model in Eq. (1), by a random effects model, then the above quantity appears in the computation of a lower bound.
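For reference, the trivial bound mentioned above follows from $Z \le n$ almost surely:
$$E[\exp(Z^2)] \;=\; \sum_{k=0}^{n} e^{k^2} \binom{n}{k}\, n^{-\beta k}\, (1 - n^{-\beta})^{n-k} \;\le\; e^{n^2}.$$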
|
I was wondering if there are statistical model "cheat sheets" that list any of the following information:
when to use the model
when not to use the model
required and optional inputs
expected outputs
has the model been tested in different fields (policy, bio, engineering, manufacturing, etc)?
is it accepted in practice or research?
expected variation / accuracy / precision
caveats
scalability
deprecated model, avoid or don't use
etc ..
I've seen hierarchies before on various websites, and some simplistic model cheat sheets in various textbooks; however, it would be nice if there were a larger one that encompasses various types of models based on different types of analysis and theories.
|
I'm trying to compute item-item similarity using Jaccard (specifically Tanimoto) on a large list of data in the format
(userid, itemid)
An item is considered rated if I have a userid-itemid pair. I have about 800k users, 7,900 items, and 3.57 million 'ratings'. I've restricted my data to users who have rated at least n items (usually 10). However, I'm wondering if I should place an upper limit on the number of items rated. When users rate 1000 or more items, each user generates 999,000 item pairs to use in my calculation, assuming the calculation
n! / (n-r)!
Adding this much input data slows the calculation process down tremendously, even when the workload is distributed (using Hadoop). I'm thinking that the users who rate many, many items are not my core users and might be diluting my similarity calculations.
My gut tells me to limit the data to customers who have rated between 10 and 150-200 items, but I'm not sure if there is a better way to statistically determine these boundaries.
Here are some more details about my source data's distribution. Please feel free to enlighten me on any statistical terms that I might have butchered!
The distribution of my users' itemCounts:
(Histogram of items rated per user: http://www.neilkodner.com/images/littlesnapper/itemsRated.png)
> summary(raw)
itemsRated
Min. : 1.000
1st Qu.: 1.000
Median : 1.000
Mean : 4.466
3rd Qu.: 3.000
Max. :2069.000
> sd(raw)
itemsRated
16.46169
If I limit my data to users who have rated at least 10 items:
> above10<-raw[raw$itemsRated>=10,]
> summary(above10)
Min. 1st Qu. Median Mean 3rd Qu. Max.
10.00 13.00 19.00 34.04 35.00 2069.00
> sd(above10)
[1] 48.64679
> length(above10)
[1] 64764
If I further limit my data to users who have rated between 10 and 150 items:
> above10less150<-above10[above10<=150]
> summary(above10less150)
Min. 1st Qu. Median Mean 3rd Qu. Max.
10.00 13.00 19.00 28.17 33.00 150.00
> sd(above10less150)
[1] 24.32098
> length(above10less150)
[1] 63080
Edit: I don't think this is an issue of outliers as much as the data being positively skewed.
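One data-driven way to see where a candidate upper cut-off would fall, using the above10 object from the output above (this is just a descriptive check against the skewed tail, not a principled outlier rule):
quantile(above10, probs = c(0.90, 0.95, 0.99))   # itemsRated at the 90th/95th/99th percentiles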
|
I have a colleague who calculates correlations in which one set of scores for a subject (e.g. 100 scores) is correlated with another set of scores for that same subject. The resulting correlation reflects the degree to which those sets of scores are associated for that subject. He needs to do this for N subjects. Consider the following dataset:
ncol <- 100
nrow <- 100
x <- matrix(rnorm(ncol*nrow),nrow,ncol)
y <- matrix(rnorm(ncol*nrow),nrow,ncol)
The correct output vector of correlations would be:
diag(cor(t(x),t(y)))
Is there a faster way to do this without using a multicore package in R?
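One vectorized alternative (a sketch; it assumes x and y are plain numeric matrices with no missing values) avoids forming the full 100 x 100 correlation matrix that diag(cor(t(x), t(y))) computes:
row_cors <- function(x, y) {
  xc <- x - rowMeans(x)                      # center each row of x
  yc <- y - rowMeans(y)                      # center each row of y
  rowSums(xc * yc) / sqrt(rowSums(xc^2) * rowSums(yc^2))
}
all(abs(row_cors(x, y) - diag(cor(t(x), t(y)))) < 1e-12)   # should be TRUE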
|
Consider the following sequential, adaptive data generating process for $Y_1$, $Y_2$, $Y_3$. (By sequential I mean that we generate $Y_1$, $Y_2$, $Y_3$ in sequence and by adaptive I mean that $Y_3$ is generated depending on the observed values of $Y_1$ and $Y_2$.):
$Y_1 = X_1\ \beta + \epsilon_1$
$Y_2 = X_2\ \beta + \epsilon_2$
$Y_3 = X_3\ \beta + \epsilon_3$
$$X_3 = \begin{cases} X_{31} & \mbox{if } Y_1 Y_2 > 0 \\ X_{32} & \mbox{if } Y_1 Y_2 \le 0 \end{cases}$$
where,
$X_1$, $X_2$, $X_{31}$ and $X_{32}$ are all 1 x 2 vectors.
$\beta$ is a 2 x 1 vector
$\epsilon_i \sim N(0,\sigma^2)$ for $i$ = 1, 2, 3
Suppose we observe the following sequence: {$Y_1 = y_1,\ Y_2 = y_2,\ X_3 = X_{31},\ Y_3 = y_3$} and wish to estimate the parameters $\beta$ and $\sigma$.
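For concreteness, a minimal simulation sketch of this data generating process in R (all numeric values are hypothetical):
beta  <- c(1, -0.5); sigma <- 1
X1  <- c(1, 0.3);  X2  <- c(1, -0.2)
X31 <- c(1, 0.8);  X32 <- c(1, -0.8)
y1 <- sum(X1 * beta) + rnorm(1, 0, sigma)
y2 <- sum(X2 * beta) + rnorm(1, 0, sigma)
X3 <- if (y1 * y2 > 0) X31 else X32          # adaptive choice of the third design point
y3 <- sum(X3 * beta) + rnorm(1, 0, sigma)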
In order to write down the likelihood function note that we have four random variables: $Y_1$, $Y_2$, $X_3$ and $Y_3$. Therefore, the joint density of $Y_1$, $Y_2$, $X_3$ and $Y_3$ is given by:
$$f(Y_1, Y_2, X_3, Y_3 \mid -) = f(Y_1 \mid -)\ f(Y_2 \mid -)\ \big[\ f(Y_3 \mid X_{31}, -)\ P(X_3 = X_{31} \mid -) + f(Y_3 \mid X_{32}, -)\ P(X_3 = X_{32} \mid -)\ \big]$$
(Note: I am suppressing the dependency of the density on $\beta$ and $\sigma$.)
The likelihood conditions on the observed data, and our sequence is such that $y_1 y_2 > 0$. Therefore, we have:
$L(\beta,\ \sigma | X_1,\ X_2,\ X_{31}, y_1, y_2, y_3) = f(Y_1|-)\ f(Y_2|-)\ f(Y_3|X_{31},-)\ P(X_3=X_{31}) $
Is the above the correct likelihood function for this data generating process?
|
Suppose I have a table of counts that looks like this
A B C
Success 1261 230 3514
Failure 381 161 4012
I have a hypothesis that there is some probability $p$ such that $P(Success_A) = p^i$, $P(Success_B) = p^j$ and $P(Success_C) = p^k$.
Is there some way to produce estimates for $p$, $i$, $j$ and $k$? The idea I have is to iteratively try values for $p$ between 0 and 1, and values for $i$, $j$ and $k$ between 1 and 5. Given the column totals, I could produce expected values, then calculate $\chi^2$ or $G^2$.
This would produce a best fit, but it wouldn't give any confidence interval for any of the values. It's also not particularly computationally efficient.
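One way to get point estimates without the grid search is to maximize the likelihood directly with optim; a minimal sketch in R (it fixes $i = 1$, since with all four parameters free the model is not identified, because rescaling $p$ and the exponents gives the same probabilities, and it treats the exponents as continuous rather than integers):
succ <- c(A = 1261, B = 230, C = 3514)
fail <- c(A = 381,  B = 161, C = 4012)
negll <- function(par) {
  p  <- plogis(par[1])             # maps the first parameter into (0, 1)
  e  <- c(1, exp(par[2:3]))        # exponents: i fixed at 1, j and k kept positive
  pr <- p^e                        # per-column success probabilities
  -sum(succ * log(pr) + fail * log(1 - pr))
}
fit  <- optim(c(0, 0, 0), negll, hessian = TRUE)
phat <- plogis(fit$par[1])         # estimate of p
ehat <- c(1, exp(fit$par[2:3]))    # estimates of i, j, k
The inverse of the Hessian gives approximate standard errors on the transformed scale, which could then be converted by the delta method.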
As a side question, if I wanted to test the goodness of fit of a particular set of values for $i$, $j$ and $k$ (specifically 1, 2, and 3), once I've calculated $\chi^2$ or $G^2$, I'd want to calculate significance on the $\chi^2$ distribution with 1 degree of freedom, correct? This isn't a normal contingency table since relationship of each column to the others is fixed to a single value. Given $p$, $i$, $j$ and $k$, filling in a single value in a cell fixes what the values of the other cells must be.
|
The following question is one of those holy grails for me for some time now, I hope someone might be able to offer a good advice.
I wish to perform a non-parametric repeated measures multiway anova using R.
I have been doing some online searching and reading for some time, and so far was able to find solutions for only some of the cases: the Friedman test for one-way nonparametric repeated-measures ANOVA, ordinal regression with the {car} Anova function for multi-way nonparametric ANOVA, and so on. These partial solutions are NOT what I am looking for in this question thread. I have summarized my findings so far in a post I published some time ago (titled: Repeated measures ANOVA with R (functions and tutorials)), in case it would help anyone.
If what I read online is true, this task might be achieved using a mixed Ordinal Regression model (a.k.a: Proportional Odds Model).
I found two packages that seem relevant, but couldn't find any vignette on the subject:
http://cran.r-project.org/web/packages/repolr/
http://cran.r-project.org/web/packages/ordinal/
So being new to the subject matter, I was hoping for some directions from people here.
Are there any tutorials/suggested-reading on the subject? Even better, can someone suggest a simple example code for how to run and analyse this in R (e.g: "non-parametric repeated measures multiway anova") ?
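As a starting point only (I have not verified this against a concrete design, and the variable names are hypothetical), the ordinal package's clmm fits a mixed-effects proportional-odds model with lme4-style random-effects syntax:
library(ordinal)
# dat: long-format data with response 'score', within-subject factors 'A' and 'B',
# and a subject identifier 'id' (all hypothetical names)
dat$score <- ordered(dat$score)                 # response must be an ordered factor
fit <- clmm(score ~ A * B + (1 | id), data = dat)
summary(fit)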
|
I am using Singular Value Decomposition as a dimensionality reduction technique.
Given N vectors of dimension D, the idea is to represent the features in a transformed space of uncorrelated dimensions, which condenses most of the information of the data in the eigenvectors of this space in a decreasing order of importance.
Now I am trying to apply this procedure to time-series data. The problem is that not all the sequences have the same length, so I can't really build the num-by-dim matrix and apply SVD. My first thought was to pad the matrix with zeros, by building a num-by-maxDim matrix and filling the empty spaces with zeros, but I'm not sure that is the correct way.
My question is: how do you apply the SVD approach to dimensionality reduction to time series of different lengths? Alternatively, are there other eigenspace-representation methods usually used with time series?
Below is a piece of MATLAB code to illustrate the idea:
X = randn(100,4); % data matrix of size N-by-dim
X0 = bsxfun(@minus, X, mean(X)); % center the data by subtracting the column means
[U S V] = svd(X0,0); % SVD
variances = diag(S).^2 / (size(X,1)-1); % variances along eigenvectors
KEEP = 2; % number of dimensions to keep
newX = U(:,1:KEEP)*S(1:KEEP,1:KEEP); % scores: data projected onto the first KEEP directions
(I am coding mostly in MATLAB, but I'm comfortable enough to read R/Python/.. as well)
|
If I have a (financial) time series, and I sample it with two different periods, at 5 and at 60 minute intervals, can I create an exponential moving average on the 5 minute sampled data which is the same as an exponential moving average on the 60 minute sampled data?
Something like this:
e1 = EMA(a1) applied on sampled_data(60 min)
e2 = EMA(a2) applied on sampled_data(5 min)
a1 and a2 are the smoothing factors of the exponential moving average (the period)
Can I compute the a2 value for any a1 value, such that e1 = e2?
When I say that e1 = e2 I mean that if I graph the values of the EMA computed from 5 min data on top of the 60 min data chart and EMA, the two EMAs should be superposed. This means that in between two data points for EMA(60 min) there will be 60/5=12 data points for EMA(5 min).
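One approximate relationship, if the goal is only to match the decay rates (it will not make the two series coincide exactly, since the 5-minute EMA also responds to the intra-hour prices that the 60-minute EMA never sees): writing $\alpha$ for the weight each EMA gives to the newest observation,
$$(1 - \alpha_{5})^{12} = 1 - \alpha_{60} \quad\Rightarrow\quad \alpha_{5} = 1 - (1 - \alpha_{60})^{1/12},$$
since twelve 5-minute updates span one 60-minute update.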
|
I have a set of data which consists of many different types (measurable, categorical)
For example:
name measurable_attribute_1 categorical_attribute_1 measurable_attribute_2 categorical_attribute_2 ...
The number of attributes may grow quite quickly during my study: in my spreadsheet I can add new attributes as easily as new entries. So far I have about a hundred entries in this classification scheme and about 70 attributes, and I am at the beginning of my data collection.
I would like to perform statistical analysis of this data set. For example, what are the common features of the entries that have a similar categorical_attribute and this range of values of measurable_attribute.
Well, I would like to generate relationships between attributes in order to create training images.
However, I am not sure how to organize the data prior to classification, or whether I should organize it at all (referring to this question).
Also, I can hardly gather entries into classes.
I do not want to introduce any bias obviously.
I am also quite new to statistical analysis (but eager to learn).
|
[0,1,0,2,4,1,0,1,5,1,4,2,1,3,1,1,1,1,0,1,1,0,2,0,2,0,0,1,0,1,2,2,1,2,4,1,4,1,0,0,4,1,0,1,0,1,1,2,1,1,0,0]
What is the best way to convince myself that these data are correlated? that no univariate discrete distribution would approximate them well? that a time series model is necessary to better estimate the future distribution of counts?
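Two quick checks in R, using the counts above (the Ljung-Box test is only a rough screen for serial correlation, not proof that a time-series model is needed):
x <- c(0,1,0,2,4,1,0,1,5,1,4,2,1,3,1,1,1,1,0,1,1,0,2,0,2,0,0,1,0,1,
       2,2,1,2,4,1,4,1,0,0,4,1,0,1,0,1,1,2,1,1,0,0)
acf(x)                                      # visual check of the autocorrelations
Box.test(x, lag = 10, type = "Ljung-Box")   # portmanteau test for serial correlation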
|
(This is part-2 of my long question, you can have a look at part-1 here)
I am going to do a quasi-experiment, measuring the baseline of a sample (actually not quite a sample, but a ward with a high patient turnover rate); then we do an intervention and measure the variables (i.e. infection rate) again.
I googled a bit and found that this is something called a single case experiment, and it was said that single case experiment doesn't have very solid statistics because you don't have the control, you can't conclude on the causality in a solid manner.
I have googled a bit again and found that I can compare the incidence rates (or call them infection rates) by doing something like an "incidence rate difference" (IRD) or "incidence rate ratio" (IRR). (I found it here.)
What is the difference between IRD and t-test? And is there any statistical test complementary for IRR?
But most importantly, is it appropriate for me to use this test (does it have a name?) for a single-case experiment? The patients in the ward keep changing, and this is what worries me.
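Purely to illustrate what an IRR comparison looks like mechanically (it does not address the single-case-design concern), R's poisson.test compares two rates and reports a confidence interval for the rate ratio; the counts and person-time below are hypothetical:
# 12 infections in 1000 patient-days before, 5 in 1100 patient-days after (hypothetical)
poisson.test(c(12, 5), T = c(1000, 1100))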
|
Sometimes I want to do an exact test by examining all possible combinations of the data to build an empirical distribution against which I can test my observed differences between means. To find the possible combinations I'd typically use the combn function. The choose function can show me how many possible combinations there are. It is very easy for the number of combinations to get so large that it is not possible to store the result of the combn function, e.g. combn(28,14) requires a 2.1 Gb vector. So I tried writing an object that stepped through the same logic as the combn function in order to provide the values off an imaginary "stack" one at a time. However, this method (as I instantiated it) is easily 50 times slower than combn at reasonable combination sizes, leading me to think it will also be painfully slow for larger combination sizes.
Is there a better algorithm for doing this sort of thing than the algorithm used in combn? Specifically, is there a way to generate and pull the Nth possible combination without calculating through all previous combinations?
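Combinations can be "unranked" directly, so the Nth combination (in the same lexicographic order combn uses) can be generated without enumerating its predecessors; a sketch in R:
nth_combination <- function(n, r, index) {
  # Returns the index-th (1-based) r-subset of 1:n in lexicographic order,
  # without generating the earlier ones.
  stopifnot(index >= 1, index <= choose(n, r))
  out <- integer(r)
  x <- 1                       # smallest candidate element still available
  k <- index - 1               # 0-based rank remaining
  for (i in seq_len(r)) {
    repeat {
      block <- choose(n - x, r - i)   # number of subsets whose next element is x
      if (k < block) break
      k <- k - block
      x <- x + 1
    }
    out[i] <- x
    x <- x + 1
  }
  out
}
nth_combination(28, 14, 1e6)                            # the millionth 14-subset of 1:28
all(nth_combination(5, 3, 7) == combn(5, 3)[, 7])       # ordering check, should be TRUE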
|
I am having difficulty selecting the right way to visualize data. Let's say we have bookstores that sell books, and every book has at least one category.
For a bookstore, if we count all the categories of books, we acquire a histogram that shows the number of books that falls into a specific category for that bookstore.
I want to visualize bookstore behavior: I want to see if they favor a category over other categories. I don't want to see if they are favoring sci-fi altogether; I want to see if they are treating every category equally or not.
I have ~1M bookstores.
I have thought of 4 methods:
Sample the data and show only 500 bookstores' histograms, in 5 separate pages using a 10x10 grid. Example of a 4x4 grid:
Same as #1, but this time sort the x-axis values by descending count, so if there is a favoring it will be seen easily.
Imagine putting the histograms in #2 together like a deck and showing them in 3D. Something like this:
Instead of using a third axis, use color to represent the counts, i.e. a heatmap (2D histogram):
If generally bookstores prefer some categories to others it will be displayed as a nice gradient from left to right.
Do you have any other visualization ideas/tools to represent multiple histograms?
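A rough sketch of option 4 in R/ggplot2, for a sample of bookstores (the data frame df and its columns store, category and n are hypothetical; shares are computed within each store so stores of different sizes stay comparable):
library(ggplot2)
# df: one row per (store, category) with the count n of books sold
df$share <- ave(df$n, df$store, FUN = function(z) z / sum(z))
ggplot(df, aes(x = category, y = store, fill = share)) +
  geom_tile()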
|
Decision trees seem to be a very understandable machine learning method.
Once created it can be easily inspected by a human which is a great advantage in some applications.
What are the practical weak sides of Decision Trees?
|
My girlfriend (B.B.A.) is really interested in actuarial science and is looking at teaching herself. She's good with basic math (Calculus 1 and 2) and stats.
What are some of the essential sources she needs to read in order to learn and excel in the field?
|
I have a detector which will detect an event with some probability p. If the detector says that an event occurred, then that is always the case, so there are no false positives. After I run it for some time, I get k events detected. I would like to calculate what the total number of events that occurred was, detected or otherwise, with some confidence, say 95%.
So for example, let's say I get 13 events detected. I would like to be able to calculate that there were between 13 and 19 events with 95% confidence based on p.
Here's what I've tried so far:
The probability of detecting $k$ events if there were $n$ in total is
$$\binom{n}{k}\, p^k\, (1 - p)^{n - k}.$$
Summing this over $n$ from $k$ to infinity gives $1/p$, which means that the probability of there having been $n$ events in total is
$$f(n) = \binom{n}{k}\, p^{k+1}\, (1 - p)^{n - k}.$$
So if I want to be 95% sure I should find the first partial sum f(k) + f(k+1) + f(k+2) ... + f(k+m) which is at least 0.95 and the answer is [k, k+m]. Is this the correct approach? Also is there a closed formula for the answer?
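The partial-sum search described above is easy to do numerically; a small R sketch (the values of p and k are hypothetical):
p <- 0.7; k <- 13
f <- function(n) choose(n, k) * p^(k + 1) * (1 - p)^(n - k)
n <- k
cum <- f(n)
while (cum < 0.95) {             # accumulate f(k), f(k+1), ... until 95% is reached
  n <- n + 1
  cum <- cum + f(n)
}
c(lower = k, upper = n, coverage = cum)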
|
In the traditional Birthday Paradox the question is "what are the chances that two or more people in a group of $n$ people share a birthday". I'm stuck on a problem which is an extension of this.
Instead of knowing the probability that two people share a birthday, I need to extend the question to the probability that $x$ or more people share a birthday. With $x=2$ you can do this by calculating the probability that no two people share a birthday and subtracting that from $1$, but I don't think I can extend this logic to larger values of $x$.
To further complicate this I also need a solution which will work for very large numbers for $n$ (millions) and $x$ (thousands).
|
I've sampled a real world process, network ping times. The "round-trip-time" is measured in milliseconds. Results are plotted in a histogram:
Latency has a minimum value, but a long upper tail.
I want to know what statistical distribution this is, and how to estimate its parameters.
Even though the distribution is not a normal distribution, I can still show what I am trying to achieve.
The normal distribution has the density function
$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
with the two parameters
$\mu$ (mean)
$\sigma^2$ (variance)
Parameter estimation
The usual formulas for estimating the two parameters are
$$\hat\mu = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad \hat\sigma^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \hat\mu\right)^2.$$
Applying these formulas against the data I have in Excel, I get:
$\mu$ = 10.9558 (mean)
$\sigma^2$ = 67.4578 (variance)
With these parameters I can plot the "normal" distribution over top my sampled data:
Obviously it's not a normal distribution. A normal distribution has an infinite top and bottom tail, and is symmetrical. This distribution is not symmetrical.
What principles or flowchart would I apply to determine what kind of distribution this is?
Given that the distribution has no negative tail, and long positive tail: what distributions match that?
Is there a reference that matches distributions to the observations you're taking?
And cutting to the chase, what is the formula for this distribution, and what are the formulas to estimate its parameters?
I want to get the distribution so I can get the "average" value, as well as the "spread":
I am actually plotting the histogram in software, and I want to overlay the theoretical distribution:
Note: Cross-posted from math.stackexchange.com
Update: 160,000 samples:
Months and months, and countless sampling sessions, all give the same distribution. There must be a mathematical representation.
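If the raw (unbinned) samples are available, one low-effort starting point is to fit a couple of skewed candidate families by maximum likelihood and compare their log-likelihoods; a sketch in R with MASS::fitdistr, where the shift is a crude way to handle the hard lower bound and x stands for the raw round-trip times (not reproduced in this post):
library(MASS)
# x: vector of raw round-trip times in milliseconds (hypothetical input)
shift  <- min(x) - 0.5                      # crude location shift so values sit above zero
fit_ln <- fitdistr(x - shift, "lognormal")
fit_ga <- fitdistr(x - shift, "gamma")
c(lognormal = fit_ln$loglik, gamma = fit_ga$loglik)   # same parameter count, so compare directly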
Harvey suggested putting the data on a log scale. Here's the probability density on a log scale:
It's not an answer, but an addendum to the question. Here are the distribution buckets. I think the more adventurous person might like to paste them into Excel (or whatever program you know) and discover the distribution.
The values are normalized
Time Value
53.5 1.86885613545469E-5
54.5 0.00396197500716395
55.5 0.0299702228922418
56.5 0.0506460012708222
57.5 0.0625879919763777
58.5 0.069683415770654
59.5 0.0729476844872482
60.5 0.0508017392821101
61.5 0.032667605247748
62.5 0.025080049337802
63.5 0.0224138145845533
64.5 0.019703973188144
65.5 0.0183895443728742
66.5 0.0172059354870862
67.5 0.0162839664602619
68.5 0.0151688822994406
69.5 0.0142780608748739
70.5 0.0136924859524314
71.5 0.0132751080821798
72.5 0.0121849420031646
73.5 0.0119419907055555
74.5 0.0117114984488494
75.5 0.0105528076448675
76.5 0.0104219877153857
77.5 0.00964952717939773
78.5 0.00879608287754009
79.5 0.00836624596638551
80.5 0.00813575370967943
81.5 0.00760001495084908
82.5 0.00766853967581576
83.5 0.00722624372375815
84.5 0.00692099722163388
85.5 0.00679017729215205
86.5 0.00672788208763689
87.5 0.00667804592402477
88.5 0.00670919352628235
89.5 0.00683378393531266
90.5 0.00612361860383988
91.5 0.00630427469693383
92.5 0.00621706141061261
93.5 0.00596788059255199
94.5 0.00573115881539439
95.5 0.0052950923837883
96.5 0.00490886211579433
97.5 0.00505214108617919
98.5 0.0045413204091549
99.5 0.00467214033863673
100.5 0.00439181191831853
101.5 0.00439804143877004
102.5 0.00432951671380337
103.5 0.00419869678432154
104.5 0.00410525397754881
105.5 0.00440427095922156
106.5 0.00439804143877004
107.5 0.00408656541619426
108.5 0.0040616473343882
109.5 0.00389345028219728
110.5 0.00392459788445485
111.5 0.0038249255572306
112.5 0.00405541781393668
113.5 0.00393705692535789
114.5 0.00391213884355182
115.5 0.00401804069122759
116.5 0.0039432864458094
117.5 0.00365672850503968
118.5 0.00381869603677909
119.5 0.00365672850503968
120.5 0.00340131816652754
121.5 0.00328918679840026
122.5 0.00317082590982146
123.5 0.00344492480968815
124.5 0.00315213734846692
125.5 0.00324558015523965
126.5 0.00277213660092446
127.5 0.00298394029627599
128.5 0.00315213734846692
129.5 0.0030649240621457
130.5 0.00299639933717902
131.5 0.00308984214395176
132.5 0.00300885837808206
133.5 0.00301508789853357
134.5 0.00287803844860023
135.5 0.00277836612137598
136.5 0.00287803844860023
137.5 0.00265377571234566
138.5 0.00267246427370021
139.5 0.0027472185191184
140.5 0.0029465631735669
141.5 0.00247311961925171
142.5 0.00259148050783051
143.5 0.00258525098737899
144.5 0.00259148050783051
145.5 0.0023485292102214
146.5 0.00253541482376687
147.5 0.00226131592390018
148.5 0.00239213585338201
149.5 0.00250426722150929
150.5 0.0026288576305396
151.5 0.00248557866015474
152.5 0.00267869379415173
153.5 0.00247311961925171
154.5 0.00232984064886685
155.5 0.00243574249654262
156.5 0.00242328345563958
157.5 0.00231738160796382
158.5 0.00256656242602444
159.5 0.00221770928073957
160.5 0.00241705393518807
161.5 0.00228000448525473
162.5 0.00236098825112443
163.5 0.00216787311712744
164.5 0.00197475798313046
165.5 0.00203705318764562
166.5 0.00209311887170926
167.5 0.00193115133996985
168.5 0.00177541332868196
169.5 0.00165705244010316
170.5 0.00160098675603952
171.5 0.00154492107197588
172.5 0.0011150841608213
173.5 0.00115869080398191
174.5 0.00107770703811221
175.5 0.000946887108630378
176.5 0.000853444301857643
177.5 0.000822296699600065
178.5 0.00072885389282733
179.5 0.000753771974633393
180.5 0.000766231015536424
181.5 0.000566886361087923
Bonus Reading
What Is the Expected Distribution of Website Response Times?
What Do You Mean? - Revisiting Statistics for Web Response Time Measurements
Modeling Network Latency
|
What would be the best way to display changes in two scalar variables (x,y) over time (z), in one visualization?
One idea that I had was to plot x and y both on the vertical axis, with z as the horizontal.
Note: I'll be using R and likely ggplot2
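A minimal ggplot2 sketch of the idea described (both variables on the vertical axis, time on the horizontal); the data frame df with columns z, x and y is hypothetical:
library(ggplot2)
df_long <- data.frame(
  z        = rep(df$z, 2),
  value    = c(df$x, df$y),
  variable = rep(c("x", "y"), each = nrow(df))
)
ggplot(df_long, aes(z, value, colour = variable)) +
  geom_line()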
|
The title is quite self-explanatory - I'd like to know if there's any other parametric technique apart from repeated-measures ANOVA, that can be utilized in order to compare several (more than 2) repeated measures?
|
Well, we've got favourite statistics quotes. What about statistics jokes?
|
I am working with a large data set (approximately 50K observations) and trying to run a maximum likelihood estimation with 5 unknowns in Stata.
I encountered an error message of "Numerical Overflow". How can I overcome this?
I am trying to run a stochastic frontier analysis using the built-in Stata command "frontier". The dependent variable is the log of output, and the independent variables are the logs of intermediate inputs, capital, labour, and utilities.
|
In an average (median?) conversation about statistics you will often find yourself discussing this or that method of analyzing this or that type of data. In my experience, careful study design, with special thought given to the statistical analysis, is often neglected (working in biology/ecology, this seems to be a prevailing occurrence). Statisticians often find themselves in a gridlock with insufficient (or outright wrong) collected data. To paraphrase Ronald Fisher, they are forced to do a post-mortem on the data, which often leads to weaker conclusions, if any.
I would like to know which references you use to construct a successful study design, preferably for a wide range of methods (e.g. t-test, GLM, GAM, ordination techniques...) that helps you avoid pitfalls mentioned above.
|
I want to predict the results of a simple card game, to judge on average, how long a game will last.
My 'simple' game is;
- Cards are dealt from a randomised deck to n players (typically 2-4)
- Each player gets five cards
- The top card from the deck is turned over
- Each player takes it in turns to either place a card of the same face value (i.e. 1-10, J, Q, K, A), the same suit (i.e. Hearts, Diamonds, Spades, Clubs) or any suit of magic card (a jack)
- If the player can place a card they do, otherwise they must take a card from the deck
- Play continues in turn until all but one player has no cards left
I'm guessing that I could write code to play a mythical game and report the result, then run that code thousands of times.
Has anyone done this? Can they suggest code that does a similar job (my favoured language is R, but anything would do)? Is there a better way?
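I have not seen code for this exact game, but a stripped-down R simulation along these lines (simplified rules: the first playable card is always played, the discard pile is never reshuffled, and the turn cap just guards against stuck games) may be a useful starting point:
play_game <- function(n_players = 2, max_turns = 500) {
  deck <- expand.grid(face = 1:13, suit = 1:4)     # face 11 = jack, the wild card
  deck <- deck[sample(nrow(deck)), ]               # shuffle
  hands <- split(deck[1:(5 * n_players), ], rep(seq_len(n_players), each = 5))
  deck <- deck[-(1:(5 * n_players)), ]
  top <- deck[1, ]; deck <- deck[-1, ]             # turn over the top card
  turns <- 0
  repeat {
    for (p in seq_len(n_players)) {
      turns <- turns + 1
      if (turns > max_turns) return(NA)            # give up on pathological games
      h <- hands[[p]]
      playable <- which(h$face == top$face | h$suit == top$suit | h$face == 11)
      if (length(playable) > 0) {                  # play the first legal card
        top <- h[playable[1], ]
        hands[[p]] <- h[-playable[1], ]
      } else if (nrow(deck) > 0) {                 # otherwise draw, if possible
        hands[[p]] <- rbind(h, deck[1, ])
        deck <- deck[-1, ]
      }
      if (nrow(hands[[p]]) == 0) return(turns)     # this player has gone out
    }
  }
}
mean(replicate(1000, play_game(2)), na.rm = TRUE)  # average length of a 2-player game, in turns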
|
I am trying to compare it to Euclidean distance and Pearson correlation
|
In circular statistics, the expectation value of a random variable $Z$ with values on the circle $S$ is defined as
$$
m_1(Z)=\int_S z P^Z(\theta)\textrm{d}\theta
$$
(see wikipedia).
This is a very natural definition, as is the definition of the variance
$$
\mathrm{Var}(Z)=1-|m_1(Z)|.
$$
So we didn't need a second moment in order to define the variance!
Nonetheless, we define the higher moments
$$
m_n(Z)=\int_S z^n P^Z(\theta)\textrm{d}\theta.
$$
I admit that this looks rather natural as well at first sight, and very similar to the definition in linear statistics. But still I feel a little bit uncomfortable, and have the following
Questions:
1.
What is measured by the higher moments defined above (intuitively)? Which properties of the distribution can be characterized by their moments?
2.
In the computation of the higher moments we use multiplication of complex numbers, although we think of the values of our random variables merely as vectors in the plane or as angles. I know that complex multiplication is essentially addition of angles in this case, but still:
Why is complex multiplication a meaningful operation for circular data?
|
I have a given distance with a standard deviation. I have now simulated a few hundred distances and would like to draw from these a sample of 10-20 that resembles the original distribution. Is there any standardized way of doing so?
|
I am looking for a robust version of Hotelling's $T^2$ test for the mean of a vector. As data, I have an $m \times n$ matrix, $X$, each row an i.i.d. sample of an $n$-dimensional RV, $x$. The null hypothesis I wish to test is $E[x] = \mu$, where $\mu$ is a fixed $n$-dimensional vector. The classical Hotelling test appears to be susceptible to non-normality in the distribution of $x$ (just as its 1-d analogue, the Student t-test, is susceptible to skew and kurtosis).
What is the state-of-the-art robust version of this test? I am looking for something relatively fast and conceptually simple. There was a paper at COMPSTAT 2008 on the topic, but I do not have access to the proceedings. Any help?
|