What is the difference between the Shapiro–Wilk test of normality and the Kolmogorov–Smirnov test of normality? When will results from these two methods differ?
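For concreteness, a minimal R sketch (simulated, deliberately non-normal data; shapiro.test and ks.test are base-R functions, nothing from the question itself) showing how the two tests are typically run side by side. Note that feeding estimated parameters to ks.test, as below, is a common shortcut that is not strictly valid (the Lilliefors variant corrects for it), which is itself one source of differing results.
set.seed(42)
x <- rexp(200)                        # clearly non-normal sample
shapiro.test(x)                       # Shapiro-Wilk test of normality
ks.test(x, "pnorm", mean(x), sd(x))   # Kolmogorov-Smirnov against a fitted normal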
|
If you could go back in time and tell yourself to read a specific book at the beginning of your career as a statistician, which book would it be?
|
Say I've got a program that monitors a news feed and as I'm monitoring it I'd like to discover when a bunch of stories come out with a particular keyword in the title. Ideally I want to know when there are an unusual number of stories clustered around one another.
I'm entirely new to statistical analysis and I'm wondering how you would approach this problem. How do you select what variables to consider? What characteristics of the problem affect your choice of an algorithm? Then, what algorithm do you choose and why?
Thanks, and if the problem needs clarification please let me know.
|
What topics in statistics are most useful/relevant to data mining?
|
From Wikipedia:
Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?
The answer is, of course, yes - but it's incredibly unintuitive. What misunderstanding do most people have about probability that leads to us scratching our heads -- or, better put, what general rule can we take away from this puzzle to better train our intuition in the future?
|
What are pivot tables, and how can they be helpful in analyzing data?
|
Possible Duplicate:
Testing random variate generation algorithms
What's a good way to test a series of numbers to see if they're random (or at least pseudo-random)? Is there a good statistical measure of randomness that can be used to determine how random a set is?
More importantly, how can one prove a method of generating numbers is pseudo-random?
|
I have a data set where a series of measurements are being taken each week. In general the data set shows a +/- 1mm change each week with a mean measurement staying at about 0mm. In plotting the data this week it appears that some noticeable movement has occurred at two points and looking back at the data set, it is also possible that movement occurred last week as well.
What is the best way of looking at this data set to see how likely it is that the movements that have been seen are real movements, rather than just some effect caused by the natural tolerance in the readings?
Edit
Some more information on the data set. Measurements have been taken at 39 locations, which should behave in a similar way, although only some of the points may show signs of movement. At each point the readings have now been taken 10 times on a bi-weekly basis and up until the most recent set of readings the measurements were between -1mm and 1mm. The measurements can only be taken with mm accuracy so we only receive results to the nearest mm. The results for one of the points showing a movement are 0mm, 1mm, 0mm, -1mm, -1mm, 0mm, -1mm, -1mm, 1mm, 3mm. We are not looking for statistically significant information, just an indicator of what might have occurred. The reason is that if a measurement reaches 5mm in a subsequent week we have a problem and we'd like to be forewarned that this might occur.
|
I usually make my own idiosyncratic choices when preparing plots. However, I wonder if there are any best practices for generating plots.
Note: Rob's comment to an answer to this question is very relevant here.
|
At the moment I use standard deviation of the mean to estimate uncertainty:
$$\sigma_\textrm{mean}=\frac{\sigma}{\sqrt{N}}$$
where $N$ is in the hundreds and the mean is a time series (monthly) mean. I then present it like this:
$$O_n\geq \frac{-\log_{10} 0.05}{b}$$
for each element (month) in the (annual) time series.
Is this valid? Is this appropriate for time series?
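For concreteness, here is a minimal R sketch of the computation described above, using made-up monthly data (nothing from the real series):
set.seed(1)
N <- 300
monthly <- replicate(12, rnorm(N, mean = 10, sd = 2))   # columns = months
se_mean <- apply(monthly, 2, sd) / sqrt(N)              # sigma / sqrt(N), per month
round(cbind(mean = colMeans(monthly), se = se_mean), 3)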
|
There are many ways to measure how similar two probability distributions are. Among methods which are popular (in different circles) are:
the Kolmogorov distance: the sup-distance between the distribution functions;
the Kantorovich-Rubinstein distance: the maximum difference between the expectations w.r.t. the two distributions of functions with Lipschitz constant $1$, which also turns out to be the $L^1$ distance between the distribution functions;
the bounded-Lipschitz distance: like the K-R distance but the functions are also required to have absolute value at most $1$.
These have different advantages and disadvantages. Only convergence in the sense of 3. actually corresponds precisely to convergence in distribution; convergence in the sense of 1. or 2. is slightly stronger in general. (In particular, if $X_n=\frac{1}{n}$ with probability $1$, then $X_n$ converges to $0$ in distribution, but not in the Kolmogorov distance. However, if the limit distribution is continuous then this pathology doesn't occur.)
From the perspective of elementary probability or measure theory, 1. is very natural because it compares the probabilities of being in some set. A more sophisticated probabilistic perspective, on the other hand, tends to focus more on expectations than probabilities. Also, from the perspective of functional analysis, distances like 2. or 3. based on duality with some function space are very appealing, because there is a large set of mathematical tools for working with such things.
However, my impression (correct me if I'm wrong!) is that in statistics, the Kolmogorov distance is the usually preferred way of measuring similarity of distributions. I can guess one reason: if one of the distributions is discrete with finite support -- in particular, if it is the distribution of some real-world data -- then the Kolmogorov distance to a model distribution is easy to compute. (The K-R distance would be slightly harder to compute, and the B-L distance would probably be impossible in practical terms.)
So my question (finally) is, are there other reasons, either practical or theoretical, to favor the Kolmogorov distance (or some other distance) for statistical purposes?
|
What is a good introduction to statistics for a mathematician who is already well-versed in probability? I have two distinct motivations for asking, which may well lead to different suggestions:
I'd like to better understand the statistics motivation behind many problems considered by probabilists.
I'd like to know how to better interpret the results of Monte Carlo simulations which I sometimes do to form mathematical conjectures.
I'm open to the possibility that the best way to go is not to look for something like "Statistics for Probabilists" and just go to a more introductory source.
|
Coming from the field of computer vision, I've often used the RANSAC (Random Sample Consensus) method for fitting models to data with lots of outliers.
However, I've never seen it used by statisticians, and I've always been under the impression that it wasn't considered a "statistically-sound" method. Why is that so? It is random in nature, which makes it harder to analyze, but so are bootstrapping methods.
Or is it simply a case of academic silos not talking to one another?
|
What book would you recommend for scientists who are not statisticians?
Clear delivery is most appreciated, as well as the explanation of the appropriate techniques and methods for typical tasks: time series analysis, presentation and aggregation of large data sets.
|
Data analysis cartoons can be useful for many reasons: they help communicate; they show that quantitative people have a sense of humor too; they can instigate good teaching moments; and they can help us remember important principles and lessons.
This is one of my favorites:
As a service to those who value this kind of resource, please share your favorite data analysis cartoon. They probably don't need any explanation (if they do, they're probably not good cartoons!) As always, one entry per answer. (This is in the vein of the Stack Overflow question What’s your favorite “programmer” cartoon?.)
P.S. Do not hotlink the cartoon without the site's permission please.
|
It has been suggested by Angrist and Pischke that Robust (i.e. robust to heteroskedasticity or unequal variances) Standard Errors are reported as a matter of course rather than testing for it. Two questions:
What is the impact on the standard errors of doing so when there is homoskedasticity?
Does anybody actually do this in their work?
|
We're plotting time-series metrics in the context of network/server operations. The data has a 5-minute sample rate, and consists of things like CPU utilization, error rate, etc.
We're adding a horizontal "threshold" line to the graphs, to visually indicate a value threshold above which people should worry/take notice. For example, in the CPU utilization example, perhaps the "worry" threshold is 75%.
My team has some internal debate over what color this line should be:
Something like a bright red that clearly stands out from the background grid and data lines, and indicates this is a warning condition
Something more subtle and definitely NOT red, since the "ink" for the line doesn't represent any actual data, and thus attention shouldn't be drawn to it unnecessarily.
Would appreciate guidance / best practices...
|
I am not an expert on random forests, but I clearly understand that the key issue with random forests is the (random) tree generation. Can you explain to me how the trees are generated? (i.e., what distribution is used for tree generation?)
Thanks in advance!
|
Another question about time series from me.
I have a dataset which gives daily records of violent incidents in a psychiatric hospital over three years. With the help from my previous question I have been fiddling with it and am a bit happier about it now.
The thing I have now is that the daily series is very noisy. It fluctuates wildly, up and down, from 0 at times up to 20. Using loess plots and the forecast package (which I can highly recommend for novices like me) I just get a totally flat line, with massive confidence intervals from the forecast.
However, aggregating weekly or monthly the data make a lot more sense. They sweep down from the start of the series, and then increase again in the middle. Loess plotting and the forecast package both produce something that looks a lot more meaningful.
It does feel a bit like cheating though. Am I just preferring the aggregated versions because they look nice with no real validity to it?
Or would it be better to compute a moving average and use that as the basis? I'm afraid I don't understand the theory behind all this well enough to be confident about what is acceptable
|
A question previously sought recommendations for textbooks on mathematical statistics.
Does anyone know of any good online video lectures on mathematical statistics?
The closest that I've found are:
Machine Learning
Econometrics
UPDATE: A number of the suggestions mentioned below are good statistics-101 type videos.
However, I'm specifically wondering whether there are any videos that provide a rigorous mathematical presentation of statistics.
i.e., videos that might accompany a course that uses a textbook mentioned in this discussion on MathOverflow.
|
I have calculated AIC and AICc to compare two general linear mixed models; The AICs are positive with model 1 having a lower AIC than model 2. However, the values for AICc are both negative (model 1 is still < model 2). Is it valid to use and compare negative AICc values?
|
What variable/feature selection procedures do you prefer for binary classification when there are many more variables/features than observations in the learning set? The aim here is to discuss which feature selection procedure best reduces the classification error.
We can fix notations for consistency: for $i \in \{0, 1\}$, let $\{x_1^i,\dots, x_{n_i}^i\}$ be the learning set of observations from group $i$. So $n_0 + n_1 = n$ is the size of the learning set. We set $p$ to be the number of features (i.e. the dimension of the feature space). Let $x[i]$ denote the $i$-th coordinate of $x \in \mathbb{R}^p$.
Please give full references if you cannot give the details.
EDIT (updated continuously): Procedures proposed in the answers below
Greedy forward selection
Backward elimination
Metropolis scanning / MCMC
Penalized logistic regression
As this is community wiki, there can be more discussion and updates.
I have one remark: in a certain sense, you all give a procedure that permits ordering of variables but not variable selection (you are quite evasive on how to select the number of features; I guess you all use cross validation?). Can you improve the answers in this direction? (As this is community wiki, you don't need to be the answer writer to add information about how to select the number of variables.) I have opened a question in this direction here: Cross validation in very high dimension (to select the number of used variables in very high dimensional classification).
|
I am proposing to try and find a trend in some very noisy long-term data. The data is basically weekly measurements of something which moved about 5mm over a period of about 8 months. The data is to 1mm accuracy and is very noisy, regularly changing +/- 1 or 2mm in a week. We only have the data to the nearest mm.
We plan to use some basic signal processing with a fast fourier transform to separate out the noise from the raw data. The basic assumption is if we mirror our data set and add it to the end of our existing data set we can create a full wavelength of the data and therefore our data will show up in a fast fourier transform and we can hopefully then separate it out.
Given that this sounds a little dubious to me, is this a method worth pursuing, or is the method of mirroring and appending our data set somehow fundamentally flawed? We are looking at other approaches such as using a low pass filter as well.
|
Sometimes, I just want to do a copy & paste from the output window in SAS. I can highlight text with a mouse-drag, but only SOMETIMES does that get copied to the clipboard. It doesn't matter if I use "CTRL-C" or right click -> copy, or edit -> copy
Any other SAS users experience this, and do you know a workaround/option/technique that can fix it?
Sometimes, I can fix it by clicking in another window, and coming back to the output window, but sometimes I just have to save the output window as a .lst and get the text from another editor.
|
I've heard that when many regression model specifications (say, in OLS) are considered as possibilities for a dataset, this causes multiple comparison problems and the p-values and confidence intervals are no longer reliable. One extreme example of this is stepwise regression.
When can I use the data itself to help specify the model, and when is this not a valid approach? Do you always need to have a subject-matter-based theory to form the model?
|
What is your preferred method of checking for convergence when using Markov chain Monte Carlo for Bayesian inference, and why?
|
In the context of machine learning, what is the difference between
unsupervised learning
supervised learning and
semi-supervised learning?
And what are some of the main algorithmic approaches to look at?
|
Debugging MCMC programs is notoriously difficult. The difficulty arises because of several issues some of which are:
(a) Cyclic nature of the algorithm
We iteratively draw parameters conditional on all other parameters. Thus, if an implementation is not working properly it is difficult to isolate the bug, as the issue can be anywhere in the iterative sampler.
(b) The correct answer is not necessarily known.
We have no way to tell if we have achieved convergence. To some extent this can be mitigated by testing the code on simulated data.
In light of the above issues, I was wondering if there is a standard technique that can be used to debug MCMC programs.
Edit
I wanted to share the approach I use to debug my own programs. I, of course, do all of the things that PeterR mentioned. Apart from those, I perform the following tests using simulated data:
Start all parameters from true values and see if the sampler diverges too far from the true values.
I have flags for each parameter in my iterative sampler that determine whether I am drawing that parameter. For example, if a flag 'gen_param1' is set to true then I draw 'param1' from its full conditional in the iterative sampler. If this is set to false then 'param1' is set to its true value.
Once I finish writing up the sampler, I test the program using the following recipe:
Set the generate flag for one parameter to true and everything else to false and assess convergence with respect to true value.
Set the generate flag for another parameter in conjunction with the first one and again assess convergence.
The above steps have been incredibly helpful to me.
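To make the flag idea concrete, here is a toy R sketch (a two-parameter Gibbs sampler for normal data with a flat prior on the mean and a 1/sigma^2 prior on the variance; the model and all names are made up for illustration). Each parameter is either drawn from its full conditional or pinned at the value used to simulate the data.
set.seed(1)
true_mu <- 2; true_sig2 <- 4
y <- rnorm(500, true_mu, sqrt(true_sig2))   # simulated data with known truth
n <- length(y)

gen_mu   <- TRUE    # draw mu from its full conditional
gen_sig2 <- FALSE   # pin sigma^2 at its true value while testing the mu update

mu <- 0; sig2 <- 1
draws <- matrix(NA, 2000, 2, dimnames = list(NULL, c("mu", "sig2")))
for (it in 1:2000) {
  # full conditional of mu under a flat prior: N(ybar, sig2 / n)
  mu   <- if (gen_mu)   rnorm(1, mean(y), sqrt(sig2 / n)) else true_mu
  # full conditional of sig2 under p(sig2) ~ 1/sig2: Inv-Gamma(n/2, sum((y - mu)^2)/2)
  sig2 <- if (gen_sig2) 1 / rgamma(1, n / 2, rate = sum((y - mu)^2) / 2) else true_sig2
  draws[it, ] <- c(mu, sig2)
}
colMeans(draws[-(1:500), ])   # should sit near the true values for the drawn parameters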
|
As you know, there are two popular types of cross-validation, K-fold and random subsampling (as described in Wikipedia). Nevertheless, I know that some researchers publish papers where something described as K-fold CV is in fact random subsampling, so in practice you never know what is really in the article you're reading.
Usually of course the difference is unnoticeable, and so goes my question -- can you think of an example when the result of one type is significantly different from another?
|
I have two different analytical methods that can measure the concentration of a particular molecule in a matrix (for instance, measure the amount of salt in water).
The two methods are different, and each has its own error. What ways exist to show the two methods are equivalent (or not)?
I'm thinking that plotting the results from a number of samples measured by both methods on a scatter graph is a good first step, but are there any good statistical methods?
|
We all know the mantra "correlation does not imply causation" which is drummed into all first year statistics students. There are some nice examples here to illustrate the idea.
But sometimes correlation does imply causation. The following example is taken from this Wikipedia page:
For example, one could run an experiment on identical twins who were known to consistently get the same grades on their tests. One twin is sent to study for six hours while the other is sent to the amusement park. If their test scores suddenly diverged by a large degree, this would be strong evidence that studying (or going to the amusement park) had a causal effect on test scores. In this case, correlation between studying and test scores would almost certainly imply causation.
Are there other situations where correlation implies causation?
|
In answering this question on discrete and continuous data I glibly asserted that it rarely makes sense to treat categorical data as continuous.
On the face of it that seems self-evident, but intuition is often a poor guide for statistics, or at least mine is. So now I'm wondering: is it true? Or are there established analyses for which a transform from categorical data to some continuum is actually useful? Would it make a difference if the data were ordinal?
|
In an answer to this question about treating categorical data as continuous, optimal scaling was mentioned. How does this method work and how is it applied?
|
ANOVA is equivalent to linear regression with the use of suitable dummy variables. The conclusions remain the same irrespective of whether you use ANOVA or linear regression.
In light of their equivalence, is there any reason why ANOVA is used instead of linear regression?
Note: I am particularly interested in hearing about technical reasons for the use of ANOVA instead of linear regression.
Edit
Here is one example using one-way ANOVA. Suppose you want to know if the average height of males and females is the same. To test your hypothesis you would collect data from a random sample of males and females (say 30 each) and perform the ANOVA analysis (i.e., sum of squares for sex and error) to decide whether an effect exists.
You could also use linear regression to test for this as follows:
Define: $\text{Sex} = 1$ if respondent is a male and $0$ otherwise.
$$
\text{Height} = \text{Intercept} + \beta * \text{Sex} + \text{error}
$$
where: $\text{error}\sim\mathcal N(0,\sigma^2)$
Then a test of whether $\beta = 0$ is an equivalent test for your hypothesis.
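The equivalence is easy to check in R with simulated heights (a quick sketch, not real data): the one-way ANOVA F statistic equals the square of the regression t statistic for the Sex dummy, and the two p-values coincide.
set.seed(1)
sex    <- rep(c(0, 1), each = 30)              # 0 = female, 1 = male
height <- 165 + 12 * sex + rnorm(60, sd = 7)   # simulated heights in cm
anova(aov(height ~ factor(sex)))               # one-way ANOVA table
summary(lm(height ~ sex))                      # regression with a dummy variable
# The ANOVA F equals the squared regression t for the sex coefficient.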
|
I am trying to get a global perspective on some of the essential ideas in machine learning, and I was wondering if there is a comprehensive treatment of the different notions of loss (squared, log, hinge, proxy, etc.). I was thinking something along the lines of a more comprehensive, formal presentation of John Langford’s excellent post on Loss Function Semantics.
|
This is a fairly general question:
I have typically found that using multiple different models outperforms one model when trying to predict a time series out of sample. Are there any good papers that demonstrate that the combination of models will outperform a single model? Are there any best-practices around combining multiple models?
Some references:
Hui Zou, Yuhong Yang, "Combining time series models for forecasting", International Journal of Forecasting 20 (2004) 69–84
|
Instrumental variables are becoming increasingly common in applied economics and statistics. For the uninitiated, can we have some non-technical answers to the following questions:
What is an instrumental variable?
When would one want to employ an instrumental variable?
How does one find or choose an instrumental variable?
|
Difference in differences has long been popular as a non-experimental tool, especially in economics. Can somebody please provide a clear and non-technical answer to the following questions about difference-in-differences.
What is a difference-in-difference estimator?
Why is a difference-in-difference estimator any use?
Can we actually trust difference-in-difference estimates?
|
I'm curious if there are graphical techniques particular, or more applicable, to structural equation modeling. I guess this could fall into categories for exploratory tools for covariance analysis or graphical diagnostics for SEM model evaluation. (I'm not really thinking of path/graph diagrams here.)
|
In "Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations" by Lee et. al.(PDF) Convolutional DBN's are proposed. Also the method is evaluated for image classification. This sounds logical, as there are natural local image features, like small corners and edges etc.
In "Unsupervised feature learning for audio classification using convolutional deep belief networks" by Lee et. al. this method is applied for audio in different types of classifications. Speaker identification, gender indentification, phone classification and also some music genre / artist classification.
How can the convolutional part of this network be interpreted for audio, like it can be explained for images as edges?
|
What is the preferred method for conducting post-hocs for within-subjects tests? I've seen published work where Tukey's HSD is employed, but a review of Keppel and Maxwell & Delaney suggests that the likely violation of sphericity in these designs makes the error term incorrect and this approach problematic. Maxwell & Delaney provide an approach to the problem in their book, but I've never seen it done that way in any stats package. Is the approach they offer appropriate? Would a Bonferroni or Sidak correction on multiple paired sample t-tests be reasonable? An acceptable answer will provide general R code which can conduct post-hocs on simple, multiple-way, and mixed designs as produced by the ezANOVA function in the ez package, and appropriate citations that are likely to pass muster with reviewers.
|
The AIC and BIC are both methods of assessing model fit penalized for the number of estimated parameters. As I understand it, BIC penalizes models more for free parameters than does AIC. Beyond a preference based on the stringency of the criteria, are there any other reasons to prefer AIC over BIC or vice versa?
|
I am currently using Viterbi training for an image segmentation problem. I wanted to know what the advantages/disadvantages are of using the Baum-Welch algorithm instead of Viterbi training.
|
In some papers, for example in "The Geometric Density with Unknown Location Parameter" by Klotz, a Geometric Distribution is called a Geometric Density.
For me, this claim looks erroneous, however Klotz is a serious statistician and a professor in the field.
My question is, to what extent is it legitimate to call a Geometric Distribution a Geometric Density?
|
The E-M procedure appears, to the uninitiated, as more or less black magic. Estimate parameters of an HMM (for example) using supervised data. Then decode untagged data, using forward-backward to 'count' events as if the data were tagged, more or less. Why does this make the model better? I do know something about the math, but I keep wishing for some sort of mental picture of it.
|
E-M provides a way to improve the estimation of a generative model with unannotated data. Is there anything out there that works the same way for discriminative models (e.g. perceptrons)?
For example, consider averaged perceptron tagger. It would be handy to be able to throw the entire Gigaword through some process of unsupervised model improvement.
EDIT:
So, I was pleasantly surprised to note that this site has the ambition of dealing with machine learning, but I'm learning by experiment what vocabulary is generic and what is very domain-specific. Apologies.
Consider a sequence classification problem, like part-of-speech tagging or named entity extraction. You can train a generative model (e.g. an HMM). That's a probability model, and you can apply E-M. However, the number of states grows prohibitive if you want to look at many features, and so the fashion tends toward things like CRFs (batch) or Perceptron (online).
For example, this paper talks about unsupervised learning for a perceptron POS tagger, but the details are that they add the output of several pre-existing taggers to the training set of their model.
|
Do you know any good heuristics for finding the optimal value of ν in the case of ν-SVM classification? In this particular problem I have a radial basis kernel, if it helps.
|
I am puzzled by something I found using Linear Discriminant Analysis. Here is the problem - I first ran the Discriminant Analysis using 20 or so independent variables to predict 5 segments. Among the outputs, I asked for the Predicted Segments, which are the same as the original segments for around 80% of the cases. Then I ran the Discriminant Analysis again with the same independent variables, but now trying to predict the Predicted Segments. I was expecting that I would get a 100% correct classification rate, but that did not happen and I am not sure why. It seems to me that if the Discriminant Analysis cannot predict its own predicted segments with 100% accuracy then somehow it is not an optimal procedure, since a rule exists that will get 100% accuracy. Am I missing something?
Note - This situation seems to be similar to that in Linear Regression Analysis. If you fit the model $y = a + bX + \text{error}$ and use the estimated equation with the same data you will get $\hat{y}$ [$= \hat{a} + \hat{b}X$]. Now if you fit the model $\hat{y} = a + bX + \text{error}$ (using $\hat{y}$ as the response), you will find the same $\hat{a}$ and $\hat{b}$ as before, no error, and $R^2 = 100\%$ (perfect fit). I thought this would also happen with Linear Discriminant Analysis, but it does not.
Note 2 - I ran this test with Discriminant Analysis in SPSS.
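The behaviour is easy to reproduce with MASS::lda on toy data (simulated, heavily overlapping groups; this is only an illustration, not the original segmentation data):
library(MASS)
set.seed(1)
g  <- factor(rep(1:5, each = 200))                  # five overlapping groups
x1 <- rnorm(1000, mean = as.numeric(g), sd = 1.5)
x2 <- rnorm(1000, mean = rev(as.numeric(g)), sd = 1.5)
d  <- data.frame(g, x1, x2)

fit1 <- lda(g ~ x1 + x2, data = d)
d$gp <- predict(fit1)$class                 # the "predicted segments"
mean(d$gp == d$g)                           # hit rate against the original groups

fit2 <- lda(gp ~ x1 + x2, data = d)         # now try to predict the predicted segments
mean(predict(fit2)$class == d$gp)           # generally high, but not guaranteed to be 100%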
|
In a question elsewhere on this site, several answers mentioned that the AIC is equivalent to leave-one-out (LOO) cross-validation and that the BIC is equivalent to K-fold cross validation. Is there a way to empirically demonstrate this in R such that the techniques involved in LOO and K-fold are made clear and demonstrated to be equivalent to the AIC and BIC values? Well commented code would be helpful in this regard. In addition, in demonstrating the BIC please use the lme4 package. See below for a sample dataset...
library(lme4) #for the BIC function
generate.data <- function(seed)
{
set.seed(seed) #Set a seed so the results are consistent (I hope)
a <- rnorm(60) #predictor
b <- rnorm(60) #predictor
c <- rnorm(60) #predictor
y <- rnorm(60)*3.5+a+b #the outcome is really a function of predictor a and b but not predictor c
data <- data.frame(y,a,b,c)
return(data)
}
data <- generate.data(76)
good.model <- lm(y ~ a+b,data=data)
bad.model <- lm(y ~ a+b+c,data=data)
AIC(good.model)
BIC(logLik(good.model))
AIC(bad.model)
BIC(logLik(bad.model))
Per earlier comments, below I have provided a list of seeds from 1 to 10000 in which AIC and BIC disagree. This was done by a simple search through the available seeds, but if someone could provide a way to generate data which would tend to produce divergent answers from these two information criteria it may be particularly informative.
notable.seeds <- read.csv("http://student.ucr.edu/~rpier001/res.csv")$seed
As an aside, I thought about ordering these seeds by the extent to which the AIC and BIC disagree which I've tried quantifying as the sum of the absolute differences of the AIC and BIC. For example,
AICDiff <- AIC(bad.model) - AIC(good.model)
BICDiff <- BIC(logLik(bad.model)) - BIC(logLik(good.model))
disagreement <- sum(abs(c(AICDiff,BICDiff)))
where my disagreement metric only reasonably applies when the observations are notable. For example,
are.diff <- sum(sign(c(AICDiff,BICDiff)))
notable <- ifelse(are.diff == 0 & AICDiff != 0,TRUE,FALSE)
However in cases where AIC and BIC disagreed, the calculated disagreement value was always the same (and is a function of sample size). Looking back at how AIC and BIC are calculated I can see why this might be the case computationally, but I'm not sure why it would be the case conceptually. If someone could elucidate that issue as well, I'd appreciate it.
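For what it's worth, a short back-of-the-envelope calculation (using only the standard definitions, nothing specific to the code above) shows why the disagreement value is constant. With $\ell$ the maximized log-likelihood and $k$ the number of estimated parameters,
$$\mathrm{AIC} = -2\ell + 2k, \qquad \mathrm{BIC} = -2\ell + k\log n,$$
so for two models fitted to the same data and differing by $\Delta k$ parameters ($\Delta k = 1$ here),
$$\mathrm{BICDiff} - \mathrm{AICDiff} = \Delta k\,(\log n - 2),$$
with the data-dependent log-likelihood terms cancelling. Whenever the two differences have opposite signs, $|\mathrm{AICDiff}| + |\mathrm{BICDiff}| = |\mathrm{BICDiff} - \mathrm{AICDiff}| = \log 60 - 2 \approx 2.09$, which depends only on the sample size.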
|
I have tried to reproduce some research (using PCA) from SPSS in R. In my experience, the principal() function from package psych was the only function that came close (or, if my memory serves me right, was dead on) to matching the output. To match the same results as in SPSS, I had to use the parameter principal(..., rotate = "varimax"). I have seen papers talk about how they did PCA, but based on the output of SPSS and the use of rotation, it sounds more like Factor analysis.
Question: Is PCA, even after rotation (using varimax), still PCA? I was under the impression that this might be in fact Factor analysis... In case it's not, what details am I missing?
|
There have been a few questions about statistical textbooks, such as the question Free statistical textbooks. However, I am looking for textbooks that are Open Source, for example, having a Creative Commons license. The reason is that in course material in other domains, you still want to include some text about basic statistics. In this case, it would be interesting to reuse existing material, instead of rewriting that material.
Therefore, what Open Source textbooks on statistics (and perhaps machine learning) are available?
|
I see these terms being used and I keep getting them mixed up. Is there a simple explanation of the differences between them?
|
In engineering, we usually have Handbooks that pretty much dictate the state of the practice. These books are usually devoid of theory and focus on the applied methodology. Is there a forecasting Handbook out there that solely focuses on the techniques and not the background?
|
What is an estimator of standard deviation of standard deviation if normality of data can be assumed?
|
I have a data set of about 3,000 field observations.
The data collected is divided into 20 variables (real numbers), 30 boolean variables, 10 or so lookup variables, and one "answer" variable.
We have about 20,000 objects in the field, and I'm trying to produce an "answer" for the 20,000 objects based on the 3,000 observations.
What are some of the available methods that incorporate booleans and lookup tables?
Any suggestions on how I should proceed?
EDIT
The answer variable is a boolean as well.
EDIT 2
A sample of the variable data:
Age of specimen
length, area, volume
time since last inspection
height
design life
Lookup table
material type
coating type
design standard
design effectiveness
A sample of the boolean variables:
is it inspected?
is it in bad shape?
does it need repairs soon?
The answer variable, which is my f(x), is:
is it usable?
|
Suppose I have a sample of size "N" that I plan to use to forecast data. What are some of the ways to subdivide the data so that I use some of it to establish a model, and the remaining data to validate the model?
I know there is no black and white answer to this, but it would be interesting to know some "rules of thumb" or usually used ratios. I know back at university, one of our professors used to say model on 60% and validate on 40%.
|
Sites like eMarketer offer general survey results about internet usage.
Who else has a big set of survey results, or regularly releases them?
Preferably marketing research focused.
Thanks!
|
My father is a math enthusiast, but not interested in statistics much. It would be neat to try to illustrate some of the wonderful bits of statistics, and the CLT is a prime candidate. How would you convey the mathematical beauty and impact of the central limit theorem to a non-statistician?
|
Having just recently started teaching myself Machine Learning and Data Analysis I'm finding myself hitting a brick wall on the need for creating and querying large sets of data. I would like to take data I've been aggregating in my professional and personal life and analyze it but I'm uncertain of the best way to do the following:
How should I be storing this data? Excel? SQL? ??
What is a good way for a beginner to begin trying to analyze this data? I am a professional computer programmer so the complexity is not in writing programs but more or less specific to the domain of data analysis.
EDIT: Apologies for my vagueness, when you first start learning about something it's hard to know what you don't know, ya know? ;)
Having said that, my aim is to apply this to two main topics:
Software team metrics (think Agile velocity, quantifying risk, likelihood of a successfully completed iteration given x number of story points)
Machine learning (e.g., system exceptions have occurred in a given set of modules: what is the likelihood that a module will throw an exception in the field, how much will that cost, what can the data tell me about key modules to improve that will get me the best bang for my buck, predict what portion of the system the user will want to use next in order to start loading data, etc.).
|
I bought these books:
How to Measure Anything: Finding the Value of Intangibles in Business
and
Head First Data Analysis: A Learner's Guide to Big Numbers, Statistics, and Good Decisions
What other books would you recommend?
|
I am collecting textual data surrounding press releases, blog posts, reviews, etc of certain companies' products and performance.
Specifically, I am looking to see if there are correlations between certain types and/or sources of such "textual" content with market valuations of the companies' stock symbols.
Such apparent correlations can be found by the human mind fairly quickly - but that is not scalable. How can I go about automating such analysis of disparate sources?
|
What's the difference between probability and statistics, and why are they studied together?
|
What are the main ideas, that is, concepts related to Bayes' theorem?
I am not asking for any derivations of complex mathematical notation.
|
Is there something about statistics that lends itself to this sort of saying, or is it just that people will say anything to support their case, and this includes citing irrelevant or incomplete statistics?
|
Oversimplifying a bit, I have about a million records that record the entry time and exit time of people in a system spanning about ten years. Every record has an entry time, but not every record has an exit time. The mean time in the system is ~1 year.
The missing exit times happen for two reasons:
The person has not left the system at the time the data was captured.
The person's exit time was not recorded. This happens to, say, 50% of the records.
The questions of interest are:
Are people spending less time in the system, and how much less time.
Are more exit times being recorded, and how many.
We can model this by saying that the probability that an exit gets recorded varies linearly with time, and that the time in the system has a Weibull distribution whose parameters vary linearly with time. We can then make a maximum likelihood estimate of the various parameters and eyeball the results and deem them plausible. We chose the Weibull distribution because it seems to be used in measuring lifetimes and is fun to say, as opposed to fitting the data better than, say, a gamma distribution.
Where should I look to get a clue as to how to do this correctly? We are somewhat mathematically savvy, but not extremely statistically savvy.
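As a rough starting point only (a sketch with simulated toy data, not a recommendation of the final model): the 'person has not left yet' case is ordinary right-censoring, and a Weibull regression with entry time as a covariate can be fitted with survival::survreg. The 'exit simply not recorded' mechanism is a separate missing-data problem and is not handled by this sketch.
library(survival)
set.seed(1)
entry    <- runif(5000, 0, 10)                            # entry times over ten years
duration <- rweibull(5000, shape = 1.2,
                     scale = exp(0.1 - 0.05 * entry))     # true stay drifts with entry time
observed <- entry + duration <= 10                        # still in the system => censored
time     <- ifelse(observed, duration, 10 - entry)
fit <- survreg(Surv(time, observed) ~ entry, dist = "weibull")
summary(fit)   # a negative 'entry' coefficient means later entrants stay a shorter time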
|
Is anyone aware of good data anonymization software? Or perhaps a package for R that does data anonymization? Obviously not expecting uncrackable anonymization - just want to make it difficult.
|
I am developing a multi-class perceptron algorithm and was wondering if there are any datasets that could be used to test a multi-class perceptron - a dataset where the classes are linearly separable and that has at least 100 or more instances for training?
|
I'm doing shopping cart analyses. My dataset is a set of transaction vectors, with the items being the products bought.
When applying k-means on the transactions, I will always get some result. A random matrix would probably also show some clusters.
Is there a way to test whether the clustering I find is a significant one, or whether it could very well be a coincidence? If yes, how can I do it?
|
A hyperspectral image is a multidimensional image with more than 200 spectral bands, i.e. an image for which each pixel is a vector of dimension 200 (most often it is a sampled spectral curve, as encountered in satellite imagery or medical imagery).
What are the implemented packages (I am especially interested in R packages, but if other free algorithms exist, I will try them) for boundary detection and (unsupervised) segmentation of this type of image?
|
What is your favorite statistical quote?
This is community wiki, so please one quote per answer.
|
I was having a look around a few things yesterday and came across Bayesian Search Theory. Thinking about this theory led me to think about a problem I was working on a few years ago regarding geological interpretation.
We were looking at the geology at one specific site and it was essentially made up from two different types of rock. Boreholes had been drilled at different locations and showed differing amounts of the two different types of rock at different levels in the ground, along with different amounts of weathering of the rock. A number of geologists looked at the available data and all came up with different interpretations. It seems to me that Bayesian Search Theory could have been used in this case, particularly where extra data was gathered over time, to give some indication of how likely the different interpretations were.
Has anyone encountered a case where Bayesian Search Theory has been used in this way? Is there a standard framework for doing this? I would have thought this may be something that the oil industry has a lot of research on, because it would be applicable to the search for oil.
|
I have commonly heard that LME models are more sound in the analysis of accuracy data (i.e., in psychology experiments), in that they can work with binomial and other non-normal distributions that traditional approaches (e.g., ANOVA) can't.
What is the mathematical basis of LME models that allow them to incorporate these other distributions, and what are some not-overly-technical papers describing this?
|
When does data analysis cease to be statistics?
Are the following examples all applications of statistics: computer vision, face recognition, compressed sensing, lossy data compression, signal processing?
|
During every machine learning tutorial you'll find, there is the common "You will need to know x amount of stats before starting this tutorial". As such, using your knowledge of stats, you will learn about machine learning.
My question is whether this can be reversed. Can a computer science student learn statistics through studying machine learning algorithms? Has this been tested, at all? Are there examples where this is the case already?
|
What is the difference between operations research and statistical analysis?
|
I've heard that a lot of quantities that occur in nature are normally distributed. This is typically justified using the central limit theorem, which says that when you average a large number of iid random variables, you get a normal distribution. So, for instance, a trait that is determined by the additive effect of a large number of genes may be approximately normally distributed since the gene values may behave roughly like iid random variables.
Now, what confuses me is that the property of being normally distributed is clearly not invariant under monotonic transformations. So, if there are two ways of measuring something that are related by a monotonic transformation, they are unlikely to both be normally distributed (unless that monotonic transformation is linear). For instance, we can measure the sizes of raindrops by diameter, by surface area, or by volume. Assuming similar shapes for all raindrops, the surface area is proportional to the square of the diameter, and the volume is proportional to the cube of the diameter. So all of these ways of measuring cannot be normally distributed.
So my question is whether the particular way of scaling (i.e., the particular choice of monotonic transformation) under which the distribution does become normal, must carry a physical significance. For instance, should heights be normally distributed or the square of height, or the logarithm of height, or the square root of height? Is there a way of answering that question by understanding the processes that affect height?
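A quick R sketch with simulated 'diameters' (made-up numbers) makes the non-invariance concrete:
set.seed(1)
d <- rnorm(1000, mean = 2, sd = 0.2)    # simulated diameters, comfortably positive
shapiro.test(d)$p.value                 # the diameters are normal by construction
shapiro.test(d^2)$p.value               # 'areas' (proportional to d^2) are right-skewed
shapiro.test(d^3)$p.value               # 'volumes' (proportional to d^3) even more so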
|
I am looking at a scientific paper in which a single measurement is calculated using a logarithmic mean
'triplicate spots were combined to produce one signal by taking the logarithmic mean of reliable spots'
Why choose the log-mean?
Are the authors making an assumption about the underlying distribution?
That it is what...Log-Normal?
Have they just picked something they thought was reasonable... i.e. something between an arithmetic mean and a geometric mean?
Any thoughts?
|
I have data compiled by someone else where score averages have been computed over time - the averages range from 0-100. The original scores have negative values in many cases and the average would have been negative also; the raw average ranges from -30 to 90. How is this 'normalization' accomplished?
Thanks
|
I'm interested in finding as optimal a method as I can for determining how many bins I should use in a histogram. My data should range from 30 to 350 objects at most, and in particular I'm trying to apply thresholding (like Otsu's method) where "good" objects, which I should have fewer of and should be more spread out, are separated from "bad" objects, which should be more dense in value. A concrete value would have a score of 1-10 for each object. I'd have 5-10 objects with scores 6-10, and 20-25 objects with scores 1-4. I'd like to find a histogram binning pattern that generally allows something like Otsu's method to threshold off the low scoring objects. However, in the implementation of Otsu's I've seen, the bin size was 256, and often I have many fewer data points than 256, which to me suggests that 256 is not a good bin number. With so few data, what approaches should I take to calculating the number of bins to use?
|
I am attempting to calculate the standard error (SE) for the positive predictive value (PPV), negative predictive value (NPV), and diagnostic odds ratio (DOR) that I have obtained using the rates of true positives, false positives, true negatives, and false negatives in a sample. I am able to get 95% CIs but not SE.
Thank you!
|
Lately, there have been numerous questions about normalization
What are some of the situations where you never ever ever should normalize your data, and what are the alternatives?
|
There are a lot of references in the statistic literature to "functional data" (i.e. data that are curves), and in parallel, to "high dimensional data" (i.e. when data are high dimensional vectors). My question is about the difference between the two type of data.
When talking about applied statistics, methodologies that apply in case 1 can be understood as a rephrasing of methodologies from case 2 through a projection onto a finite-dimensional subspace of a space of functions; it can be polynomials, splines, wavelets, Fourier, ... and this translates the functional problem into a finite-dimensional vector problem (since in applied mathematics everything becomes finite at some point).
My question is: can we say that any statistical procedure that applies to functional data can also be applied (almost directly) to high-dimensional data, and that any procedure dedicated to high-dimensional data can be (almost directly) applied to functional data?
If the answer is no, can you illustrate ?
EDIT/UPDATE with the help of Simon Byrne's answer:
sparsity (the S-sparse assumption, $l^p$ ball and weak $l^p$ ball for $p<1$) is used as a structural assumption in high-dimensional statistical analysis.
"smoothness" is used as a structural assumption in functional data analysis.
On the other hand, the inverse Fourier transform and the inverse wavelet transform turn sparsity into smoothness, and smoothness is transformed into sparsity by the wavelet and Fourier transforms. Does this make the critical difference mentioned by Simon not so critical?
|
I am building a web application for used book trading and I am adding a feature to propose other books that would be interesting when someone views an offer.
Currently the data that I store are the following (they are updated each time someone visits an offer):
ISBN (book of the offer)
SessId (a unique id that everyone has when visiting the website)
NumberOfVisit (the number of times someone has viewed an offer of that book)
I also have access to some user-updated data which categorizes the books by subject and course. It isn't necessarily up-to-date and precise, but it's nonetheless data.
What are the approaches to list the most interesting books for a given book?
|
I have R scripts for reading large amounts of csv data from different files and then performing machine learning tasks such as SVM for classification.
Are there any libraries for making use of multiple cores on the server for R?
Or, what is the most suitable way to achieve that?
|
I've heard that AIC can be used to choose among several models (which regressor to use).
But I would like to understand formally what it is at a kind of "advanced undergraduate" level, which I think would be something formal but with intuition arising from the formula.
And is it possible to implement AIC in Stata with complex survey data?
|
I am creating multiple logistic regression models using lrm from Harrell's Design package in R. One model I would like to make is the model with no predictors. For example, I want to predict a constant c such that:
logit(Y) ~ c
I know how to compute c (divide the number of "1"s by the total); what I would like is to use lrm so I can manipulate it as a model in a way consistent with the other models I am making. Is this possible, and if so, how?
I have tried so far:
library(Design)
data(mtcars)
lrm(am ~ 1, data=mtcars)
which gives the error:
Error in dimnames(stats) <- list(names(cof), c("Coef", "S.E.", "Wald Z", :
length of 'dimnames' [1] not equal to array extent
and I have tried:
lrm(am ~ ., data=mtcars)
But this uses all the predictors, rather than none of the predictors.
|
I have $N$ paired observations ($X_i$, $Y_i$) drawn from a common unknown distribution, which has finite first and second moments, and is symmetric around the mean.
Let $\sigma_X$ be the standard deviation of $X$ (unconditional on $Y$), and $\sigma_Y$ the same for $Y$. I would like to test the hypothesis
$H_0$: $\sigma_X = \sigma_Y$
$H_1$: $\sigma_X \neq \sigma_Y$
Does anyone know of such a test? I can assume in first analysis that the distribution is normal, although the general case is more interesting. I am looking for a closed-form solution. Bootstrap is always a last resort.
|
I'm working on regression models in the STATISTICA application and I need to know what the Fisher-Snedecor distribution is for and how to analyze my regression model with this distribution.
What does the significance level mean? What are v1 and v2? I need an explanation and a little tutorial on real data.
|
Google Website Optimizer (GWO) is a tool provided by Google to do A/B and MVT experiments on websites.
This has been an unanswered question for a long time so I thought I'd ask it here and see if I can get any help. Here is some documentation (clues) that Google has published about the statistics used:
All about statistics
Fractional versus Full Factorial analysis
Example Data (summary pdf):
XML
CSV
I (as well as many others) am interested in discovering the math used by Google, because no one has been able to reproduce their calculations yet.
Thank you in advance for any help that you can provide.
|
The coefficient on a logged explanatory variable when the dependent variable also is in log form is an elasticity (or the percentage change in the dependent variable if the explanatory variable changes by one percent). Suppose I estimate a regression without logging the dependent variable but I use a log link in a General Linear Model (and family gaussian) while the explanatory variable remains in log form. Is the coefficient on that explanatory variable still an elasticity?
|
Does anyone know of a variation of Fisher's Exact Test which takes weights into account? For instance sampling weights.
So instead of the usual 2x2 cross table, every data point has a "mass" or "size" value weighing the point.
Example data:
A B weight
N N 1
N N 3
Y N 1
Y N 2
N Y 6
N Y 7
Y Y 1
Y Y 2
Y Y 3
Y Y 4
Fisher's Exact Test then uses this 2x2 cross table:
A\B N Y All
N 2 2 4
Y 2 4 6
All 4 6 10
If we would take the weight as an 'actual' number of data points, this would result in:
A\B N Y All
N 4 13 17
Y 3 10 13
All 7 23 30
But that would result in much too high a confidence. One data point changing from N/Y to N/N would make a very large difference in the statistic.
Plus, it wouldn't work if any weight contained fractions.
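For reference, the tables above can be rebuilt in R as follows (this only reproduces the two cross tables and runs the ordinary unweighted test; it does not solve the weighting problem):
d <- data.frame(
  A      = c("N", "N", "Y", "Y", "N", "N", "Y", "Y", "Y", "Y"),
  B      = c("N", "N", "N", "N", "Y", "Y", "Y", "Y", "Y", "Y"),
  weight = c(1, 3, 1, 2, 6, 7, 1, 2, 3, 4)
)
xtabs(~ A + B, data = d)                 # unweighted 2x2 table
xtabs(weight ~ A + B, data = d)          # weights treated as counts
fisher.test(xtabs(~ A + B, data = d))    # the usual (unweighted) exact test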
|
Sorry for the verbose background to this question:
Occasionally in investigations of animal behaviour, an experimenter is interested in the amount of time that a subject spends in different, pre-defined zones in a test apparatus. I've often seen this sort of data analyzed using ANOVA; however, I have never been entirely convinced of the validity of such analyses, given that ANOVA assumes the observations are independent, and they never actually are independent in these analyses (since more time spent in one zone means that less is spent in other zones!).
For example,
D. R. Smith, C. D. Striplin, A. M. Geller, R. B. Mailman, J. Drago, C. P. Lawler, M. Gallagher, Behavioural assessment of mice lacking D1A dopamine receptors, Neuroscience, Volume 86, Issue 1, 21 May 1998, Pages 135-146
In the above article, they reduce the degrees of freedom by 1 in order to compensate for the non-independence. However, I am not sure how such a manipulation can actually ameliorate this violation of ANOVA assumptions.
Perhaps a chi-squared procedure might be more appropriate? What would you do to analyze data like this (preference for zones, based on time spent in zones)?
Thanks!
|
Say I want to estimate a large number of parameters, and I want to penalize some of them because I believe they should have little effect compared to the others. How do I decide what penalization scheme to use? When is ridge regression more appropriate? When should I use lasso?
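For reference, in standard notation (not tied to any particular data set), both estimators minimize the residual sum of squares plus a penalty and differ only in the norm used:
$$\hat\beta^{\text{ridge}} = \arg\min_\beta\ \|y - X\beta\|_2^2 + \lambda \sum_j \beta_j^2, \qquad \hat\beta^{\text{lasso}} = \arg\min_\beta\ \|y - X\beta\|_2^2 + \lambda \sum_j |\beta_j| .$$
The $\ell_1$ penalty can set coefficients exactly to zero, so the lasso performs selection as well as shrinkage, while ridge only shrinks coefficients toward zero.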
|
I created a quick fun Excel Spreadsheet tonight to try and predict which video games I'll enjoy if I buy them. I'm wondering if this quick example makes sense from a Logistic Regression perspective and if I am computing all of the values correctly.
Unfortunately, if I did everything correctly I doubt I have much to look forward to on my XBOX or PS3 ;)
I laid out a few categories and weighted them like so (Real spreadsheet lists twice as many or so):
4 4 3 1
Visually Stunning Exhilarating Artistic Sporty
Then I went through some games I have and rated them in each category (ratings of 0-4). I then set a separate cell to be the value of Beta_0 and tuned that until the resulting percentages all looked about right.
Next I entered in my expected ratings for the new games I was looking forward to and got percentages for those.
Example:
Beta_0 := -35
4 4 3 1
Visually Stunning Exhilarating Artistic Sporty
4 4 0 1
Would be calculated as
P = 1 / [1 + e^(-35 + (4*4 + 4*4 + 3*0 + 1*1))]
P = 88.1%
If I were to automate the regression am I correct in thinking I'd be tuning Beta_0 to make it so the positive training examples come out high and the negative training examples come out low?
I'm completely new to this (just started today thanks to this site actually!) so please have no concern about bruising my ego, I'm eager to learn more.
Thanks!
|
Given a list of p-values generated from independent tests, sorted in ascending order, one can use the Benjamini-Hochberg procedure for multiple testing correction. The procedure allows you to calculate the False Discovery Rate (FDR) for each of the p-values. That is, at each "position" in the sorted list of p-values, it will tell you what proportion of those are likely to be false rejections of the null hypothesis.
My question is, are these FDR values to be referred to as "q-values", or as "corrected p-values", or as something else entirely?
EDIT 2010-07-12: I would like to more fully describe the correction procedure we are using. First, we sort the test results in increasing order by their un-corrected original p-value. Then, we iterate over the list, calculating what I have been interpreting as "the FDR expected if we were to reject the null hypothesis for this and all tests prior in the list," using the B-H correction, with an alpha equal to the observed, un-corrected p-value for the respective iteration. We then take, as what we've been calling our "q-value", the maximum of the previously corrected value (FDR at iteration i - 1) or the current value (at i), to preserve monotonicity.
Below is some Python code which represents this procedure:
def calc_benjamini_hochberg_corrections(p_values, num_total_tests):
    """
    Calculates the Benjamini-Hochberg correction for multiple hypothesis
    testing from a list of p-values *sorted in ascending order*.

    See
    http://en.wikipedia.org/wiki/False_discovery_rate#Independent_tests
    for more detail on the theory behind the correction.

    **NOTE:** This is a generator, not a function. It will yield values
    until all calculations have completed.

    :Parameters:
    - `p_values`: a list or iterable of p-values sorted in ascending
      order
    - `num_total_tests`: the total number of tests (p-values)

    """
    prev_bh_value = 0
    for i, p_value in enumerate(p_values):
        bh_value = p_value * num_total_tests / (i + 1)
        # Sometimes this correction can give values greater than 1,
        # so we set those values at 1
        bh_value = min(bh_value, 1)
        # To preserve monotonicity in the values, we take the
        # maximum of the previous value or this one, so that we
        # don't yield a value less than the previous.
        bh_value = max(bh_value, prev_bh_value)
        prev_bh_value = bh_value
        yield bh_value
|
I realize this is pedantic and trite, but as a researcher in a field outside of statistics, with limited formal education in statistics, I always wonder if I'm writing "p-value" correctly. Specifically:
Is the "p" supposed to be capitalized?
Is the "p" supposed to be italicized? (Or in mathematical font, in TeX?)
Is there supposed to be a hyphen between "p" and "value"?
Alternatively, is there no "proper" way of writing "p-value" at all, and any dolt will understand what I mean if I just place "p" next to "value" in some permutation of these options?
|
Has anyone gone through some papers using Vector Error Correction Models in causality applications with more than one cointegration vector, say two? I guess there will be more than one ECM term. How can one assess the endogeneity of the left-hand-side variables if t-stats on different ECM coefficients yield different (conflicting) results? For the left-hand-side variables to be endogenous, should we have both ECM coefficients negative and significant?
Javed Iqbal
|
My question is about cross validation when there are many more variables than observations. To fix ideas, I propose to restrict to the classification framework in very high dimension (more features than observations).
Problem: Assume that for each variable $i=1,\dots,p$ you have a measure of importance $T[i]$ that exactly measures the interest of feature $i$ for the classification problem. The problem of selecting a subset of features to optimally reduce the classification error is then reduced to that of finding the number of features.
Question: What is the most efficient way to run cross validation in this case (cross validation scheme)? My question is not about how to write the code, but about which version of cross validation to use when trying to find the number of selected features (to minimize the classification error), and about how to deal with the high dimension when doing cross validation (hence the problem above may be a bit of a 'toy problem' to discuss CV in high dimension).
Notations: $n$ is the size of the learning set, $p$ the number of features (i.e. the dimension of the feature space). By very high dimension I mean $p \gg n$ (for example $p=10000$ and $n=100$).
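To make the setting concrete, here is a small R sketch (simulated data, a crude t-like filter, and LDA as the classifier; all choices are illustrative). The point it illustrates is that the feature ranking has to be redone inside each fold, rather than once on the full data, before the number of features is chosen by the CV error.
library(MASS)
set.seed(1)
n <- 100; p <- 2000; k_grid <- c(1, 2, 5, 10, 20, 50)
y <- factor(rep(0:1, each = n / 2))
x <- matrix(rnorm(n * p), n, p)
x[y == 1, 1:10] <- x[y == 1, 1:10] + 1          # only the first 10 features carry signal

folds <- sample(rep(1:5, length.out = n))
cv_error <- sapply(k_grid, function(k) {
  mean(sapply(1:5, function(f) {
    tr <- folds != f
    # rank features on the training part of this fold only
    tstat <- abs(colMeans(x[tr & y == 1, ]) - colMeans(x[tr & y == 0, ])) /
             apply(x[tr, ], 2, sd)
    top  <- order(tstat, decreasing = TRUE)[1:k]
    fit  <- lda(x[tr, top, drop = FALSE], grouping = y[tr])
    pred <- predict(fit, x[!tr, top, drop = FALSE])$class
    mean(pred != y[!tr])                        # test error on the held-out fold
  }))
})
rbind(k = k_grid, cv_error = round(cv_error, 3))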
|
Here's something I've wondered about for a while, but haven't been able to discover the correct terminology. Say you have a relatively complicated density function that you suspect might have a close approximation as a sum of (properly weighted) simpler density functions. Have such things been studied? I'm particularly interested in reading about any applications.
Here's one example I've found:
Expansion of probability density functions as a sum of gamma densities with applications in risk theory
|
Possible Duplicate:
How to understand degrees of freedom?
I was at a talk a few months back where the speaker used the term 'degrees of freedom'. She briefly said something along the lines of it meaning the number of values used to form a statistic that are free to vary.
What does this mean? I'm specifically looking for an intuitive explanation.
|