How should I elicit prior distributions from experts when fitting a Bayesian model?
|
In many different statistical methods there is an "assumption of normality". What is "normality" and how do I know if there is normality?
|
What are some valuable Statistical Analysis open source projects available right now?
Edit: as pointed out by Sharpie, valuable could mean helping you get things done faster or more cheaply.
|
I have two groups of data. Each with a different distribution of multiple variables. I'm trying to determine if these two groups' distributions are different in a statistically significant way. I have the data in both raw form and binned up in easier to deal with discrete categories with frequency counts in each.
What tests/procedures/methods should I use to determine whether or not these two groups are significantly different and how do I do that in SAS or R (or Orange)?
|
Last year, I read a blog post from Brendan O'Connor entitled "Statistics vs. Machine Learning, fight!" that discussed some of the differences between the two fields. Andrew Gelman responded favorably to this:
Simon Blomberg:
From R's fortunes package: To paraphrase provocatively, 'machine learning is statistics minus any checking of models and assumptions'. -- Brian D. Ripley (about the difference between machine learning and statistics), useR! 2004, Vienna (May 2004) :-) Season's Greetings!
Andrew Gelman:
In that case, maybe we should get rid of checking of models and assumptions more often. Then maybe we'd be able to solve some of the problems that the machine learning people can solve but we can't!
There was also the "Statistical Modeling: The Two Cultures" paper by Leo Breiman in 2001 which argued that statisticians rely too heavily on data modeling, and that machine learning techniques are making progress by instead relying on the predictive accuracy of models.
Has the statistics field changed over the last decade in response to these critiques? Do the two cultures still exist or has statistics grown to embrace machine learning techniques such as neural networks and support vector machines?
|
I've been working on a new method for analyzing and parsing datasets to identify and isolate subgroups of a population without foreknowledge of any subgroup's characteristics. While the method works well enough with artificial data samples (i.e. datasets created specifically for the purpose of identifying and segregating subsets of the population), I'd like to try testing it with live data.
What I'm looking for is a freely available (i.e. non-confidential, non-proprietary) data source. Preferably one containing bimodal or multimodal distributions, or one that is obviously composed of multiple subsets that cannot be easily pulled apart via traditional means. Where would I go to find such information?
|
Many studies in the social sciences use Likert scales. When is it appropriate to use Likert data as ordinal and when is it appropriate to use it as interval data?
|
Is there a good, modern treatment covering the various methods of multivariate interpolation, including which methodologies are typically best for particular types of problems? I'm interested in a solid statistical treatment including error estimates under various model assumptions.
An example:
Shepard's method
Say we're sampling from a multivariate normal distribution with unknown parameters. What can we say about the standard error of the interpolated estimates?
I was hoping for a pointer to a general survey addressing similar questions for the various types of multivariate interpolations in common use.
|
I have four competing models which I use to predict a binary outcome variable (say, employment status after graduating, 1 = employed, 0 = not-employed) for n subjects. A natural metric of model performance is hit rate which is the percentage of correct predictions for each one of the models.
It seems to me that I cannot use ANOVA in this setting as the data violates the assumptions underlying ANOVA. Is there an equivalent procedure I could use instead of ANOVA in the above setting to test for the hypothesis that all four models are equally effective?
|
What are some of the ways to forecast demographic census with some validation and calibration techniques?
Some of the concerns:
Census blocks vary in size, as rural areas are a lot larger than condensed urban areas. Is there a need to account for the difference in area size?
If, let's say, I have census data dating back 4-5 census periods, how far can I forecast it into the future?
If some of the census zones change slightly in boundaries, how can I account for that change?
What are the methods to validate census forecasts? For example, if I have data for the existing 5 census periods, should I model the first 3 and test on the latter two, or is there another way?
What is the state of practice in forecasting census data, and what are some of the state-of-the-art methods?
|
How would you describe in plain English the characteristics that distinguish Bayesian from Frequentist reasoning?
|
How can I find the PDF (probability density function) of a distribution given the CDF (cumulative distribution function)?
|
What modern tools (Windows-based) do you suggest for modeling financial time series?
|
What is a standard deviation, how is it calculated and what is its use in statistics?
|
Which methods are used for testing random variate generation algorithms?
|
After taking a statistics course and then trying to help fellow students, I noticed one subject that inspires much head-desk banging is interpreting the results of statistical hypothesis tests. It seems that students easily learn how to perform the calculations required by a given test but get hung up on interpreting the results. Many computerized tools report test results in terms of "p values" or "t values".
How would you explain the following points to college students taking their first course in statistics:
What does a "p-value" mean in relation to the hypothesis being tested? Are there cases when one should be looking for a high p-value or a low p-value?
What is the relationship between a p-value and a t-value?
|
What R packages should I install for seasonality analysis?
|
I have a data set that I'd expect to follow a Poisson distribution, but it is overdispersed by about 3-fold. At present, I'm modelling this overdispersion using something like the following code in R.
## assuming a median value of 1500
med <- 1500
rawDist <- rpois(1000000, med)
oDdist <- rawDist + ((rawDist - med) * 3)
Visually, this seems to fit my empirical data very well. If I'm happy with the fit, is there any reason that I should be doing something more complex, like using a negative binomial distribution, as described here? (If so, any pointers or links on doing so would be much appreciated).
Oh, and I'm aware that this creates a slightly jagged distribution (due to the multiplication by three), but that shouldn't matter for my application.
Update: For the sake of anyone else who searches and finds this question, here's a simple R function to model an overdispersed Poisson using a negative binomial distribution. Set d to the desired variance/mean ratio:
rpois.od <- function(n, lambda, d = 1) {
  if (d == 1)
    rpois(n, lambda)
  else
    rnbinom(n, size = lambda / (d - 1), mu = lambda)
}
(via the R mailing list: https://stat.ethz.ch/pipermail/r-help/2002-June/022425.html)
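A quick usage sketch of the function above (the lambda and d values below are illustrative); it simply checks that the simulated variance-to-mean ratio comes out near the requested d:
# check the overdispersion of a sample drawn with rpois.od
set.seed(1)
y <- rpois.od(100000, lambda = 1500, d = 3)
mean(y)           # should be close to 1500
var(y) / mean(y)  # should be close to d = 3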
|
There is an old saying: "Correlation does not mean causation". When I teach, I tend to use the following standard examples to illustrate this point:
number of storks and birth rate in Denmark;
number of priests in America and alcoholism;
in the start of the 20th century it was noted that there was a strong correlation between 'Number of radios' and 'Number of people in Insane Asylums'
and my favorite: pirates cause global warming.
However, I do not have any references for these examples and whilst amusing, they are obviously false.
Does anyone have any other good examples?
|
I'm looking for worked out solutions using Bayesian and/or logit analysis similar to a workbook or an annal.
The worked out problems could be of any field; however, I'm interested in urban planning / transportation related fields.
|
What algorithms are used in modern and good-quality random number generators?
|
How would you explain data visualization and why it is important to a layman?
|
I have a dataset of 130k internet users characterized by 4 variables describing each user's number of sessions, locations visited, average data download and session time, aggregated from four months of activity.
The dataset is very heavy-tailed. For example, a third of the users logged in only once during the four months, whereas six users had more than 1000 sessions.
I wanted to come up with a simple classification of users, preferably with indication of the most appropriate number of clusters.
Is there anything you could recommend as a solution?
|
What do they mean when they say "random variable"?
|
Are there any objective methods of assessment or standardized tests available to measure the effectiveness of a software that does pattern recognition?
|
What are the main differences between performing principal component analysis (PCA) on the correlation matrix and on the covariance matrix? Do they give the same results?
|
As I understand it, UK schools teach that the standard deviation is found using
$$\sigma = \sqrt{\frac{\sum (x_i - \bar{x})^2}{n}},$$
whereas US schools teach
$$s = \sqrt{\frac{\sum (x_i - \bar{x})^2}{n-1}}$$
(at a basic level anyway).
This has caused a number of my students problems in the past as they have searched on the Internet, but found the wrong explanation.
Why the difference?
With simple datasets say 10 values, what degree of error will there be if the wrong method is applied (eg in an exam)?
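To get a feel for the size of the discrepancy with 10 values, here is a small sketch assuming the two formulas differ only in the divisor (n versus n - 1); the data are made up:
# compare the n and n-1 divisors on a small, hypothetical dataset
x <- c(2, 4, 4, 4, 5, 5, 7, 9, 10, 12)
n <- length(x)
sd_n   <- sqrt(sum((x - mean(x))^2) / n)        # divisor n
sd_nm1 <- sqrt(sum((x - mean(x))^2) / (n - 1))  # divisor n - 1 (what R's sd() uses)
c(sd_n, sd_nm1, sd_nm1 / sd_n)                  # the ratio is sqrt(n/(n-1)), about 1.054 for n = 10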
|
What is the back-propagation algorithm and how does it work?
|
With the recent FIFA World Cup, I decided to have some fun and determine which months produced World Cup football players. It turned out that most footballers in the 2010 World Cup were born in the first half of the year.
Someone pointed out that children born in the first half of the year had a physical advantage over others and hence "survivorship bias" was involved in the equation. Is this an accurate observation? Can someone please explain why he says that?
Also, when trying to understand the concept, I found most examples revolved around the financial sector. Are there any other everyday-life examples explaining it?
|
Duplicate thread: I just installed the latest version of R. What packages should I obtain?
What are the R packages you couldn't imagine your daily work with data without?
Please list both general and specific tools.
UPDATE:
As of 24.10.10, ggplot2 seems to be the winner with 7 votes.
Other packages mentioned more than once are:
plyr - 4
RODBC, RMySQL - 4
sqldf - 3
lattice - 2
zoo - 2
Hmisc/rms - 2
Rcurl - 2
XML - 2
Thanks all for your answers!
|
I'm using R and the manuals on the R site are really informative. However, I'd like to see some more examples and implementations with R which can help me develop my knowledge faster. Any suggestions?
|
We're trying to use a Gaussian process to model h(t) -- the hazard function -- for a very small initial population, and then fit that using the available data. While this gives us nice plots for credible sets for h(t) and so on, it unfortunately also just pushes the inference problem from h(t) to the covariance function of our process. Perhaps predictably, we have several reasonable and equally defensible guesses for this that all produce different results.
Has anyone run across any good approaches for addressing such a problem? Gaussian-process related or otherwise?
|
I have been using various GARCH-based models to forecast volatility for various North American equities using historical daily data as inputs.
Asymmetric GARCH models are often cited as a modification of the basic GARCH model to account for the 'leverage effect' i.e. volatility tends to increase more after a negative return than a similarly sized positive return.
What kind of a difference would you expect to see between a standard GARCH and an asymmetric GARCH forecast for a broad-based equity index like the S&P 500 or the NASDAQ-100?
There is nothing particularly special about these two indices, but I think it is helpful to give something concrete to focus the discussion, as I am sure the effect would be different depending on the equities used.
|
I have some ordinal data gained from survey questions. In my case they are Likert style responses (Strongly Disagree-Disagree-Neutral-Agree-Strongly Agree). In my data they are coded as 1-5.
I don't think means would mean much here, so what basic summary statistics are considered useful?
|
I'd like to see an answer with a qualitative view on the problem, not just a definition. Examples and analogies from other areas of applied math would also be good.
I understand my question is silly, but I can't find a good and intuitive introductory textbook on signal processing — if someone could suggest one, I would be happy.
|
What is the best blog on data visualization?
I'm making this question a community wiki since it is highly subjective. Please limit each answer to one link.
Please note the following criteria for proposed answers:
[A]cceptable answers to questions like this ...need to supply adequate descriptions and reasoned justification. A mere hyperlink doesn't do it. ...[A]ny future replies [must] meet ...[these] standards; otherwise, they will be deleted without further comment.
|
Following one-way ANOVA, there are many possible follow-up multiple comparison tests. Holm's test (or better, the Holm-Sidak test) has lots of power, but because it works in a stepwise manner, it cannot compute confidence intervals. Its advantage over the tests that can compute confidence intervals (Tukey, Dunnett) is that it has more power. But is it fair to say that the Holm method always has more power than the methods of Tukey and Dunnett? Or does it depend...?
|
I have been looking into theoretical frameworks for method selection (note: not model selection) and have found very little systematic, mathematically-motivated work. By 'method selection', I mean a framework for distinguishing the appropriate (or better, optimal) method with respect to a problem, or problem type.
What I have found is substantial, if piecemeal, work on particular methods and their tuning (i.e. prior selection in Bayesian methods), and method selection via bias selection (e.g. Inductive Policy: The Pragmatics of Bias Selection). I may be unrealistic at this early stage of machine learning's development, but I was hoping to find something like what measurement theory does in prescribing admissible transformations and tests by scale type, only writ large in the arena of learning problems.
Any suggestions?
|
What statistical research blogs would you recommend, and why?
|
In the definition of standard deviation, why do we have to square the difference from the mean to get the mean (E) and take the square root back at the end? Can't we just simply take the absolute value of the difference instead and get the expected value (mean) of those, and wouldn't that also show the variation of the data? The number is going to be different from the square method (the absolute-value method will be smaller), but it should still show the spread of the data. Does anybody know why we take this square approach as a standard?
The definition of standard deviation:
$\sigma = \sqrt{E\left[\left(X - \mu\right)^2\right]}.$
Can't we just take the absolute value instead and still be a good measurement?
$\sigma = E\left[|X - \mu|\right]$
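A small numeric sketch of the point above, using simulated data: both quantities track the spread, but the absolute-value version is smaller (for a normal distribution, by a factor of roughly 0.8):
# compare the squared-deviation and absolute-deviation measures of spread
set.seed(42)
x <- rnorm(100000, mean = 0, sd = 2)
sqrt(mean((x - mean(x))^2))  # about 2: the (population-style) standard deviation
mean(abs(x - mean(x)))       # about 1.6, i.e. 2 * sqrt(2/pi): the mean absolute deviation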
|
I'm a programmer without statistical background, and I'm currently looking at different classification methods for a large number of different documents that I want to classify into pre-defined categories. I've been reading about kNN, SVM and NN. However, I have some trouble getting started. What resources do you recommend? I do know single variable and multi variable calculus quite well, so my math should be strong enough. I also own Bishop's book on Neural Networks, but it has proven to be a bit dense as an introduction.
|
Which is the best introductory textbook for Bayesian statistics?
One book per answer, please.
|
In Plain English, how does one interpret a Bland-Altman plot?
What are the advantages of using a Bland-Altman plot over other methods of comparing two different measurement methods?
|
I had a plan of learning R in the near future. Reading another question I found out about Clojure. Now I don't know what to do.
I think a big advantage of R for me is that some people in Economics use it, including one of my supervisors (though the other said: stay away from R!). One advantage of Clojure is that it is Lisp-based, and as I have started learning Emacs and I am keen on writing my own customisations, it would be helpful (yeah, I know Clojure and Elisp are different dialects of Lisp, but they are both Lisp and thus similar I would imagine).
I can't ask which one is better, because I know this is very personal, but could someone give me the advantages (or disadvantages) of Clojure vs R, especially in practical terms? For example, which one should be easier to learn, which one is more flexible or more powerful, which one has more libraries, more support, more users, etc.?
My intended use: The bulk of my estimation should be done using Matlab, so I am not looking for anything too deep in terms of statistical analysis, but rather a software to substitute Excel for the initial data manipulation and visualisation, summary statistics and charting, but also some basic statistical analysis or the initial attempts at my estimation.
|
On smaller window sizes, n log n sorting might work. Are there any better algorithms to achieve this?
|
I'm interested in learning R on the cheap. What's the best free resource/book/tutorial for learning R?
|
Possible Duplicate:
Locating freely available data samples
Where can I find freely accessible data sources?
I'm thinking of sites like
http://www2.census.gov/census_2000/datasets/?
|
A while ago a user on the R-help mailing list asked about the soundness of using PCA scores in a regression. The user was trying to use some PC scores to explain variation in another PC (see full discussion here). The answer was no, this is not sound, because PCs are orthogonal to each other.
Can someone explain in a bit more detail why this is so?
|
Label switching (i.e., the posterior distribution is invariant to switching component labels) is a problematic issue when using MCMC to estimate mixture models.
Is there a standard (as in widely accepted) methodology to deal with the issue?
If there is no standard approach then what are the pros and cons of the leading approaches to solve the label switching problem?
|
I really enjoy hearing simple explanations to complex problems. What is your favorite analogy or anecdote that explains a difficult statistical concept?
My favorite is Murray's explanation of cointegration using a drunkard and her dog. Murray explains how two random processes (a wandering drunk and her dog, Oliver) can have unit roots but still be related (cointegrated) since their joint first differences are stationary.
The drunk sets out from the bar, about to wander aimlessly in random-walk fashion. But periodically she intones "Oliver, where are you?", and Oliver interrupts his aimless wandering to bark. He hears her; she hears him. He thinks, "Oh, I can't let her get too far off; she'll lock me out." She thinks, "Oh, I can't let him get too far off; he'll wake me up in the middle of the night with his barking." Each assesses how far away the other is and moves to partially close that gap.
|
I know this must be standard material, but I had difficulty in finding a proof in this form.
Let $e$ be a standard white Gaussian vector of size $N$. Let all the other matrices in the following be constant.
Let $v = Xy + e$, where $X$ is an $N\times L$ matrix and $y$ is an $L\times 1$ vector, and let
$$\left\{\begin{align}
\bar y &= (X^TX)^{-1}X^Tv\\
\bar e &= v - X\bar y
\end{align}\right.\quad.$$
If $c$ is any constant vector, $J = N - \mathrm{rank}(X)$, and
$$\left\{\begin{align}
u &= c^T\bar y\\
s^2 &= \bar e^T\bar ec^T(X^TX)^{-1}c
\end{align}\right.\quad,$$
then the random variable defined as $t = u/\sqrt{s^2/J}$ follows a Student's t distribution with $J$ degrees of freedom.
I would be grateful if you could provide an outline for its proof.
|
Econometricians often talk about a time series being integrated with order k, I(k). k being the minimum number of differences required to obtain a stationary time series.
What methods or statistical tests can be used to determine, given a level of confidence, the order of integration of a time series?
|
Maybe the concept, why it's used, and an example.
|
Australia is currently having an election and understandably the media reports new political poll results daily. In a country of 22 million what percentage of the population would need to be sampled to get a statistically valid result?
Is it possible that using too large a sample could affect the results, or does statistical validity monotonically increase with sample size?
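As background for the sample-size part, here is a sketch of the usual margin-of-error calculation for a proportion (assuming simple random sampling and the worst case p = 0.5); note that the required n depends on the desired margin of error, not on the population size:
# sample size needed for a given margin of error on a proportion
margin <- 0.03                         # plus or minus 3 percentage points
z <- qnorm(0.975)                      # 95% confidence
p <- 0.5                               # worst-case variance
n <- ceiling(z^2 * p * (1 - p) / margin^2)
n                                      # about 1068 respondents
n / 22e6                               # a tiny fraction of a 22-million population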
|
For univariate kernel density estimators (KDE), I use Silverman's rule for calculating $h$:
\begin{equation}
0.9 \min(sd, IQR/1.34)\times n^{-0.2}
\end{equation}
What are the standard rules for multivariate KDE (assuming a normal kernel)?
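For reference, a minimal sketch of the univariate rule exactly as written above, checked against base R's implementation of the same rule of thumb:
# Silverman's univariate rule of thumb, hand-rolled and via bw.nrd0()
silverman_h <- function(x) {
  n <- length(x)
  0.9 * min(sd(x), IQR(x) / 1.34) * n^(-0.2)
}
set.seed(1)
x <- rnorm(500)
silverman_h(x)   # hand-rolled version
bw.nrd0(x)       # base R's version of the same rule, for comparison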
|
Are there any free statistical textbooks available?
|
I recently started working for a tuberculosis clinic. We meet periodically to discuss the number of TB cases we're currently treating, the number of tests administered, etc. I'd like to start modeling these counts so that we're not just guessing whether something is unusual or not. Unfortunately, I've had very little training in time series, and most of my exposure has been to models for very continuous data (stock prices) or very large numbers of counts (influenza). But we deal with 0-18 cases per month (mean 6.68, median 7, var 12.3), which are distributed like this:
[image lost to the mists of time]
[image eaten by a grue]
I've found a few articles that address models like this, but I'd greatly appreciate hearing suggestions from you - both for approaches and for R packages that I could use to implement those approaches.
EDIT: mbq's answer has forced me to think more carefully about what I'm asking here; I got too hung-up on the monthly counts and lost the actual focus of the question. What I'd like to know is: does the (fairly visible) decline from, say, 2008 onward reflect a downward trend in the overall number of cases? It looks to me like the number of cases monthly from 2001-2007 reflects a stable process; maybe some seasonality, but overall stable. From 2008 through the present, it looks like that process is changing: the overall number of cases is declining, even though the monthly counts might wobble up and down due to randomness and seasonality. How can I test if there's a real change in the process? And if I can identify a decline, how could I use that trend and whatever seasonality there might be to estimate the number of cases we might see in the upcoming months?
|
Oftentimes a statistical analyst is handed a dataset and asked to fit a model using a technique such as linear regression. Very frequently the dataset is accompanied by a disclaimer similar to "Oh yeah, we messed up collecting some of these data points -- do what you can".
This situation leads to regression fits that are heavily impacted by the presence of outliers that may be erroneous data. Given the following:
It is dangerous from both a scientific and moral standpoint to throw out data for no reason other than it "makes the fit look bad".
In real life, the people who collected the data are frequently not available to answer questions such as "when generating this data set, which of the points did you mess up, exactly?"
What statistical tests or rules of thumb can be used as a basis for excluding outliers in linear regression analysis?
Are there any special considerations for multilinear regression?
|
Is there a standard and accepted method for selecting the number of layers, and the number of nodes in each layer, in a feed-forward neural network? I'm interested in automated ways of building neural networks.
|
I need to analyze the 100k MovieLens dataset for clustering with two algorithms of my choice, between the likes of k-means, agnes, diana, dbscan, and several others. What tools (like Rattle, or Weka) would be best suited to help me make some simple clustering analysis over this dataset?
|
I'm aware that this one is far from a yes-or-no question, but I'd like to know which techniques you prefer in categorical data analysis - i.e. cross tabulation with two categorical variables.
I've come up with:
χ2 test - well, this is quite self-explanatory
Fisher's exact test - when n < 40,
Yates' continuity correction - when n > 40,
Cramer's V - measure of association for tables which have more than 2 x 2 cells,
Φ coefficient - measure of association for 2 x 2 tables,
contingency coefficient (C) - measure of association for n x n tables,
odds ratio - independence of two categorical variables,
McNemar's test of marginal homogeneity,
And my question here is: Which statistical techniques for cross-tabulated data (two categorical variables) do you consider relevant (and why)?
|
I am sure that everyone who's trying to find patterns in historical stock market data or betting history would like to know about this. Given a huge set of data, and thousands of random variables that may or may not affect it, it makes sense to ask whether any patterns you extract from the data are indeed true patterns, not statistical flukes.
A lot of patterns are only valid when they are tested in sample. And even patterns that are valid out of sample may cease to be valid when you apply them in the real world.
I understand that it is not possible to be completely, 100% sure that a pattern is valid all the time, but besides in-sample and out-of-sample tests, are there any tests that could establish the validity of a pattern?
|
I am looking at fitting distributions to data (with a particular focus on the tail) and am leaning towards Anderson-Darling tests rather than Kolmogorov-Smirnov. What do you think are the relative merits of these or other tests for fit (e.g. Cramer-von Mises)?
|
Besides gnuplot and ggobi, what open source tools are people using for visualizing multi-dimensional data?
Gnuplot is more or less a basic plotting package.
Ggobi can do a number of nifty things, such as:
animate data along a dimension or among discrete collections
animate linear combinations varying the coefficients
compute principal components and other transformations
visualize and rotate 3 dimensional data clusters
use colors to represent a different dimension
What other useful approaches are based in open source and thus freely reusable or customizable?
Please provide a brief description of the package's abilities in the answer.
|
Following on from this question:
Imagine that you want to test for differences in central tendency between two groups (e.g., males and females) on a 5-point Likert item (e.g., satisfaction with life: Dissatisfied to Satisfied).
I think a t-test would be sufficiently accurate for most purposes, but that a bootstrap test of differences between group means would often provide a more accurate estimate of confidence intervals.
What statistical test would you use?
|
I'm curious about why we treat fitting GLMs as though they were some special optimization problem. Are they? It seems to me that they're just maximum likelihood: we write down the likelihood and then ... we maximize it! So why do we use Fisher scoring instead of any of the myriad optimization schemes that have been developed in the applied math literature?
|
What is the difference between discrete data and continuous data?
|
I have 2 ASR (Automatic Speech Recognition) models, providing me with text transcriptions for my testdata. The error measure I use is Word Error Rate.
What methods do I have to test for statistical significance of my new results?
An example:
I have an experiment with 10 speakers, each with the same 100 sentences, for a total of 900 words per speaker. Method A has a WER (word error rate) of 19.0%, Method B 18.5%.
How do I test whether Method B is significantly better?
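To make the setup concrete, here is a sketch of one resampling idea, a paired sign-flip permutation test on per-speaker error counts; the counts below are hypothetical, not taken from the question, and this is offered as an illustration rather than as the established method for WER comparisons:
# paired sign-flip permutation test on per-speaker WER differences
errors_A <- c(180, 160, 175, 150, 190, 170, 165, 185, 172, 163)  # hypothetical counts out of 900 words
errors_B <- c(172, 158, 170, 148, 185, 168, 160, 180, 170, 154)
d <- (errors_A - errors_B) / 900        # per-speaker WER differences
obs <- mean(d)
set.seed(1)
perm <- replicate(10000, mean(d * sample(c(-1, 1), length(d), replace = TRUE)))
mean(abs(perm) >= abs(obs))             # two-sided permutation p-value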
|
Suppose I have a large set of multivariate data with at least three variables. How can I find the outliers? Pairwise scatterplots won't work as it is possible for an outlier to exist in 3 dimensions that is not an outlier in any of the 2 dimensional subspaces.
I am not thinking of a regression problem, but of true multivariate data. So answers involving robust regression or computing leverage are not helpful.
One possibility would be to compute the principal component scores and look for an outlier in the bivariate scatterplot of the first two scores. Would that be guaranteed to work? Are there better approaches?
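To make that possibility concrete, here is a minimal sketch with made-up data (it makes no claim about whether the approach is guaranteed to work, which is exactly the question):
# compute principal component scores and inspect the first two
set.seed(1)
X <- matrix(rnorm(300 * 5), ncol = 5)   # hypothetical multivariate data
X[1, ] <- X[1, ] + 6                    # plant one gross outlier
pc <- prcomp(X, scale. = TRUE)
plot(pc$x[, 1], pc$x[, 2], xlab = "PC1 score", ylab = "PC2 score")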
|
What are some good visualization libraries for online use? Are they easy to use and is there good documentation?
|
If $X_1, ..., X_n$ are independent identically-distributed random variables, what can be said about the distribution of $\min(X_1, ..., X_n)$ in general?
|
What are principal component scores (PC scores, PCA scores)?
|
I have a friend who is an MD and wants to refresh his statistics. So is there any recommended resource online (or offline)? He did stats ~20 years ago.
|
Which visualization libraries (plots, graphs, ...) would you suggest for use in a standalone application (Linux, .NET, Windows, whatever)? Reasonable performance would be nice as well.
|
Why is the average of the highest value from 100 draws from a normal distribution different from the 98th percentile of the normal distribution? It seems by definition that they should be the same. But...
Code in R:
NSIM <- 10000
x <- rep(NA,NSIM)
for (i in 1:NSIM)
{
x[i] <- max(rnorm(100))
}
qnorm(.98)
qnorm(.99)
mean(x)
median(x)
hist(x)
I imagine that I'm misunderstanding something about what the maximum of 100 draws from the normal distribution should be, as is demonstrated by an unexpectedly asymmetrical distribution of maximum values.
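For comparison with the simulation above: under the i.i.d. standard normal assumption, the CDF of the maximum of 100 draws is pnorm(x)^100, so its quantiles can be computed directly (a sketch that complements, rather than replaces, the simulation):
# theoretical quantiles of the maximum of 100 standard normal draws
# P(max <= x) = pnorm(x)^100, so the p-quantile of the max is qnorm(p^(1/100))
qnorm(0.5^(1/100))   # median of the maximum, about 2.46
qnorm(0.98)          # 98th percentile of a single draw, about 2.05 -- a different quantity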
|
This is a bit of a flippant question, but I have a serious interest in the answer. I work in a psychiatric hospital and I have three years of data, collected every day across each ward, regarding the level of violence on that ward.
Clearly the model which fits these data is a time series model. I had to difference the scores in order to make them more normal. I fit an ARMA model with the differenced data, and the best fit I think was a model with one degree of differencing and first order auto-correlation at lag 2.
My question is, what on earth can I use this model for? Time series always seems so useful in the textbooks when it's about hare populations and oil prices, but now I've done my own the result seems so abstract as to be completely opaque. The differenced scores correlate with each other at lag two, but I can't really advise everyone to be on high alert two days after a serious incident in all seriousness.
Or can I?
|
I have a set of $N$ bodies, which is a random sample from a population whose mean and variance I want to estimate. A property of each body is measured $m_i$ times ($m_i>1$, and different for each body; the index $i$ identifies which body it is); the property is expected to be distributed around zero. I would like to describe the resulting measurements. In particular, I'm interested in the average property value and in the variance.
The average value is simple. First calculate the mean values for each body and then calculate the mean of means.
The variance is more tricky. There are two variances: the variance of measurement and the variance of property values. In order to have an idea of the confidence we have in any single measurement, we need to account for both sources. Unfortunately, I can't think of a good method. It is obvious that putting all the numbers in a single pool and calculating the stdev of this pool isn't a good idea.
Any suggestion?
EDIT
Colin Gillespie suggests applying a random effects model. This model seems to be the right solution for my case, except for the fact that it is described (in Wikipedia) for the case where each group (body, in my case) is sampled equally ($m_i$ is constant for all the bodies), which is not true in my case.
|
What is the easiest way to understand boosting?
Why doesn't it boost very weak classifiers "to infinity" (perfection)?
|
We may assume that we have a CSV file and we want a very basic line plot with several lines on one plot and a simple legend.
|
Rules:
one classifier per answer
vote up if you agree
downvote/remove duplicates.
put your application in the comment
|
If I have two lists A and B, both of which are subsets of a much larger list C, how can I determine if the degree of overlap of A and B is greater than I would expect by chance?
Should I just randomly select elements from C of the same lengths as lists A and B, determine that random overlap, and do this many times to determine some kind of empirical p-value? Is there a better way to test this?
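A minimal sketch of the Monte Carlo idea described above, with small made-up lists:
# empirical p-value for the overlap of A and B as subsets of C
set.seed(1)
C <- paste0("item", 1:1000)                          # hypothetical universe
A <- sample(C, 50)
B <- c(sample(A, 10), sample(setdiff(C, A), 40))     # built so that B overlaps A
obs_overlap <- length(intersect(A, B))
null_overlap <- replicate(10000,
  length(intersect(sample(C, length(A)), sample(C, length(B)))))
mean(null_overlap >= obs_overlap)                    # one-sided empirical p-value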
|
What is the difference between a population and a sample? What common variables and statistics are used for each one, and how do those relate to each other?
|
Due to the factorial in the Poisson distribution, it becomes impractical to estimate Poisson models (for example, using maximum likelihood) when the observations are large. So, for example, if I am trying to estimate a model to explain the number of suicides in a given year (only annual data are available), and say there are thousands of suicides every year, is it wrong to express suicides in hundreds, so that 2998 would become 29.98 ~= 30? In other words, is it wrong to change the unit of measurement to make the data manageable?
|
Is there a rule of thumb, or even any way at all, to tell how large a sample should be in order to estimate a model with a given number of parameters?
So, for example, if I want to estimate a least-squares regression with 5 parameters, how large should the sample be?
Does it matter what estimation technique you are using (e.g. maximum likelihood, least squares, GMM), or how many or what tests you are going to perform? Should the sample variability be taken into account when making the decision?
|
When would one prefer to use a Conditional Autoregressive model over a Simultaneous Autoregressive model when modelling autocorrelated geo-referenced aerial data?
|
When a non-hierarchical cluster analysis is carried out, the order of observations in the data file determines the clustering results, especially if the data set is small (e.g., 5000 observations). To deal with this problem I usually perform a random reordering of the data observations. My problem is that if I replicate the analysis n times, the results obtained are different, and sometimes these differences are great.
How can I deal with this problem? Maybe I could run the analysis several times and then assign each observation to the group to which it was assigned most often. Does anyone have a better approach to this problem?
|
What is meant when we say we have a saturated model?
|
Can someone explain to me the difference between the method of moments and GMM (the generalized method of moments), their relationship, and when one or the other should be used?
|
Suppose that I culture cancer cells in $n$ different dishes $g_1, g_2, \ldots, g_n$ and observe the number of cells $n_i$ in each dish that look different than normal. The total number of cells in dish $g_i$ is $t_i$. There are individual differences between individual cells, but also differences between the populations in different dishes, because each dish has a slightly different temperature, amount of liquid, and so on.
I model this as a beta-binomial distribution: $n_i \sim \mathrm{Binomial}(p_i, t_i)$ where $p_i \sim \mathrm{Beta}(\alpha, \beta)$. Given a number of observations of $n_i$ and $t_i$, how can I estimate $\alpha$ and $\beta$?
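One way to make the estimation concrete is a maximum-likelihood sketch with the beta-binomial log-likelihood written out in base R; the true $\alpha$ and $\beta$ used for simulation are assumptions for illustration only:
# maximum-likelihood estimation of alpha and beta for a beta-binomial model
set.seed(1)
n_dishes <- 50
t_i <- sample(200:400, n_dishes, replace = TRUE)   # total cells per dish
p_i <- rbeta(n_dishes, shape1 = 2, shape2 = 8)     # true alpha = 2, beta = 8
n_i <- rbinom(n_dishes, size = t_i, prob = p_i)    # abnormal-looking cells
# beta-binomial log-likelihood: sum over dishes of
#   log C(t_i, n_i) + log B(n_i + alpha, t_i - n_i + beta) - log B(alpha, beta)
negloglik <- function(par) {
  a <- exp(par[1]); b <- exp(par[2])               # keep parameters positive
  -sum(lchoose(t_i, n_i) + lbeta(n_i + a, t_i - n_i + b) - lbeta(a, b))
}
fit <- optim(c(0, 0), negloglik)
exp(fit$par)                                       # estimates of alpha and beta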
|
I know of Cameron and Trivedi's Microeconometrics Using Stata.
What are other good texts for learning Stata?
|
Am I looking for a better behaved distribution for the independent variable in question, or to reduce the effect of outliers, or something else?
|
It seems like when the assumption of homogeneity of variance is met that the results from a Welch adjusted t-test and a standard t-test are approximately the same. Why not simply always use the Welch adjusted t?
|
I'm a physics graduate who ended up doing infosec so most of the statistics I ever learned is useful for thermodynamics. I'm currently trying to think of a model for working out how many of a population of computers are infected with viruses, though I assume the maths works out the same way for real-world diseases so references in or answers relevant to that field would be welcome too.
Here's what I've come up with so far:
assume I know the total population of computers, N.
I know the fraction D of computers that have virus-detection software (i.e. the amount of the population that is being screened)
I know the fraction I of computers that have detection software that has reported an infection
I don't know, but can find out or estimate, the probability of Type I and II errors in the detection software.
I don't (yet) care about the time evolution of the population.
So where do I go from here? Would you model infection as a binomial distribution with probability like (I given D), or as a Poisson? Or is the distribution different?
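Not a full answer, but one piece of the reasoning can be sketched: if the detector's sensitivity and specificity are known or estimated, the observed positive fraction among screened machines can be corrected for misclassification. All numbers below are hypothetical, and I is read here as the infected fraction among screened machines, which is an assumption:
# correct an observed infection fraction for detection errors
N <- 100000        # total population of computers
D <- 0.40          # fraction with detection software (screened)
I <- 0.05          # fraction of screened machines reporting an infection
sens <- 0.90       # 1 - P(Type II error): probability a real infection is detected
spec <- 0.99       # 1 - P(Type I error): probability a clean machine stays quiet
prev_screened <- (I + spec - 1) / (sens + spec - 1)   # misclassification-corrected prevalence
prev_screened
round(prev_screened * N)   # naive extrapolation, assuming screened machines are representative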
|
There is a variant of boosting called gentleboost. How does gentle boosting differ from the better-known AdaBoost?
|
I'm looking for a book or online resource that explains different kinds of entropy such as Sample Entropy and Shannon Entropy and their advantages and disadvantages.
Can someone point me in the right direction?
|
I realize that the statistical analysis of financial data is a huge topic, but that is exactly why it is necessary for me to ask my question as I try to break into the world of financial analysis.
As at this point I know next to nothing about the subject, the results of my google searches are overwhelming. Many of the matches advocate learning specialized tools or the R programming language. While I will learn these when they are necessary, I'm first interested in books, articles or any other resources that explain modern methods of statistical analysis specifically for financial data. I assume there are a number of different wildly varied methods for analyzing data, so ideally I'm seeking an overview of the various methods that are practically applicable. I'd like something that utilizes real world examples that a beginner is capable of grasping but that aren't overly simplistic.
What are some good resources for learning about the statistical analysis of financial data?
|
Do you think that unbalanced classes is a big problem for k-nearest neighbor? If so, do you know any smart way to handle this?
|
I'm looking for a good algorithm (meaning minimal computation, minimal storage requirements) to estimate the median of a data set that is too large to store, such that each value can only be read once (unless you explicitly store that value). There are no bounds on the data that can be assumed.
Approximations are fine, as long as the accuracy is known.
Any pointers?
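One low-memory idea, sketched here as an assumption rather than a recommendation: keep a single running estimate and nudge it toward each incoming value with a shrinking step (accuracy depends on the step-size schedule, which is a tuning choice):
# single-pass, O(1)-memory median estimate via stochastic approximation
streaming_median <- function(x) {
  m <- x[1]
  for (i in 2:length(x)) {
    eta <- 1 / sqrt(i)                # shrinking step size; a tuning choice
    m <- m + eta * sign(x[i] - m)     # nudge the estimate toward the new value
  }
  m
}
set.seed(1)
x <- rexp(100000)                      # skewed stream; true median is log(2), about 0.693
streaming_median(x)                    # rough single-pass estimate
median(x)                              # exact answer, for comparison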
|
Why do we seek to minimize x^2 instead of minimizing |x|^1.95 or |x|^2.05?
Are there reasons why the number should be exactly two or is it simply a convention that has the advantage of simplifying the math?
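A small numeric illustration of what the exponent changes, assuming a one-dimensional location problem: the minimizer of the sum of squared deviations is the mean, the sum of absolute deviations is minimized by the median, and exponents in between give something in between:
# the location minimizing sum(|x - m|^p) for a few exponents p
set.seed(1)
x <- rexp(1000)                        # skewed data, so mean and median differ
loc <- function(p) optimize(function(m) sum(abs(x - m)^p), range(x))$minimum
sapply(c(1, 1.95, 2, 2.05), loc)       # p = 1 is near median(x); p = 2 is near mean(x)
c(median(x), mean(x))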
|
The Wald, Likelihood Ratio and Lagrange Multiplier tests in the context of maximum likelihood estimation are asymptotically equivalent. However, for small samples, they tend to diverge quite a bit, and in some cases they result in different conclusions.
How can they be ranked according to how likely they are to reject the null? What to do when the tests have conflicting answers? Can you just pick the one which gives the answer you want or is there a "rule" or "guideline" as to how to proceed?
|