Item specifics:
Author: Wall, C.
Publication Name: Practical Statistics for Astronomers
Publisher: Cambridge University Press
Subjects: Mathematics, Astronomy
Condition: Brand New

About this product.
Astronomy needs statistical methods to interpret data, but statistics is a many-faceted subject that is difficult for non-specialists to access. This handbook helps astronomers analyze the complex data and models of modern astronomy. This second edition has been revised to feature many more examples using Monte Carlo simulations, and now also includes Bayesian inference, Bayes factors and Markov chain Monte Carlo integration.
Chapters cover basic probability, correlation analysis, hypothesis testing, Bayesian modelling, time series analysis, luminosity functions and clustering. Exercises at the end of each chapter guide readers through the techniques and tests necessary for most observational investigations.
The data tables, solutions to problems, and other resources are available online at www.

The Bayesian approach focuses on the probabilities right away, without the intermediate step of statistics. In the Bayesian tradition, we invert the reasoning just described.
The data, we say, are unique and known; it is the mean that is unknown, that should have probability attached to it. Without using statistics, we instead calculate the probability of various values of the mean, given the data we have. This also allows us to make decisions.
In fact, as we shall see, this approach comes a great deal closer to answering the questions that scientists actually ask. The underlying distributions may be far from known or understood; no averaging may be going on to lead us towards the central limit theorem and Gaussian distributions (see Chapter 2); yet we still wish to draw inferences about the underlying population.
We only do so safely with non-parametric statistics, methods that do not require knowledge of the underlying distributions.
Non-parametric techniques have the power to do this. We may also wish to make statistical inference without recourse to numerical scales. Each technique has its admissible operations, and we need to understand what they are doing; they are described in the course of this book. This book is a practical manual, which assumes that proofs, numerical methods and citation lists can easily be found elsewhere. Work through the examples and exercises; they are drawn from our experience and have been chosen to clarify the text.
Some need data; this may be simulated. Remember, in this subject we can do useful and revealing experiments — in the computer. None of our topics is arcane and they will be found in the index of any elementary statistics book. There is little algebra in this book; it would have greatly lengthened and cluttered the presentation to have worked through the details.
Likewise, we have not explained how various integrals were done or eigenvalues found. These things can be done by computers; packages such as the superb Mathematica, used for many of the calculations in this book, can deal swiftly with more mathematical technology than most of us know. Using these packages frees us all up to think about the problem to hand, rather than searching in vain for missing minus signs or delving into handbooks for integrals which never seem to be there in quite the needed form.
The other source is the indispensable Numerical Recipes (Press et al.). We have not attempted exhaustive referencing. Rather, we have given enough key references to provide entry points to the literature. Online bibliographic databases provide excellent cross-referencing, showing who has cited a paper and who it cites; it is the work of minutes to collect a comprehensive reading list on any topic. The lecture notes for many excellent university courses are now on the Web; a well-phrased search may yield useful material to help with whatever is puzzling you.
Finally, use this book as you need it. It can be read from front to back, or dipped into. Of course, no interesting topic is self-contained, but we hope the cross-referencing will connect all the technology needed to explore a particular topic. Exercises 1. Describe the discovery of pulsars (Hewish et al.).
The object is legitimately in the dataset in terms of prestated selection criteria. Is the conclusion robust? "God does not play dice with the Universe." (Albert Einstein.) Whether He does or not, the concepts of probability are important in astronomy for two reasons. We have to express the errors inherent in our measurements as precisely and usefully as we can.
The second statement really only means anything because of some unspoken assumption about the distribution of errors. Only at this point is it safe to consider the concept of probability distributions; some common probability distributions are compared and contrasted.
For a fascinating historical study of probability, see the books by Hald. The ideas in this chapter draw heavily on the writings of Jaynes. The study of probability began with the analysis of games of chance involving cards or dice. Because of this background we often think of probabilities as a kind of limiting case of a frequency.
If we can identify equally likely cases, then calculating probabilities amounts simply to enumerating cases — not always easy, but straightforward in principle. However, identifying equally likely cases requires more thought. Sometimes we estimate probabilities from data. The probability of our precious observing run being clouded out is estimated by the fraction of nights that were cloudy last year, but two issues arise.
A set of nights equally likely to be cloudy? It is sometimes the only way; but the risks must be recognized. So what is probability? The notion we adopt for the present is that probability is a numerical formalization of our degree or intensity of belief.
In everyday speech we often refer to the probability of unique events, showers of rain or election results. In the desiccated example of throwing dice, x measures the strength of our belief that any face will turn up. We just need to be sure that two people with the same information will arrive at the same probabilities. It turns out that this constraint, properly expressed, is enough to develop a theory of probability which is mathematically identical to the one often interpreted in frequentist terms.
The argument is originally due to Cox and goes as follows: if A, B and C are three events and we wish to have some measure of how strongly we think each is likely to happen, then for consistent reasoning we should at least apply the rule: if A is more likely than B, and B is more likely than C, then A is more likely than C. Before 1987, four naked-eye supernovae had been recorded in ten centuries.
What, before 1987, was the probability of a bright supernova happening in the twentieth century? Supernovae are physically determined events and when they are going to happen can, in principle, be accurately calculated.
They are not random events. This assumes supernovae were equally likely to be reported throughout the ten centuries, which may well not be true.
Eventually some degree of belief about detection efficiency will have to be made explicit in this kind of assignment. In principle we might know the stellar mass function, the fate and lifetime as a function of mass, and the stellar birth rate. The belief-measure structure is more complicated in this detailed model, but it is still there. Suppose now that we sight supernova A.
Approach 3 might need to adjust some aspects of its models in the light of fresh data; predicted probabilities would change. In this case, it follows (not trivially!). If there are several possibilities for event B, label them B1, B2, ....
They might be instrumental parameters, for example. This is called marginalization; the rule is sketched below. We can say nothing further, although we might be able to formulate a hypothesis to carry out an experiment.
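In symbols, and consistent with the notation used in the surrounding text (a standard statement rather than a quotation from this edition), marginalization over the alternatives B1, B2, ... reads

\[
\operatorname{prob}(A) \;=\; \sum_i \operatorname{prob}(A \mid B_i)\,\operatorname{prob}(B_i),
\]

so that nuisance quantities such as instrumental parameters are summed (or, for continuous parameters, integrated) out of the problem.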
This assumes that the probabilities are independent, which is obviously the very thing we would like to test. A suitable model for the probabilities is the Poisson distribution (Section 2).
If the objects are all the same, the probability of a linear triple depends on the cube of the surface density and search area. The theorem is particularly useful when interpreted as a rule for induction; the data, the event A, are regarded as succeeding B, the state of belief preceding the experiment. This experience is expressed by the likelihood prob(A | B). Finally prob(B | A) is the posterior probability, the state of belief after the data have been analysed. It acquires its force from its interpretation.
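For reference, the theorem itself, in the notation just used (the standard textbook form):

\[
\operatorname{prob}(B \mid A) \;=\; \frac{\operatorname{prob}(A \mid B)\,\operatorname{prob}(B)}{\operatorname{prob}(A)},
\]

that is, the posterior is proportional to the likelihood times the prior, with prob(A) the normalizing factor.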
To see what this force is, we return to the familiar and simple problem of drawing those coloured balls from urns. It is clear, even automatic, what to calculate; if there are M red balls and N white balls, the probability of drawing three red balls and two white ones (in five draws with replacement, in any order) is

\[
\binom{5}{3}\left(\frac{M}{M+N}\right)^{3}\left(\frac{N}{M+N}\right)^{2} .
\]

As a series of brilliant scientists realized (and as another series of brilliant scientists did not), this is generally not the problem we face. As scientists, we more often have a datum (three red balls, two white ones) and we are trying to infer something about the contents of the urn.
The urn example illustrates the principles involved; these are far more interesting than coloured balls. Thomas Bayes (1702–61) was an English vicar, mathematician and statistician. There is speculation that his theorem was published posthumously because of the controversy which Bayes believed would ensue.
This must be an a-posteriori judgement. How many red balls are there in the urn? Our model hypothesis is that the probability of drawing a red ball equals the fraction of red balls in the urn. This is a Binomial distribution; see Section 2. We also need the probability of the model, or the prior. We are describing our state of belief about the contents of the urn, given what we know: the data, and our prior information. For example, the probability of the urn containing three or fewer red balls is 11 per cent.
Our concern, as experimental scientists, is with what we can infer about the world from what we know. (Figure: posterior probability density for the contents of the urn.) In this example, the prior seems to be well determined. However, in some cases we wish to estimate quantities where the argument is not so straightforward.
Or if we needed an estimate of the mean of a Gaussian, then we have to ask how we interpret the prior probability of the mean. Our data are four supernovae in ten centuries. This result is sometimes called the law of large numbers, expressing as it does the frequentist idea of a large number of repetitions resulting in a converging estimate of probability.
This makes no sense in a frequentist approach, nor indeed in any interpretation of probabilities as objective. One of the ways of determining a prior is the maximum entropy principle; we will see an example of such a prior later (Section 6).
At this stage, our data are four supernovae in ten centuries. Reviewing the situation at the end of the twentieth century, we take this as our prior. In these examples we have focused on the peak of the posterior probability distribution. This is one way amongst many of attempting to characterize the distribution by a single number.
Unless posterior distributions are very narrow, attempting to characterize them by a single number is frequently misleading. The basic idea is intuitive; here is a little more detail. The sum of the possibilities for getting no heads to four heads is readily seen to be 1. This probability distribution is discrete; there is a discrete set of outcomes and so a discrete set of probabilities for those outcomes. In this sort of case we have a mapping between the outcomes of the experiment and a set of integers.
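For the four-coin-toss example just mentioned, the check that the probabilities sum to 1 is a one-line calculation (assuming a fair coin; the algebra is standard):

\[
\sum_{k=0}^{4} \binom{4}{k}\left(\tfrac{1}{2}\right)^{4} \;=\; \frac{1 + 4 + 6 + 4 + 1}{16} \;=\; 1 .
\]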
Sometimes the set of outcomes maps onto real numbers instead, the set of outcomes no longer containing discrete elements. We deal with this by the contrivance of discretizing the range of real numbers into little ranges within which we assume the probability does not change. Thus if x is the real number that indexes outcomes, we associate with it a probability density f(x), so that the probability of an outcome falling in a small range dx is f(x) dx. Of the many standard distributions, three are of prime importance, the Binomial, Poisson, and Gaussian (or Normal), and we discuss these in turn.
Invariably tabulated and used in integral form; useful in the study of rounding errors and as a tool in studies of other continuous distributions. To count the ways of obtaining n successes in N trials: there are N! orderings of the N trials, but the n! orderings of the successes among themselves, and the (N − n)! orderings of the failures, are indistinguishable, so there are N!/(n!(N − n)!) distinct possibilities. The Binomial distribution follows from this argument.
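Explicitly, if each trial succeeds with probability p, the counting argument above gives the standard Binomial form (a textbook statement, not a quotation from this edition):

\[
\operatorname{prob}(n) \;=\; \binom{N}{n}\, p^{\,n}\, (1-p)^{N-n} .
\]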
How many of these clusters do we expect to have a dominant central galaxy? What can we do with this information? The Bayesian thing to do is a calculation that parallels the supernova example. Assuming the X-ray clusters are a homogeneous set, we can proceed in just that way. (Figure: the posterior probability distribution for the fraction of X-ray-selected clusters that are centrally dominated. The black line uses a uniform prior distribution for the fraction; the dashed line uses the prior derived from an assumed previous sample in which 10 of the clusters had dominant central members. The light curve shows the distribution for this earlier sample.)
A relevant prior would be the results for the original larger survey (Figure 2). For example, there is only a 10 per cent chance that the centrally dominant fraction exceeds even 0. The Binomial distribution is the parent of two other famous distributions, the Poisson and the Gaussian. The arrivals of successive photons are independent, apart from small correlations arising because photons obey Bose–Einstein statistics, negligible for our purposes.
Thus the conditions necessary for the Poisson distribution are met. This case is the subject of an exercise in the next chapter.
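For reference, the distribution just invoked has the standard form (again a textbook statement): for n events when μ are expected,

\[
\operatorname{prob}(n) \;=\; \frac{\mu^{\,n}\, e^{-\mu}}{n!}, \qquad n = 0, 1, 2, \ldots
\]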
The Normal (Gaussian) distribution. The area under the curve is 1. How this comes about for the Binomial distribution is the subject of an exercise.
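The density in question, with mean μ and standard deviation σ, is the standard form (not a quotation from this edition):

\[
f(x) \;=\; \frac{1}{\sigma\sqrt{2\pi}}\, \exp\!\left[-\frac{(x-\mu)^2}{2\sigma^2}\right],
\]

whose integral over all x is indeed 1.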
The true importance of the Gaussian distribution and its dominant position in experimental science, however, stems from the central limit theorem. A non-rigorous statement of this is as follows: if samples of size N are drawn from a distribution with mean μ and finite variance σ², then, as N grows, the distribution of the sample average approaches a Gaussian with mean μ and variance σ²/N. This is a remarkable theorem. What it says is that provided certain conditions are met — and they are in almost all physical situations — a little bit of averaging will produce a Gaussian distribution of results no matter what the shape of the distribution from which the sample is drawn.
Even eyeball integration counts. The reliance on Gaussian distributions, made valid by the unsung hero of statistical theory and indeed experimentation, the central limit theorem, shapes our entire view of experimentation. It is this theorem which leads us to describe our errors in the universal language of sigmas, and indeed to argue our results in terms of sigmas as well, which we explicitly or implicitly recognize as describing our place within or at the extremities of the Gaussian distribution.
Here we have brutally truncated an exponential, clearly an extremely non-Gaussian distribution. The histogram obtained in drawing random samples from the distribution follows it closely.
When averages of just four values have been formed, the distribution is already becoming symmetrical; by the time averages 16 values long have been formed, it is virtually Gaussian. A small simulation along these lines is sketched below.
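A minimal sketch of this experiment in Python (NumPy assumed; the truncation point, the sample sizes and the use of skewness as a convergence diagnostic are our choices, not the book's):

```python
import numpy as np

rng = np.random.default_rng(42)

# Draw from a brutally truncated exponential: an extremely
# non-Gaussian parent distribution (all mass on [0, cutoff)).
def truncated_exponential(size, cutoff=1.0):
    x = rng.exponential(scale=1.0, size=3 * size)  # oversample, then truncate
    x = x[x < cutoff]
    return x[:size]

n_samples = 100_000
for n_avg in (1, 4, 16):
    draws = truncated_exponential(n_samples * n_avg).reshape(n_samples, n_avg)
    means = draws.mean(axis=1)
    # Skewness near zero indicates convergence towards a symmetric Gaussian.
    m, s = means.mean(), means.std()
    skew = np.mean(((means - m) / s) ** 3)
    print(f"averaging {n_avg:2d} values: skewness = {skew:+.3f}")
```

Running this shows the skewness shrinking steadily as the averaging length grows, exactly the behaviour described in the text.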
Before leaving the central limit miracle and Gaussian distributions, it is important to emphasize how tight the tails of the Gaussian distribution are (Table A2). Real data upset the distribution; there are outlying points. In fact, experimentalists are aware of another key feature of the central limit theorem: the convergence to a Gaussian happens fastest at the centre of the distribution, but the wings may converge much more slowly to a Gaussian form.
(Figure: an indication of the power of the central limit theorem.) We will use these methods many times in the rest of this book, but here is a summary of the method. First, we may estimate parameters. Of course we might have more precise information available.
This method is related to the classical technique of maximum likelihood. This amounts to characterizing the posterior by one number, an approach which is often useful because of powerful theorems on maximum likelihood. We consider this in more detail in Section 6. Often knowing the posterior distribution of the parameter of interest is enough; we might be making a comparison with an exactly known quantity, perhaps derived from some theory.
We may express posterior probabilities by using the notion of odds, a handy way of expressing probabilities when we have only two possibilities. The odds on event A are prob(A)/(1 − prob(A)). If we have two exclusive possibilities for a prior, say A and not-A, then the posterior odds are given by the ratio of the posterior probabilities under each prior, and give an indication of which prior to bet on, given the available data.
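As a compact summary (standard Bayesian bookkeeping, consistent with the text):

\[
\operatorname{odds}(A) = \frac{\operatorname{prob}(A)}{1 - \operatorname{prob}(A)}, \qquad
\frac{\operatorname{prob}(A \mid \text{data})}{\operatorname{prob}(\bar A \mid \text{data})}
= \frac{\operatorname{prob}(\text{data} \mid A)}{\operatorname{prob}(\text{data} \mid \bar A)}
\times \frac{\operatorname{prob}(A)}{\operatorname{prob}(\bar A)} ,
\]

that is, the posterior odds equal the Bayes factor times the prior odds.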
Exercises 2. This is not an astronomical problem but does provide a warm-up exercise on probability and random numbers. Every computer has a way of producing a random number between zero and one. Use this to simulate a simple coin-tossing game where player A gets a point for heads, player B a point for tails.
Guess how often in a game of N tosses the lead will change; if A is in the lead at toss N, when was the previous change of lead most likely to be? And by how much is a player typically in the lead? Try to back these guesses up with calculations, and then simulate the game; a sketch of such a simulation is given below. For many more game-based illustrations of probability, see Haigh.
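A minimal sketch of such a simulation in Python (NumPy assumed; the game length and number of games are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def play(n_tosses):
    """One game: +1 for heads (A), -1 for tails (B); return lead statistics."""
    steps = rng.choice((1, -1), size=n_tosses)
    lead = np.cumsum(steps)          # A's lead after each toss
    sign = np.sign(lead)
    nonzero = sign[sign != 0]
    # A change of lead happens where the sign of the (nonzero) lead flips.
    changes = np.count_nonzero(nonzero[1:] != nonzero[:-1])
    return changes, lead[-1]

games = [play(1000) for _ in range(10_000)]
changes, final_leads = map(np.array, zip(*games))
print("mean number of lead changes:", changes.mean())         # surprisingly few
print("typical |final lead|:", np.mean(np.abs(final_leads)))  # grows like sqrt(N)
```

The counter-intuitive result, which the simulation makes vivid, is that the lead changes far less often than naive intuition suggests.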
Imagine you are on a night observing run with a colleague, in settled weather. You have an agreement that one of the nights, of your choosing, will be for your exclusive use. Bayesian inference.
Consider the proverbial bad penny, for which prior information has indicated that there is a probability of 0. What is the Bayesian posterior probability, given this information, of obtaining seven heads in a row?
In such a circumstance, how might we consider the fairness of the coin? Or of the experimenter who provided us with the prior information? What are the odds on the penny being fair?
If we have one success and no failures, consider what the rule implies, and discuss why this is odd. Bayesian reasoning in an everyday situation: the probability of a certain medical test being positive is 90 per cent if the patient has disease D. If your doctor tells you the test is positive, what are your chances of having the disease? A worked sketch with assumed numbers follows.
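The exercise as extracted omits the base rate and the false-positive rate. As an illustration only, assume the disease afflicts 1 per cent of the population and the test is positive for 10 per cent of healthy patients; Bayes' theorem then gives

\[
\operatorname{prob}(D \mid +) \;=\; \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.1 \times 0.99} \;\approx\; 0.08 ,
\]

so under these assumed numbers a positive result still leaves less than a one-in-ten chance of having the disease; the prior matters.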
Inverse chi-squared statistic. Maximum likelihood and the Poisson distribution: form the likelihood function by taking the product of the Poisson probabilities of the individual data, and maximize it; a sketch of the calculation follows. Is it what you expect?
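A sketch of the standard calculation (not a quotation from the book): for data X1, ..., XN drawn from a Poisson distribution of mean μ,

\[
\mathcal{L}(\mu) = \prod_{i=1}^{N} \frac{\mu^{X_i} e^{-\mu}}{X_i!},
\qquad
\ln \mathcal{L}(\mu) = \ln\mu \sum_i X_i \;-\; N\mu \;+\; \text{const},
\]

and setting the derivative with respect to μ to zero gives \(\hat\mu = \frac{1}{N}\sum_i X_i\), the sample mean, as one might expect.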
Maximum likelihood and the exponential distribution. Suppose we have data X1, X2, ..., XN. Do the differences seem reasonable? Which prior would you choose? Birth control. Are there more males than females in the population? Attack the problem in three ways: pure thought, by a simulation, and by an analytic calculation. "There are three kinds of lies: lies, damned lies, and statistics." (Benjamin Disraeli.) In embarking on statistics we are entering a vast area, enormously developed for the Gaussian distribution in particular.
This is classical territory; historically, statistics were developed because the approach now called Bayesian had fallen out of favour. The use of statistics is not particularly easy. The alternatives to Bayesian methods are subtle and not very obvious; they are also associated with some fairly formidable mathematical machinery.
We will avoid this, presenting only results and showing the use of statistics, while trying to make clear the conceptual foundations. For a set of data X1, X2, ..., XN, a statistic is some function of the Xi; the data themselves we denote by upper case. Possible values, being variables, we will denote in the usual algebraic spirit by lower case.
Median: arrange the Xi according to size and renumber; the median is the middle value of the renumbered list. Mode: Xmode is the value of xi occurring most frequently; it is the location of the peak in the histogram of the Xi. But what does it mean? It does not tell us the probability that the true value of D is between 8.
We usually assume that a Gaussian distribution applies, placing our faith in the central limit theorem. Knowing the distribution of the errors allows us to make probabilistic statements, which are what we need. After all, if there were only a 1 per cent chance that the interval [8. ...] contained the true value of D, the measurement would be of little use. So this is one key aspect of statistics; they are associated with distributions.
In fact they are most useful when they are estimators of the parameters of distributions. In quoting our measurement of D, we are hoping that 8. The other key aspect of statistics is that they are to be interpreted in a classical, not a Bayesian, framework. We need to look carefully at this distinction; it parallels our discussion of those coloured balls in the urn. Assuming a true distance D0, a classical analysis tells us that D is, say, Normally distributed around D0, with a standard deviation of 0.
So we are to imagine many repetitions of our experiment, each yielding a value of the estimate D which dances around D0. Just as in the case of the coloured balls, this approach assumes the thing we want to know, and tells us how the data will behave. The Bayesian approach is the other way around: it assumes the data, and tells us the thing we want to know. There are no imagined repetitions of the experiment. Conceptually it is clearer than classical methods, but these are so well developed and established (particularly for the Gaussian) that we will give some explanation of classical statistics now, and indeed use classical results in many places in this book.
It is worth remembering, however, that statistics of known usefulness are quite rare; the intensive development of statistics based on the Gaussian should not blind us to this fact.
In many cases of astronomical interest we may need to derive useful statistics for ourselves. By far the easiest method for doing this is maximum likelihood (Section 6). To repeat, statistics are properties of the data and only of the data; they summarize, reduce, or describe the data.
But we may anticipate that our data do follow these or other distributions and we may therefore wish to relate statistics from the data to parameters describing the distributions. This is done through expectations or expectation values, long-run average properties depending on distribution functions. We can think of the expectation as being the result of repeating an experiment many times, and averaging the results.
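In symbols (the standard definitions, consistent with the text): for a random variable x with density f(x),

\[
E[x] = \int x\, f(x)\, dx, \qquad
\operatorname{var}[x] = E\!\left[(x - E[x])^2\right] = \int \left(x - E[x]\right)^2 f(x)\, dx .
\]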
Take our favourite distribution, the Gaussian. In astronomy, broad or even open-ended power-law distributions are common. It is worth checking any piece of remembered statistics, as it is almost certain to be based on the Gaussian distribution. Higher moments are estimated by suitable averages, analogously to the way in which the mean and variance were estimated in the previous example.
They are sometimes useful for characterizing the shape of distributions, although they are very sensitive to outliers (Figure 3). There are, then, at least four requirements for statistics. (Figure: the average values of the xi; solid curves represent Gaussians of unit area and standard deviation; the Cauchy distribution (Section 3) is also shown.) This is one of the most important tenets of observational astronomy. We are keeping the subscripts now because of the possibility that the data from the ith pixel, spectral channel, or time slot are not independent of the data from the jth position.
This condition (independence) is probably what underlies the familiar \(\sqrt{N}\) averaging away of noise; our assumption is that noise from one datum to the next, or one pixel to the next, is independent. Suppose we had a time series, say of photometric measurements Xi. It might be a reasonable assumption that the measurements were identically distributed and independent of each other.
In this case, the probability distribution would be the same for each time, and so can just be written g(x; parameters). Often this simple situation does not apply. If the data are indexed in some meaningful way, for example as a time series, and the distribution does not depend on the index, the data are called stationary.
As a second possibility, in photometric work it is quite likely that if one measurement is low, because of cloud, then the next few will be low too. In reality there is a continuum, with the covariance frequently non-zero. At the other extreme, systematic errors persist no matter how much data are collected. If you are observing Arcturus when you should be observing Vega, the errors will never average away no matter how persistent you are.
Knowing the data errors, how do we estimate the error in a derived quantity? If the errors are small, by far the easiest way is to use a Taylor expansion. Suppose we measure variables x, y, z, .... The simplest case is a transformation from the measured x, with probability distribution g, to some derived quantity f(x) with probability distribution h.
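Two standard results are being invoked here (textbook forms, stated under the assumption of small, independent errors): the change-of-variables rule for a density, and first-order Taylor propagation of errors,

\[
h(f)\,\lvert df \rvert = g(x)\,\lvert dx \rvert
\quad\Longrightarrow\quad
h(f) = g(x)\left|\frac{dx}{df}\right| ,
\]
\[
\sigma_f^2 \;\approx\; \left(\frac{\partial f}{\partial x}\right)^{2}\sigma_x^2
+ \left(\frac{\partial f}{\partial y}\right)^{2}\sigma_y^2 + \cdots
\]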
Some care may be needed in applying this simple rule if the function f is not monotonic. Suppose we are taking the logarithm of some exponentially distributed data. (Figure: the probability distribution of the logarithm of data drawn from an exponential distribution.) The result generalizes to the sum of many variables, and is often best calculated with the aid of the Fourier transform (Section 8).
This transform is sometimes called the characteristic function. Without details, the results are as follows. It has a logarithmic singularity at zero but is normalized to unity. (Figure: the probability distribution of the product of two identical Gaussians; the original Gaussian is the dashed curve.) The case of the ratio is equally instructive.
This is a somewhat unrealistic case — it corresponds to forming the ratio of data of zero signal-to-noise ratio — but illustrates that ratios involving low signal-to-noise are likely to have very broad wings.
The Bessel function distribution will, on average, succumb to the central limit theorem; this is not the case for the Cauchy distribution. In general, deviations from Normality will occur in the tails of distributions, the outliers that are so well known to all experimentalists. (Figure: the probability distribution of the ratio of two identical Gaussian variables; the original Gaussian is the dashed curve.) We have met this result before (Section 2).
This ratio has an obvious usefulness, telling us how far our average might be from the true mean (Table A2). This allows us to check if the data were indeed drawn from Gaussians of the same width (Section 5).
So Y1 is the smallest value of X, and YN the largest. Both the density and the cumulative distribution are therefore of interest. Suppose the distribution of x is f(x), with cumulative distribution F(x). If we select 10 galaxies from this distribution, the maximum of the 10 will follow the distribution shown in the figure; the standard formula is sketched below.
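The standard result being used (a textbook form, not a quotation from this edition): the maximum YN of N independent draws has cumulative distribution

\[
\operatorname{prob}(Y_N \le y) = \left[F(y)\right]^{N},
\qquad
\text{with density} \;\; N\, f(y)\left[F(y)\right]^{N-1} ,
\]

and analogous formulae hold for the minimum and for intermediate order statistics.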
(Figure: the Schechter luminosity function, solid curve, and the distributions of the maximum of samples of 10 and of a larger sample drawn from it, plotted as short- and long-dash curves respectively.) If we choose more galaxies, then of course the distribution moves to brighter values. Their use then parallels the Bayesian method. First, we may use them to estimate parameters; but the way in which they do this is more subtle than in the Bayesian case. We do not get a probability distribution for the parameter of interest, but a distribution of the statistic, given the parameter.
Second, we may test hypotheses. This again parallels the Bayesian case, but the methods are much further apart conceptually. Recall the case discussed in Section 2. This classical approach is the basis of numerous useful tests, and we discuss some of them in detail in later chapters (Chapters 4 and 5). However, there is no doubt that the method does not quite seem to answer the question we had in mind, although often its results are indistinguishable from those of the more intelligible Bayesian approach.
The same decisions get taken. This seems a remarkable procedure. Exercises 3. Simple error analysis. Combining Gaussian variables: use the result of Section 3. Average of Cauchy variables: show that the average of Cauchy-distributed variables has the same distribution as the original data; use characteristic functions and the convolution theorem. Find a better location estimator. Poisson statistics.
Draw random numbers from Poisson distributions (Section 6). Robust statistics: make a Gaussian with outliers by combining two Gaussians, one of unit variance, one three times wider. Leave the relative weight of the wide Gaussian as a parameter. Compare the mean deviation with the rms, for various relative weights. How sensitive are the two measures of scatter to outliers? Repeat the exercise with a width derived from order statistics. A simulation sketch for this exercise follows.
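A minimal sketch of this exercise in Python (NumPy assumed; the contamination weights and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)

def contaminated_gaussian(n, weight):
    """Mixture: (1 - weight) from N(0, 1), weight from N(0, 3^2)."""
    wide = rng.random(n) < weight
    x = rng.normal(0.0, 1.0, n)
    x[wide] = rng.normal(0.0, 3.0, wide.sum())
    return x

n = 100_000
for w in (0.0, 0.01, 0.05, 0.2):
    x = contaminated_gaussian(n, w)
    rms = x.std()                        # root-mean-square deviation
    mad = np.mean(np.abs(x - x.mean()))  # mean (absolute) deviation
    print(f"weight {w:4.2f}:  rms = {rms:5.3f}   mean deviation = {mad:5.3f}")
```

Because the rms squares the residuals, it inflates faster than the mean deviation as the outlier weight grows, which is the point of the exercise.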
Change of variable. Order statistics: use order statistics (Section 3). Is this MLE biased but consistent (i.e. biased for finite N, but converging to the true value as N grows)? Robert Matthews, New Scientist. When we make a set of measurements, it is instinctive to try to correlate the observations with other results.
There are grave dangers on this expedition, and we must ask ourselves the following questions. Is there any reason to expect a relation? If not, calculation of a formal correlation statistic is probably a waste of time. Consider for instance the beautiful correlation in the figure (radio luminosities of 3CR radio sources versus distance modulus). Do the more distant objects, at earlier epochs, not clearly appear the more powerful? In fact, as Sandage recognized, it proves nothing of the kind. The lower right-hand region can never be populated; such objects are too faint to show above the limit of the 3CR catalogue.
But what about the upper left? Provided that the luminosity function (the true space density in objects per megaparsec³) slopes downward with increasing luminosity, the objects are bound to crowd towards the line. This is about all that can be gleaned immediately from the diagram — the space density of powerful radio sources is less than the space density of their weaker brethren.
Small spheres, corresponding to small redshifts and distance moduli, will yield only low-luminosity radio sources because their space density is so much the higher. The lesson applies to any proposed correlation for variables with steep probability density functions dependent upon one of the variables plotted.
Further, if there is a correlation, what does the regression line (Section 6) mean? Beware in particular of plots which look like those of the figure. The essential point is that correlation may simply indicate a dependence of both variables on a third variable.
But there are many famous instances, e.g. the height and intelligence of children, or two quantities that both change steadily with time. For the former the hidden variable is age (are tall children cleverer? No, but they are older), while for the latter it is time.
There are in fact ways of searching for intrinsic correlation between variables when they are known to depend mutually upon a third variable. We consider this further in Section 4. Finally, we must not get too discouraged by all the foregoing. Consider the figure: the problem appears simple; we have a set of N measurements (Xi, Yi) and we ask formally if they are related to each other. The contours of the figure illustrate such a joint distribution of two variables.
The multivariate Gaussian is the most familiar of these. We generated 50 samples from a bivariate t distribution with three degrees of freedom.
Bringing together the most relevant statistical and probabilistic techniques for use in observational astronomy, this handbook is a practical manual for advanced undergraduate and graduate students and professional astronomers.
Other books in this series: Practical Statistics for Astronomers, J.; Handbook of Pulsar Astronomy, D.; Introduction to Astronomical Spectroscopy, Immo Appenzeller; Observational Molecular Astronomy, David A.; Introduction to Astronomical Photometry, Edwin Budding; Handbook of X-ray Astronomy, Keith Arnaud; Practical Optical Interferometry, David F.; Handbook of Infrared Astronomy, I. Table of contents 1.
Decision; 2. Probability; 3. Statistics and expectations; 4. Correlation and association; 5. Hypothesis-testing; 6. Data modelling and parameter-estimation: basics; 7. Data modelling and parameter-estimation: advanced topics; 8.
Detection and surveys; 9. Sequential data - 1D statistics;