> On 29 May 2019, at 21:57, Eric Barnhill <ericbarnh...@gmail.com> wrote:
>
> At the end of the day, like we just saw on the user list today, users are
> going to come around with arrays and want to get the mean, median,
> variance, or quantiles of that array. The easiest way to do this is to
> have some sort of static method that delivers these:
>
> double mean = Stats.mean(double[] data)
This Stats class can be just a utility class with static helper methods
invoking the appropriate class implementation. All the algorithms should be
in one place (to minimise code duplication).

I don't think calling SummaryStats under the hood is the best solution for
these helper methods. It does a lot more work than is necessary to compute
one metric. It should be done with individual classes for each metric and
an appropriate helper method for each.

Looking at math4 this would be helpers for:

moment/FirstMoment.java
moment/FourthMoment.java
moment/GeometricMean.java
moment/Kurtosis.java
moment/Mean.java
moment/SecondMoment.java
moment/SemiVariance.java
moment/Skewness.java
moment/StandardDeviation.java
moment/ThirdMoment.java
moment/Variance.java
rank/Max.java
rank/Median.java
rank/Min.java
rank/Percentile.java
summary/Product.java
summary/Sum.java
summary/SumOfLogs.java
summary/SumOfSquares.java
DescriptiveStatistics.java (mean, variance, StdDev, Max, Min, Count, Sum, Skewness, Kurtosis, Percentile)
SummaryStatistics.java (mean, variance, StdDev, Max, Min, Count, Sum)

Left out those operating on a double[] for each increment (not a single double):

moment/VectorialCovariance.java
moment/VectorialMean.java
MultivariateSummaryStatistics.java

Left out this as it is an approximation when the entire double[] cannot be
held in memory:

rank/PSquarePercentile.java

Note that some metrics are not applicable to undefined data lengths and so
cannot be written to support streams:

Median
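For illustration only, here is a minimal sketch of that split: a hypothetical
single-metric class plus a static facade that delegates to it. The names
(Stats, Mean) and the naive summation are placeholders for this sketch, not
the proposed API.

/* Sketch only: one class per metric, holding the algorithm in one place. */
final class Mean {
    /** Arithmetic mean of the full array (naive summation, for brevity). */
    double evaluate(double[] values) {
        double sum = 0;
        for (double v : values) {
            sum += v;
        }
        return sum / values.length; // NaN for an empty array
    }
}

/** Facade of static helpers, one per metric, delegating to the implementation. */
public final class Stats {
    private Stats() {}

    public static double mean(double[] data) {
        return new Mean().evaluate(data);
    }
}

// Usage: double mean = Stats.mean(new double[] {1, 2, 3}); // 2.0

The facade keeps the one-line call users ask for, while the per-metric class
stays the single home of the algorithm.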
> and the user doesn't have to think more than that. Yes this should be
> implemented functionally, although in this simple case we probably just
> need to call Java's SummaryStats() under the hood. If we overcomplicate
> this, again like we just saw on the user list, users will simply not use
> the code.
>
> Then yes, I agree Alex's argument for updateable instances containing
> state is compelling. How to relate these more complicated instances with
> the simple cases is a great design question.
>
> But first, let's nail the Matlab/Numpy case of just having an array of
> doubles and wanting the mean / median. I am just speaking of my own use
> cases here but I used exactly this functionality all the time:
>
> Mean m = new Mean();
> double mean = m.evaluate(data);
>
> and I think this should be the central use case for the new module.
>
>
> On Wed, May 29, 2019 at 4:51 AM Gilles Sadowski <gillese...@gmail.com>
> wrote:
>
>> Hello.
>>
>> On Tue, 28 May 2019 at 20:36, Alex Herbert <alex.d.herb...@gmail.com>
>> wrote:
>>>
>>>> On 28 May 2019, at 18:09, Eric Barnhill <ericbarnh...@gmail.com> wrote:
>>>>
>>>> The previous commons-math interface for descriptive statistics used a
>>>> paradigm of constructing classes for various statistical functions and
>>>> calling evaluate(). Example:
>>>>
>>>> Mean mean = new Mean();
>>>> double mn = mean.evaluate(double[])
>>>>
>>>> I wrote this type of code all through grad school and always found it
>>>> unnecessarily bulky. To me these summary statistics are classic use
>>>> cases for static methods:
>>>>
>>>> double mean = Mean.evaluate(double[])
>>>>
>>>> I don't have any particular problem with the evaluate() syntax.
>>>>
>>>> I looked over the old Math 4 API to see if there were any benefits to
>>>> the previous class-oriented approach that we might not want to lose.
>>>> But I don't think there were, the functionality outside of evaluate()
>>>> is minimal.
>>>
>>> A quick check shows that evaluate comes from UnivariateStatistic. This
>>> has some more methods that add little to an instance view of the
>>> computation:
>>>
>>> double evaluate(double[] values) throws MathIllegalArgumentException;
>>> double evaluate(double[] values, int begin, int length) throws MathIllegalArgumentException;
>>> UnivariateStatistic copy();
>>>
>>> However it is extended by StorelessUnivariateStatistic which adds
>>> methods to update the statistic:
>>>
>>> void increment(double d);
>>> void incrementAll(double[] values) throws MathIllegalArgumentException;
>>> void incrementAll(double[] values, int start, int length) throws MathIllegalArgumentException;
>>> double getResult();
>>> long getN();
>>> void clear();
>>> StorelessUnivariateStatistic copy();
>>>
>>> This type of functionality would be lost by static methods.
>>>
>>> If you are moving to a functional interface type pattern for each
>>> statistic then you will lose the other functionality possible with an
>>> instance state, namely updating with more values or combining instances.
>>>
>>> So this is a question of whether updating a statistic is required after
>>> the first computation.
>>>
>>> Will there be an alternative in the library for a map-reduce type
>>> operation using instances that can be combined using Stream.collect:
>>>
>>> <R> R collect(Supplier<R> supplier,
>>>               ObjDoubleConsumer<R> accumulator,
>>>               BiConsumer<R, R> combiner);
>>>
>>> Here <R> would be Mean:
>>>
>>> double mean = Arrays.stream(new double[1000]).collect(Mean::new, Mean::add, Mean::add).getMean()
>>>
>>> with:
>>>
>>> void add(double);
>>> void add(Mean);
>>> double getMean();
>>>
>>> (Untested code)
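For reference, a self-contained sketch of an accumulator with the quoted
add(double)/add(Mean)/getMean() shape, together with the collect call. This
is illustrative only and not an existing Commons class; the incremental
update and merge formulas are one possible choice.

import java.util.Arrays;

/** Illustrative storeless mean: supports increment and combine. */
final class Mean {
    private long n;
    private double mean;

    /** Add one value (incremental update of the running mean). */
    void add(double x) {
        n++;
        mean += (x - mean) / n;
    }

    /** Combine with the state accumulated by another instance. */
    void add(Mean other) {
        long total = n + other.n;
        if (total > 0) {
            mean += other.n * (other.mean - mean) / total;
            n = total;
        }
    }

    double getMean() {
        return n == 0 ? Double.NaN : mean;
    }

    public static void main(String[] args) {
        double[] data = {1, 2, 3, 4};
        // Mean::add resolves to add(double) for the accumulator and
        // add(Mean) for the combiner of DoubleStream.collect.
        double mean = Arrays.stream(data).collect(Mean::new, Mean::add, Mean::add).getMean();
        System.out.println(mean); // 2.5
    }
}

The same shape works for a parallel stream, since the combiner merges the
per-segment accumulators.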
>>>>
>>>> Finally we should consider whether we really need a separate class for
>>>> each statistic at all. Do we want to call:
>>>>
>>>> Mean.evaluate()
>>>>
>>>> or
>>>>
>>>> SummaryStats.mean()
>>>>
>>>> or maybe
>>>>
>>>> Stats.mean() ?
>>>>
>>>> The last being nice and compact.
>>>>
>>>> Let's make a decision so our esteemed mentee Virendra knows in what
>>>> direction to take his work this summer. :)
>>>
>>
>> I'm not sure I understand the implicit conclusions of this conversation
>> and the other one there:
>> https://markmail.org/message/7dmyhzuy6lublyb5
>>
>> Do we agree that the core issue is *not* how to compute a mean, or a
>> median, or a fourth moment, but how any and all of those can be
>> computed seamlessly through a functional API (stream)?
>>
>> As Alex pointed out, a useful functionality is the ability to "combine"
>> instances, e.g. if data are collected by several threads.
>> A potential use-case is the retrieval of the current value of (any)
>> statistical quantities while the data continues to be collected.
>>
>> An initial idea would be:
>>
>> public interface StatQuantity {
>>     public double value(double[] data);      // For "basic" usage.
>>     public double value(DoubleStream data);  // For "advanced" usage.
>> }
>>
>> public class StatCollection {
>>     /** Specify which quantities this collection will hold/compute. */
>>     public StatCollection(Map<String, StatQuantity> stats) { /* ... */ }
>>
>>     /**
>>      * Start a worker thread.
>>      * @param data Values for which the stat quantities must be computed.
>>      */
>>     public void startCollector(DoubleStream data) { /* ... */ }
>>
>>     /** Combine current state of workers. */
>>     public void collect() { /* ... */ }
>>
>>     /** @return the current (combined) value of a named quantity. */
>>     public double get(String name) { /* ... */ }
>>
>>     private class StatCollector implements Callable {
>>         StatCollector(DoubleStream data) { /* ... */ }
>>     }
>> }
>>
>> This is all totally untested, very partial, and probably wrong-headed but
>> I thought that we were looking at this kind of refactoring.
>>
>> Regards,
>> Gilles
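As a rough illustration of the combine-across-workers idea, two workers can
accumulate independently and be merged on demand. This is a sketch under
assumptions: it reuses the illustrative Mean accumulator from the earlier
sketch (add(double), add(Mean), getMean()), and CombineDemo is a made-up
name, not part of any proposed API.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Sketch: each worker accumulates its own partition; the caller merges the partial states. */
public class CombineDemo {

    static Mean accumulate(double[] values) {
        Mean m = new Mean();
        for (double v : values) {
            m.add(v);
        }
        return m;
    }

    public static void main(String[] args) throws Exception {
        double[] part1 = {1, 2, 3};
        double[] part2 = {4, 5, 6, 7};

        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<Mean> a = pool.submit(() -> accumulate(part1));
        Future<Mean> b = pool.submit(() -> accumulate(part2));

        Mean combined = a.get();   // partial result from the first worker
        combined.add(b.get());     // merge in the second worker's state
        System.out.println(combined.getMean()); // 4.0

        pool.shutdown();
    }
}

The merge step here is the same operation that Stream.collect's combiner, or
a StatCollection-style collect(), would perform across threads.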