I am attempting to benchmark a number of queries over a 15GB dataset with
~10 million records. When I run Linux time on the query execution (a
single-column projection), it reports 1 minute, but psql's \timing command
reports only 15 seconds. Can someone explain the difference? 1 minute is
consistent with reading the 15GB from disk at 250MB/s (I have SSDs), but is
\timing supposed to include that cost, or only the computation time plus
the time to return results?
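For reference, the two measurements I'm comparing look roughly like this
(database and table names below are placeholders, not my actual schema):

```shell
#!/bin/sh
# Whole-process wall-clock time: includes psql startup, connection setup,
# query execution, transferring all rows to the client, and printing them.
time psql -d benchdb -c "SELECT col1 FROM records;" > /dev/null

# \timing: psql prints the elapsed time it observes for the query itself,
# measured inside an already-connected session.
psql -d benchdb > /dev/null <<'SQL'
\timing on
SELECT col1 FROM records;
SQL
```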

Thank you.

Daniel

Computer Science
Yale University, Class of 2014
daniel.tah...@yale.edu
(646) 397-6379
