On 03/20/2013 11:17 AM, Branimir Maksimovic wrote:
Are you sure? I use ghc 7.6.2 (compiled with -O2) and without bang patterns
for 1 million iterations it runs out of stack space.
With bang patterns it runs in constant space, same as
the other version?
Okay, I have found the root of the allocation problem.
On 20 March 2013 11:41, Konstantin Litvinenko wrote:
> On 03/20/2013 11:17 AM, Branimir Maksimovic wrote:
>>
>> Are you sure? I use ghc 7.6.2
>
>
> Huh, I use 7.4.2, and if 7.6.2 can handle this I will try to switch. Not
> sure how to do that on ubuntu 12.10...
I always install ghcs under my home directory.
On 03/20/2013 11:17 AM, Branimir Maksimovic wrote:
Are you sure? I use ghc 7.6.2
Huh, I use 7.4.2, and if 7.6.2 can handle this I will try to switch. Not
sure how to do that on ubuntu 12.10...
On 03/20/2013 12:47 AM, Branimir Maksimovic wrote:
Your problem is that main_6 thunks 'i' and 'a'.
If you write (S6 !i !a) <- get
then there is no problem any more...
Nope :( Unfortunately that doesn't change anything. Still allocating...
> On 03/19/2013 10:49 PM, Konstantin Litvinenko wrote:
> > {-# LANGUAGE BangPatterns #-} [...]
I guess the optimizations that went into making lazy bytestring IO fast (on
disks) are increasingly irrelevant as SSDs take over.
Hi Don,
> "Using this input file stored in /dev/shm"
>
> So not measuring the IO performance at all. :)
of course the program measures I/O performance. It just doesn't measure
the speed of the disk.
Anyway, a highly optimized benchmark such as the one you posted is
eventually going to beat on
Oh I see what you're doing ... "Using this input file stored in /dev/shm"
So not measuring the IO performance at all. :)
Hi Don,
> Compare your program (made lazy) on lazy bytestrings using file IO: [...]
if I make those changes, the program runs even faster than before:
module Main ( main ) where
import Prelude hiding ( foldl, readFile )
import Data.ByteString.Lazy.Char8
countSpace :: Int -> Char -> Int
countSpace i c | c == ' ' || c == '\n' = i + 1
               | otherwise             = i
[...]
On 03/19/2013 10:49 PM, Konstantin Litvinenko wrote:
{-# LANGUAGE BangPatterns #-}
import Control.Monad.State.Strict
data S6 = S6 !Int !Int
main_6 = do
    let r = evalState go (S6 1 0)
    print r
  where
    go = do
        (S6 i a) <- get
        if (i == 0) then return a else (pu
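The listing above is cut off mid-expression. For concreteness, here is a self-contained sketch of the loop being discussed with Branimir's "(S6 !i !a) <- get" change applied; the loop body and the iteration count are assumptions, since they do not appear in the truncated listing:

{-# LANGUAGE BangPatterns #-}
module Main (main) where

import Control.Monad.State.Strict

data S6 = S6 !Int !Int

main :: IO ()
main = print (evalState go (S6 1000000 0))  -- iteration count assumed
  where
    go = do
      -- the suggested change: bang patterns on the components pulled out of the state
      (S6 !i !a) <- get
      if i == 0
        then return a
        else put (S6 (i - 1) (a + 1)) >> go  -- loop body assumed: count down, accumulate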
This isn't a valid entry -- it uses strict IO (so allocates O(n) space) and
reads from standard input, which pretty much swamps the interesting
constant factors with buffered IO overhead.
Compare your program (made lazy) on lazy bytestrings using file IO:
import Prelude hiding ( readFile, foldl )
Don Stewart writes:
> Here's the final program: [...]
Here is a version of the program that is just as fast:
import Prelude hiding ( getContents, foldl )
import Data.ByteString.Char8
countSpace :: Int -> Char -> Int
countSpace i c | c == ' ' || c == '\n' = i + 1
               | otherwise             = i
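The rest of the listing is cut off. Given the import list (getContents and foldl are hidden from the Prelude in favour of Data.ByteString.Char8), the missing driver was presumably a single fold over standard input, something along these lines:

main :: IO ()
main = getContents >>= print . foldl countSpace 0  -- reads stdin as one strict ByteString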
On 03/19/2013 10:53 PM, Nicolas Trangez wrote:
On Tue, 2013-03-19 at 20:32 +0000, Don Stewart wrote:
So about 8x faster. Waiting for some non-lazy bytestring benchmarks... :)
You could try something like this using Conduit:
{-# LANGUAGE BangPatterns #-}
module Main (main) where
import Data.C
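The conduit listing is cut off above. As a rough, self-contained sketch of that approach, written against the conduit API of roughly that era (Data.Conduit.Binary, Data.Conduit.List); the file path and the exact fold are assumptions, not the code Nicolas posted:

{-# LANGUAGE BangPatterns #-}
module Main (main) where

import Control.Monad.Trans.Resource (runResourceT)
import Data.Conduit (($$))
import qualified Data.Conduit.Binary as CB
import qualified Data.Conduit.List as CL
import qualified Data.ByteString as S
import Data.Word8 (isSpace)

main :: IO ()
main = do
    -- stream the file in strict chunks and fold a strict counter over them;
    -- "input.txt" is a placeholder file name
    n <- runResourceT $ CB.sourceFile "input.txt" $$ CL.fold step 0
    print (n :: Int)
  where
    step !acc chunk = acc + S.length (S.filter isSpace chunk)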
Oh, I forgot the technique of inlining the lazy bytestring chunks, and
processing each chunk separately.
$ time ./fast
4166680
./fast 1.25s user 0.07s system 99% cpu 1.325 total
Essentially inline Lazy.foldlChunks and specialize it (the inliner should
really get that).
And now we have a nice un
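A sketch of what inlining the lazy bytestring chunks and processing each strict chunk separately can look like. This is a reconstruction of the idea, not Don's actual program; the space/newline test and the file name are assumptions:

{-# LANGUAGE BangPatterns #-}
module Main (main) where

import qualified Data.ByteString as S
import qualified Data.ByteString.Unsafe as S (unsafeIndex)
import qualified Data.ByteString.Lazy as L
import qualified Data.ByteString.Lazy.Internal as L (ByteString (..))

-- count spaces and newlines by walking the lazy ByteString's chunks directly
countSpaces :: L.ByteString -> Int
countSpaces = goChunks 0
  where
    -- outer loop: a hand-inlined foldlChunks over the chunk spine
    goChunks !acc L.Empty        = acc
    goChunks !acc (L.Chunk c cs) = goChunks (goBytes acc 0 c) cs

    -- inner loop: a tight, index-based walk over one strict chunk
    goBytes !acc !i c
      | i >= S.length c = acc
      | otherwise       =
          let w = S.unsafeIndex c i
          in goBytes (if w == 32 || w == 10 then acc + 1 else acc) (i + 1) c

main :: IO ()
main = L.readFile "input.txt" >>= print . countSpaces  -- file name assumed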
Just for fun, here are some improvements: about 6x faster.
I'd be interested to see what io-streams could do on this.
Using a 250M test file.
-- strict state monad and bang patterns on the uncons and accumulator
argument:
$ time ./A
4166680
./A 8.42s user 0.57s system 99% cpu 9.037 total
-- just
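A rough sketch of the combination this entry describes: a strict State monad holding the remaining lazy ByteString, with bang patterns on the uncons result and on the accumulator. Again a reconstruction under those assumptions rather than Don's actual code, and the file name is made up:

{-# LANGUAGE BangPatterns #-}
module Main (main) where

import Control.Monad.State.Strict
import qualified Data.ByteString.Lazy as L
import Data.Word (Word8)

-- the state is the rest of the input; the count is threaded as an argument
countSpaces :: L.ByteString -> Int
countSpaces = evalState (go 0)
  where
    go :: Int -> State L.ByteString Int
    go !acc = do
      s <- get
      case L.uncons s of
        Nothing         -> return acc
        Just (!w, rest) -> put rest >> go (step w acc)

    step :: Word8 -> Int -> Int
    step w acc = if w == 32 || w == 10 then acc + 1 else acc  -- space or newline

main :: IO ()
main = L.readFile "bigfile.txt" >>= print . countSpaces  -- file name assumed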
On 03/18/2013 02:14 PM, Gregory Collins wrote:
Put a bang pattern on your accumulator in "go". Since the value is not
demanded until the end of the program, you're actually just building up
a huge space leak there.
Fixed that.
Secondly, unconsing from the lazy bytestring will cause a lot of
allocation churn in the garbage collector [...]
Put a bang pattern on your accumulator in "go". Since the value is not
demanded until the end of the program, you're actually just building up a
huge space leak there.
Secondly, unconsing from the lazy bytestring will cause a lot of allocation
churn in the garbage collector -- each byte read in th
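To make the first point concrete, here is a minimal standalone example of a "go" loop whose accumulator carries a bang pattern, so the count is forced at every step instead of piling up as a chain of thunks that is only demanded at the end. It is a generic illustration, not the original poster's code, and the file name is made up:

{-# LANGUAGE BangPatterns #-}
module Main (main) where

import qualified Data.ByteString.Lazy as L

-- Without the bang, 'acc' is passed along as an ever-growing chain of
-- (+1) thunks that is only demanded once the whole input is consumed;
-- with it, each step is evaluated immediately and the loop stays in
-- constant space.
go :: Int -> L.ByteString -> Int
go !acc s = case L.uncons s of
  Nothing        -> acc
  Just (w, rest) -> go (if w == 32 || w == 10 then acc + 1 else acc) rest

main :: IO ()
main = L.readFile "input.txt" >>= print . go 0  -- file name assumed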
Hi All!
I'm tuning my toy project for performance and hit a wall on a task that is
simple in the imperative world. Here is the code that models what I'm trying
to achieve:
import qualified Data.ByteString.Lazy as L
import Data.Word8(isSpace)
import Data.Word
import Control.Monad.State
type Stream = State L.ByteString
[...]