Bulat Ziganshin wrote:
> yes. multi-threaded GC is planned gor next ghc version, afair
To be clear, it'll be a parallel GC, not a concurrent one. The former
still stops all threads, but runs the collector on multiple cores at
once. The latter performs collection while mutator threads are still
Hello Andrew,
Monday, April 21, 2008, 12:05:28 AM, you wrote:
> My brain is telling me I've read something somewhere that had in-depth
> information about GHC's GC implementation - but I can't remember where I
> saw it... (Maybe the developer wiki? I'll go look there anyway, they
> might have so
Bulat Ziganshin wrote:
3. Is there any way for a running Haskell program to find out how much
heap space is currently allocated / used / free?
i think it's possible by asking internal RTS vars. SM once suggested
to add to GHC library that provides official way to ask this info but
no volun
Brandon S. Allbery KF8NH wrote:
On Apr 20, 2008, at 15:41 , Andrew Coppin wrote:
1. Does running the GC force all threads to stop? I know some GC
designs do this, but I have no idea how the one GHC implements works.
2. Is the GC still single-threaded? (GHC 6.8.2 here.)
Full GC is single-th
Hello Andrew,
Sunday, April 20, 2008, 11:41:52 PM, you wrote:
yes, GC behavior has significant impact on any new language (i mean
java, c#, f# and so on)
> 1. Does running the GC force all threads to stop? I know some GC designs
> do this, but I have no idea how the one GHC implements works.
ye
On Apr 20, 2008, at 15:51 , Brandon S. Allbery KF8NH wrote:
On Apr 20, 2008, at 15:41 , Andrew Coppin wrote:
3. Is there any way for a running Haskell program to find out how
much heap space is currently allocated / used / free? I know you
can find out how much wall time and CPU time you've
On Apr 20, 2008, at 15:41 , Andrew Coppin wrote:
1. Does running the GC force all threads to stop? I know some GC
designs do this, but I have no idea how the one GHC implements works.
2. Is the GC still single-threaded? (GHC 6.8.2 here.)
Full GC is single-threaded and stops the entire prog
OK, well I now have so much data sitting in from of me I don't even know
*what* I'm seeing any more. I have made several significant discoveries
though...
Consider the following:
msort [] = []
msort [x] = [x]
msort xs =
let
xs0 = msort (split0 xs)
xs1 = msort (split1 xs)
in
Andrew Coppin wrote:
The results of this little benchmark utterly defy comprehension. Allow
me to enumerate:
Weird thing #1: The first time you sort the data, it takes a few
seconds. The other 7 times, it takes a split second - roughly 100x
faster. Wuh?
The test data was a CAF. I changed it
Bulat Ziganshin wrote:
Hello Andrew,
Saturday, April 19, 2008, 7:50:30 PM, you wrote:
The data to be sorted is generated using a trivial LCG PRNG.
if you don't generate new data for each sorting run, this means that
data generation is much slower than sorting. don't forget about ghc
Bulat Ziganshin wrote:
Hello Brandon,
Saturday, April 19, 2008, 8:24:03 PM, you wrote:
contention. (Note that resource locking will be done by the threaded
runtime even with only one thread, so you will see some slowdowns
especially in I/O-related code.)
yes, i forget about this.
On Apr 19, 2008, at 11:53 , Murray Gross wrote:
2. You need to account for I/O buffering (not only by your OP
system in RAM, but by your disk controller)--after the first set of
I/O operations, your data may be in buffers, so subsequent uses may
retrieve data from buffers rather than from t
Hello Brandon,
Saturday, April 19, 2008, 8:24:03 PM, you wrote:
> contention. (Note that resource locking will be done by the threaded
> runtime even with only one thread, so you will see some slowdowns
> especially in I/O-related code.)
yes, i forget about this. Simon wrote once that locking
On Apr 19, 2008, at 11:50 , Andrew Coppin wrote:
Bulat Ziganshin wrote:
there are plenty of reasons: first, -threaded make i/o overlapped
with calculations.
Not with -N1.
Depending on how it's implemented (I not being a ghc guru), possibly
even with -N1 as long as it's using the thread-ca
Okay, here are my thoughts:
On Apr 19, 2008, at 9:56 AM, Andrew Coppin wrote:
Weird thing #1: The first time you sort the data, it takes a few
seconds. The other 7 times, it takes a split second - roughly 100x
faster. Wuh?
This looks like standard memoization to me. I know, I know, GHC
d
Hello Andrew,
Saturday, April 19, 2008, 7:50:30 PM, you wrote:
>> this looks like disk caching effects. if data are read from disj on
>> first run and from disk cache on the next runs, this only means that
>> your algorithm works faster than reading its data from disk
>>
> Negative. No data i
On Apr 19, 2008, at 9:56 AM, Andrew Coppin wrote:
Weird thing #3: Adding the "-threaded" compiler option makes
*everything* run a few percent faster. Even with only 1 OS thread.
I had a similar thing happen to me once. (http://geekrant.wordpress.com/2007/11/29/holy-shmoly-ghc-does-some-magic-a
Bulat Ziganshin wrote:
Weird thing #1: The first time you sort the data, it takes a few
seconds. The other 7 times, it takes a split second - roughly 100x
faster. Wuh?
this looks like disk caching effects. if data are read from disj on
first run and from disk cache on the next runs, thi
I can't offer definite answers to your questions, but I can suggest a few
issues you should consider:
1. Merge sort doesn't parallelize all that well--when the blocks are
small, the parallelization overhead is large in comparison with the
productive work that is to be done, and when the bl
Hello Andrew,
Saturday, April 19, 2008, 6:56:10 PM, you wrote:
> OK, so just for fun, I decided to try implementing a parallel merge sort
coincedence - now i'm writing a parallel compression algorithm, very
much like parallel bzip2, but using ghc, of course
> Weird thing #1: The first time you
Denis Bueno wrote:
It would be much easier to draw sound conclusions if you would post your code.
Erm... good point.
See attachments.
module Sort where
import Control.Parallel
import Control.Parallel.Strategies
split0 [] = []
split0 (x:xs) = x : split1 xs
split1 [] = []
split1 (x:xs) =
On Sat, Apr 19, 2008 at 10:56 AM, Andrew Coppin
<[EMAIL PROTECTED]> wrote:
> Can anybody explain any of this behaviour? I have no idea what I'm
> benchmarking, but it certainly doesn't appear to be the performance of a
> parallel merge sort!
It would be much easier to draw sound conclusions if yo
OK, so just for fun, I decided to try implementing a parallel merge sort
using the seq and par combinators. My plan was to generate some
psuedo-random data and time how long it takes to sort it. To try to
account for lazy evaluation, what the program actually does is this:
1. Write the input d
23 matches
Mail list logo