On 8 July 2014 13:41, Mark Felder wrote:
>
> On Jul 8, 2014, at 5:58, Ivan Voras wrote:
>> I'm waiting to upgrade some PostgreSQL machines running FreeBSD 9 to
>> FreeBSD 10 - are the patches committed yet / will they be committed for
>> 10.1?
On 27/06/2014 14:56, Konstantin Belousov wrote:
> Hi,
> I did some measurements and hacks to see about the performance and
> scalability of PostgreSQL 9.3 on FreeBSD, sponsored by The FreeBSD
> Foundation.
>
> The results are described in https://kib.kiev.ua/kib/pgsql_perf.pdf.
> The uncommitted p
On 07/10/2013 19:28, David Wolfskill wrote:
> At work, we have a bunch of machines that developers use to build some
> software. The machines presently run FreeBSD/amd64 8.3-STABLE @rxx
> (with a few local patches, which have since been committed to stable/8),
> and the software is built within
On 03/07/2013 18:19, TJ wrote:
> Hi Guys,
> I am looking for some advice to help get the best out of one of my servers.
> It is a Dell PowerEdge R420, 32GB, 2x8c CPU and igb nics.
> Its primary purpose is to send outgoing mail; it can send up to 3 million
> emails a day and it is running exim.
> I am rela
On 20/08/2012 17:22, Alan Cox wrote:
> Try setting kern.maxbcache to two billion and adding 50 billion to the
> setting of vm.kmem_size{,_max}.
Just as a side-note: unless it has some side-effects, it is probably
worth increasing these tunables by default, as RAM is very cheap again.
512 GB in a
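A hedged sketch of how such tunables would be set at boot. The values below are illustrative only (kern.maxbcache at roughly the suggested two billion, vm.kmem_size raised similarly) and belong in /boot/loader.conf, since neither is a runtime sysctl:

```
# /boot/loader.conf -- illustrative values only, not recommendations
kern.maxbcache="2000000000"   # ~2 billion bytes, per the suggestion above
vm.kmem_size="64G"            # raised well beyond the old default
vm.kmem_size_max="64G"
```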
On 28/01/2012 23:40, Florian Smeets wrote:
The conclusion right now seems to be that ULE is faster for database
workload,
I've done the same benchmarks with Bullet Cache last year and 4BSD is
*ridiculously* inefficient and slow for this specific workload which
involves a lot of inter-thread
On 24/01/2012 17:53, Marcin Markowski wrote:
On 24.01.2012 14:22, Ivan Voras wrote:
On Mon, Jan 23, 2012 at 12:20 PM, Marcin Markowski wrote:
(on 9.0 we can see also kernel thread named {ix0 que} using 100% CPU),
hw.ixgbe.num_queues=16
If there really are 16 hardware queues, shouldn
On Mon, Jan 23, 2012 at 12:20 PM, Marcin Markowski wrote:
(on 9.0 we can see also kernel thread named {ix0 que} using 100% CPU),
hw.ixgbe.num_queues=16
If there really are 16 hardware queues, shouldn't there be 16 kernel
threads for queue processing?
On 21/10/2011 08:30, Hartmann, O. wrote:
> As I'm not a developer, but for scientific purposes highly interested in
> using GPUs, the only way of doing HPC computing at the moment is with
> nVidias TESLA/nVidia consumer graphics cards and LINUX, since on Linux
> one willing to use the GPU has the n
On 2 June 2011 16:24, Andriy Gapon wrote:
> on 02/06/2011 15:02 Ivan Voras said the following:
>> On 01/06/2011 13:11, Andriy Gapon wrote:
>>>
>>> Anyone knows of a benchmark/test that can measure/demonstrate difference in
>>> tlb
>>> shootdown perfo
On the second reading, if you are asking how fast a shootdown operation itself
is, then yes, it will probably not help you :)
Andriy Gapon wrote:
on 02/06/2011 15:02 Ivan Voras said the following:
> On 01/06/2011 13:11, Andriy Ga
On 01/06/2011 13:11, Andriy Gapon wrote:
Anyone knows of a benchmark/test that can measure/demonstrate difference in tlb
shootdown performance (or its lack)?
The "tlb" utility from lmbench may help you.
On 04/04/2011 06:30, binto wrote:
I got an error message & my server suddenly dropped:
g_vfs_done() error = 6
g_vfs_done(): ad10s2a[READ(offset=1348599808, length=16384)] error = 6
Can anyone help me please?
This is the wrong list for this question. Better ask on stable@ or
file-systems@
For
On 10/01/2011 14:07, Bruce Cran wrote:
On Mon, 10 Jan 2011 13:49:08 +0100
Ivan Voras wrote:
It depends - since ZFS is logging all the time it doesn't have to
seek as much; if all transactions are WRITE and given sequentially,
they will be written to the drive sequentially, even with
On 07/01/2011 16:23, Stefan Lambrev wrote:
Hi,
Bearing in mind that an enterprise SAS disk can normally handle 150-180 IOPS,
this benchmark is testing something else ;)
It depends - since ZFS is logging all the time it doesn't have to seek
as much; if all transactions are WRITE and given sequen
On 31/12/2010 10:06, Nicolas Haller wrote:
Someone knows if there is a page which explains FreeBSD mechanisms about
memory and fs cache management? I think I must read something on it :-)
I don't think there's a single up to date document describing all of it,
but it's conceptually simple and
On 12/06/10 17:31, O. Hartmann wrote:
I know, the essential backend of this chain will be the AMD graphics
card driver with its CAL compiler generating the binary code.
This is probably the biggest obstacle - AMD/ATI support for FreeBSD is
terrible. Specifically in this case, there is no vend
On 11/25/10 10:20, Yar Tikhiy wrote:
If you
still need greater write performance on tiny transactions, consider
getting a battery backup unit (BBU) for your RAID adapter. Quite
remarkably, HP refer to them as "Write-back Cache Enablers" because
installing one is the only way to get an HP RAID a
On 23 November 2010 10:35, David Xu wrote:
> Ivan Voras wrote:
>> and the overall behaviour is similar - the processes spend a lot of time
>> in "sbwait" and "ksem" states.
>>
> Strange, the POSIX semaphore in head branch does not use ksem, it is
On 11/23/10 01:26, Ivan Voras wrote:
On 11/22/10 17:37, David Xu wrote:
Mark Felder wrote:
I recommend posting this on the Postgres performance list, too.
Regards,
Mark
I think if PostgreSQL uses semaphores for inter-process locking,
it might be a good idea to use the POSIX semaphore that exists
On 11/22/10 17:37, David Xu wrote:
Mark Felder wrote:
I recommend posting this on the Postgres performance list, too.
Regards,
Mark
I think if PostgreSQL uses semaphores for inter-process locking,
it might be a good idea to use the POSIX semaphore that exists in our head
branch, the new POSIX semap
This is not a request for help but a report, in case it helps developers
or someone in the future. The setup is:
AMD64 machine, 24 GB RAM, 2x6-core Xeon CPU + HTT (24 logical CPUs)
FreeBSD 8.1-stable, AMD64
PostgreSQL 9.0.1, 10 GB shared buffers, using pgbench with a scale
factor of 500 (7.5 GB
On 10/27/10 13:19, David Wolfskill wrote:
>> note 2x drop in performance between outer and inner tracks.
>
> OK, but I'm not sure how that's likely to work for a multi-spindle RAID
> 0 group
Unless the RAID controller is trying to be overly smart (i.e. plays with
fire) by somehow alternating
On 10/27/10 12:55, David Wolfskill wrote:
> That *is* a problem, as I cannot justify a migration to a branch
> of FreeBSD that imposes about a 23% penalty in elapsed time on this
> workload. I want folks at work to have more reason to want to use
> (newer branches of) FreeBSD, not less.
That is
On 10/26/10 19:45, David Wolfskill wrote:
> On Tue, Oct 26, 2010 at 02:03:34PM +0200, Ivan Voras wrote:
>> ...
>> Since you now have the two kernels readily available, can you rule out
>> NFS by just repeating the step which involves it in both kernels and
>> compare th
On 10/21/10 23:53, Dan Nelson wrote:
> In the last episode (Oct 20), David Wolfskill said:
>> Almost 2 years ago, we migrated from a lightly-patched 6.2-R to 7.1-R with
>> 5 commits that were made to 7.1-S backported to it. On the same hardware
>> (not the HP mentioned above), I measured a 35% red
On 09/28/10 14:44, Stephen Sanders wrote:
Increasing MAXPHYS and turning up the stripe size won't have the effect
I'm looking for?
I missed that you tuned MAXPHYS up. Yes, in this case it should work, if
the underlying driver supports larger IO sizes.
As I see it, if all of these are true:
*
On 09/28/10 03:08, Stephen Sanders wrote:
I'm trying a disk throughput experiment wherein two 3ware RAID 6s are
being put into a gstripe RAID 0.
The raid 6's are using 8 7200RPM disks. The disk transfer rate is
~80MB/s. Using a load generation tool that is using O_DIRECT for I/O,
I've generate
On 09/19/10 06:57, Alexander Motin wrote:
Getting back to that topic I would like to share some more results. This
time I was testing Core(TM) i7 870 @ 2.93GHz. It has 8 logical cores and
bigger allowed TurboBoost effect. I was testing real time of net/mpd5
port building, using single CPU. I was
On 07/07/10 01:35, Stephen Sanders wrote:
> I'm wondering if anyone has heard of this.
>
> I've a system with a 3ware 9650 servicing 4 7200RPM Seagate 1TB drives
> and the motherboard servicing 2 7200 RPM Seagate 1TB drives.
So far so good.
> The 4 disk array is RAID 6 while the 2 disk array is RA
On 04/14/10 15:54, Christoph Weber-Fahr wrote:
> Hello,
>
> On 14.04.2010 11:04, Ivan Voras wrote:
>> On 04/13/10 22:30, Christoph Weber-Fahr wrote:
>>> Hello,
>>>
>>> on a new HP Proliant DL385 G6 I have a P410 with BBWC and
>>> 8 hard drive
On 04/13/10 22:30, Christoph Weber-Fahr wrote:
> Hello,
>
> on a new HP Proliant DL385 G6 I have a P410 with BBWC and
> 8 hard drives in RAID5.
> (BBWC is Battery Backed Write Cache Enabler, and the controller
> is configured with 300M (75%) write cache).
>
> One of the applications we want to ru
On 01/19/10 15:40, Guo, Yusheng wrote:
Hi FreeBSD users,
I got a problem when running the ubench benchmark program on FreeBSD 8.0 using
the following H/W:
HP Proliant DL360G6, E5520 CPU and 48GB memory.
The memory performance result seems bad; has anyone seen such a situation
before? Thanks.
For refere
Gerd Truschinski wrote:
Hello,
is there a website that shows me all the performance tools to
measure the performance of ZFS or a single hard disk?
In Linux I have "hdparm -tT /dev/sda" to get the raw disk performance.
What is the equivalent FreeBSD command?
"diskinfo -vt /dev/blahblah"
Steven Hartland wrote:
Try with something like this, which is the standard set we use on our
file serving machines.
net.inet.tcp.inflight.enable=0
net.inet.tcp.sendspace=65536
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
16 MB network buffers
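A hedged sketch of how that tuning set would usually be persisted: the lines go into /etc/sysctl.conf and are applied at boot (or via `sysctl -f` / `service sysctl restart` as root). The path below is a stand-in so the snippet can be tried anywhere:

```shell
# Write the quoted tuning set to a sysctl.conf fragment.
# CONF is a hypothetical path; on a real machine you would
# append these lines to /etc/sysctl.conf instead.
CONF="${CONF:-/tmp/sysctl.conf.example}"
cat > "$CONF" <<'EOF'
net.inet.tcp.inflight.enable=0
net.inet.tcp.sendspace=65536
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
EOF
# 16777216 bytes = 16 MB maximum socket buffer, matching the comment above
grep -c '=' "$CONF"
```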
Chuck Swiger wrote:
Hi, Steve--
On Oct 17, 2009, at 8:14 AM, Steve Dong wrote:
If there's a better/lighter way to show these graphics, I'd like to know.
Sure-- put 'em on a webserver somewhere, and put links to them in your
email to this mailing list.
If you wanted to do even better than t
Steve Dong wrote:
It looks like the jpeg attachments were somehow dropped. Trying again with pdf
attachment. Hopefully it works this time.
Hi,
I haven't tried comparing this sort of performance with Linux so your
conclusion still might be right, but the fact that you couldn't saturate
1 Gbps on
Invernizzi Fabrizio wrote:
> Hi all
>
> I am going on with some performance tests on a 10gbe network card with
> FreeBSD.
>
> I am doing this test: I send UDP traffic to be forwarded to the other port of
> the card on both the card ports.
> Using 1492-byte packets I am increasing the number of p
Olivier Mueller wrote:
> On Wed, 2009-05-06 at 16:15 +0300, Arkadi Shishlov wrote:
>> It's probably "dirhash" that is not enabled or its cache is too small for the
>> task.
>
> $ sysctl -a |grep dirha
> UFS dirhash 1262 286K - 9715683
> 16,32,64,128,256,512,1024,2048,4096
> vfs.ufs.d
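The usual remedy in this situation (an assumption about where the truncated sysctl output was heading) is raising vfs.ufs.dirhash_maxmem; a sketch for /etc/sysctl.conf with an illustrative value:

```
# /etc/sysctl.conf -- illustrative value, not a recommendation
vfs.ufs.dirhash_maxmem=8388608   # 8 MB for the UFS dirhash cache
```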
David Wolfskill wrote:
> I apologize, as this is a bit tangential to the description of the list.
>
> I've been doing some measurements of workloads of interest (in my case,
> the workload is building some software, and the metric of greatest
> interest is "elapsed time" (which I obtain via /usr/b
Anthony Bourov wrote:
> Regarding performance of: lib/libc/net/nsdispatch.c
Have you tried nscd(8)? It should at least amortize the startup costs...
(see nsswitch.conf(5) for instructions how to set it up)
2009/2/11 Antony Mawer :
> How would one go about gathering data on such a scenario to help improve
> this? We were planning a project involving VMware deployments with FreeBSD
> 7.1 systems in the near future, but if performance is that bad it is likely
> to be a show stopper.
I have now tested
Antony Mawer wrote:
> Ivan Voras wrote:
>> Sebastiaan van Erk wrote:
>>> Sebastiaan van Erk wrote:
>>>> (However, just to give you an idea I attached the basic 5.1.2
>>>> unixbench outputs (the CPU info for FreeBSD is "fake", since unixbench
&
Sebastiaan van Erk wrote:
> Sebastiaan van Erk wrote:
>> (However, just to give you an idea I attached the basic 5.1.2
>> unixbench outputs (the CPU info for FreeBSD is "fake", since unixbench
>> does a cat /proc/cpuinfo, so I removed the /proc/ part and copied the
>> output under linux to the "pro
Sebastiaan van Erk wrote:
> Hi,
>
> I want to deploy a production FreeBSD web site (database cluster, apache
> cluster, ip failover using carp, etc.), however I'm experiencing painful
> disk I/O throughput problems which currently does not make the above
> project viable. I've done some rudimentar
Alex Dehaini wrote:
> Hi Guys,
>
> I have some issues with Squid on Freebsd. I am running FreeBSD release 4.9
> and Squid version 2.5.
>
> I have setup FreeBSD as a bridge so that all traffic from my network can
> transparently pass through the FreeBSD server. I am running Squid on the
> same ser
Mike Tancsa wrote:
FreeBSD 7.1-PRERELEASE #0: Fri Dec 19 19:48:15 EST 2008
mdtan...@ns3c.recycle.net:/usr/obj/usr/src/sys/recycle
Timecounter "i8254" frequency 1193182 Hz quality 0
CPU: Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz (2666.78-MHz
686-class CPU)
Origin = "GenuineIntel" I
Mike Tancsa wrote:
> Just got our first board to play around with and unlike in the past,
> having hyperthreading enabled seems to help performance At least in
> buildworld tests.
>
> doing a make -j4 vs -j6 make -j8 vs -j10 gives
>
> -j buildworld time% improvement over -j4
> 4 13
Stephen Sanders wrote:
> This may be a bad list to post to for this problem but I'm having an
> issue where in it appears that the boot loader fails and then overwrites
> the MBR with 0.
>
> The system boots to :
>
> F1 - Linux
> F3 - FreeBSD
> F5 - Drive 1"
>
> Default:
>
> But then fails on "
O. Hartmann wrote:
> Ivan Voras wrote:
> ...
>
>>
>> OTOH if the goal is to measure "operating system" performance, this
>> must also include the compiler, libraries and all. (for example, what
>> does Solaris default to nowadays? I think it ships with
2008/11/26 Alexander Leidinger <[EMAIL PROTECTED]>:
> If you want to test OS performance and use Java programs in there to do so,
> you would use the same Java version, wouldn't you? They didn't.
Linux: 1.6.0_0-b12
Solaris: 1.6.0_10-b33
FreeBSD: 1.6.0_07-b02
Since systems have their local patches
2008/11/25 Adrian Chadd <[EMAIL PROTECTED]>:
> 2008/11/25 Ivan Voras <[EMAIL PROTECTED]>:
>
>>> I believe most of the synthetic numbers (mp3 encoding etc.) difference
>>> comes from the different version of gcc the different OS uses...
>>
>> You'
Roman Divacky wrote:
> On Tue, Nov 25, 2008 at 12:08:27PM +0100, Ivan Voras wrote:
>> Steven Hartland wrote:
>>> http://www.phoronix.com/scan.php?page=article&item=os_threeway_2008&num=1
>>>
>>> Was interesting until I saw this:-
>>>
>>
Steven Hartland wrote:
> http://www.phoronix.com/scan.php?page=article&item=os_threeway_2008&num=1
>
> Was interesting until I saw this:-
>
The results seem well within expectations, for the sort of benchmarks
they did: there is little difference between the systems. Depending on
the details of
Stephen Sanders wrote:
> FreeBSD 6.3
> Dual Quad Core Xeon [EMAIL PROTECTED]
FreeBSD 6.3 isn't very suited for your CPU. If your workload isn't
completely CPU bound (i.e. if it isn't [EMAIL PROTECTED]), you will not only not
make use of all 8 CPU cores but will probably get worse performance with
8 C
Alexander Strange wrote:
And there's no firewalls or packet shapers in front of it.
How about on it? Do you run ipfw?
___
freebsd-performance@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-performance
To unsubscribe, send
Alexander Strange wrote:
We're running a rather high-load webserver using FreeBSD
7-RELEASE/amd64/nginx on an Intel em gigabit connection.
Performance is good for our current bandwidth use (about 20Mbit and
~2000 connections/sec at the moment), but a large number of HTTP
requests are being imme
Ivan Voras wrote:
Per my discussion with Scott Long: Can you repeat the test for UFS, but
create gstripe with a really small stripe size, like 4 KB?
Actually, no need to do that - it looks like iozone is doing quite
random IO ops so it won't help you.
on with Scott Long Can you repeat the test for UFS, but
create gstripe with a really small stripe size, like 4 KB?
-Ben
Ivan Voras wrote:
Benjeman J. Meekhof wrote:
Hi,
I posted earlier about some results with this same system using UFS2.
Now trying to test ZFS. This is a Dell PE2950 with two
Benjeman J. Meekhof wrote:
> Hi,
>
> I posted earlier about some results with this same system using UFS2.
> Now trying to test ZFS. This is a Dell PE2950 with two Perc6
> controllers and 4 md1000 disk shelves with 750GB drives. 16GB RAM, dual
> quad core Xeon. I recompiled our kernel to use the
I found an interesting post:
http://thread.gmane.org/gmane.comp.db.postgresql.performance/15979
By itself the post doesn't say anything specific, except that apparently
great improvements can be gained on some loads with different IO
scheduling policies on Linux.
It's maybe something to take into
On 26/03/2008, Benjeman J. Meekhof <[EMAIL PROTECTED]> wrote:
> Hi Ivan,
>
> Thanks for the response. Your response quotes my initial uneven
> results, but are you also implying that I most likely cannot achieve
> results better than the later results which use a larger filesystem
> blocksize?
Benjeman J. Meekhof wrote:
> My baseline was this - on linux 2.6.20 we're doing 800MB/s write and
> greater read with this configuration: 2 raid6 volumes volumes striped
> into a raid0 volume using linux software raid, XFS filesystem. Each
> raid6 is a volume on one controller using 30 PD. We'v
Robert Watson wrote:
> I've CC'd John, who might have views on what we should do about this.
> It would be nice if we had a way to export information on all the
> interrupt event sources, including soft ones, and their mappings to
> ithreads, including swis, using sysctl. Or maybe we do already
Aminuddin Abdullah wrote:
I have just upgraded 5 of my machines to V7 from 6.3 and then realized that
all the machines have high CPU usage. Almost all of them are using 80%-90% CPU
with more than 8000 connections. With the previous 6.3, they only used 40-50%
CPU with the same kind of connections.
Using
On 13/03/2008, Jeff Roberson <[EMAIL PROTECTED]> wrote:
>
> On Wed, 12 Mar 2008, Ivan Voras wrote:
>
> > On 12/03/2008, Mark Kirkwood <[EMAIL PROTECTED]> wrote:
> >
> >> Hmm - somehow read right past the bit where you say you have a 512MB
> >>
Ivan Voras wrote:
> On 12/03/2008, Mark Kirkwood <[EMAIL PROTECTED]> wrote:
>
>> Hmm - somehow read right past the bit where you say you have a 512MB
>> cache - sorry! However, worth checking it is set to write-back rather
>> than write-through.
>
> As
On 12/03/2008, Mark Kirkwood <[EMAIL PROTECTED]> wrote:
> Hmm - somehow read right past the bit where you say you have a 512MB
> cache - sorry! However, worth checking it is set to write-back rather
> than write-through.
As far as I can see it is set to write-through (though the HP's array
conf
http://www.kaltenbrunner.cc/blog/index.php?/archives/21-guid.html
alan bryan wrote:
> Here's mine for a somewhat similar setup.
> FreeBSD 7.0 PostgreSQL 8.3
> 2x Intel Xeon 2.33GHZ quad cores (8 cores total), 8GB
> RAM, 250GB RAID 10 (4x WD Raptor 10K drives).
>
> Non-default settings:
>
>
Hi,
Has anyone been able to replicate results from
http://www.kaltenbrunner.cc/blog/index.php?/archives/21-guid.html, or
get close to the performance described there on similar hardware (e.g.
thousands of transactions/s) ?
alan bryan wrote:
> --- alan bryan <[EMAIL PROTECTED]> wrote:
>> I've got a 4 disk RAID 10 array.
> Version 1.93d --Sequential Output--
> --Sequential Input- --Random-
> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per
> Chr- --Block-- --Seeks--
> MachineSize K/sec %CP
Stefan Lambrev wrote:
> Greetings.
>
> Ivan Voras wrote:
>> See this:
>> http://lists.freebsd.org/pipermail/freebsd-scsi/2008-February/003383.html
>>
> I do not see the patch in this thread :)
> Is there a patch for 7.0-RELEASE? (If not already p
Stefan Lambrev wrote:
> Greetings.
>
> Ivan Voras wrote:
>> http://lists.freebsd.org/pipermail/freebsd-scsi/2008-February/003383.html
>>
> I do not see the patch in this thread :)
> Is there a patch for 7.0-RELEASE? (If not already patched?)
Try ask
Stefan Lambrev wrote:
> Greeting,
>
> Philip Murray wrote:
>> Hi,
>>
>> I'm trying to use the new iSCSI initiator (thanks!) with 7, but I'm
>> getting dismal performance. A simple dd will max out at about
>> 2MB/sec, and untarring the likes of the ports tree is a painful task.
> I have simila
Bill Moran wrote:
> In response to Brett Bump <[EMAIL PROTECTED]>:
>> I'm seeing signal 6's on apache and imapd (never happened before)
>> network errors, serious response time errors and generally poor
>> performance during peak activity (same box, same people).
>
> IIRC, signal 6 is an indicator
Chuck Swiger wrote:
On Feb 8, 2008, at 12:43 PM, Ivan Voras wrote:
Historically, the Python optimizer wasn't capable of doing much,
true, but the more recent versions of the optimizer can actually do
some peephole optimizations like algorithmic simplification and
constant folding:
Chuck Swiger wrote:
Historically, the Python optimizer wasn't capable of doing much, true,
but the more recent versions of the optimizer can actually do some
peephole optimizations like algorithmic simplification and constant
folding:
http://docs.python.org/whatsnew/other-lang.html#SECTION00
Brooks Davis wrote:
On Fri, Feb 08, 2008 at 09:41:09AM +0100, Erik Cederstrand wrote:
I finally got around to testing this, and with a combination of mtree
comparing md5 hashes, bsdiff compacting changed files and hardlinking
unchanged files I get a reduction in size from 256MB to 10MB. Pretty
Steven Hartland wrote:
Yep, that's where I've traced it to; it's requesting kern.geom.confxml
>
> Which does:-
> static int
> sysctl_kern_geom_confxml(SYSCTL_HANDLER_ARGS)
> {
>int error;
>struct sbuf *sb;
>
>sb = sbuf_new(NULL, NULL, 0, SBUF_AUTOEXTEND);
>g_waitfor_event(g_confxm
On 31/01/2008, Niki Denev <[EMAIL PROTECTED]> wrote:
> On Jan 31, 2008 10:16 PM, Ivan Voras <[EMAIL PROTECTED]> wrote:
> > Niki Denev wrote:
> >
> > > HZ=1000
> > > Time:
> > > 239 seconds total
> > > 122 sec
Niki Denev wrote:
HZ=1000
Time:
239 seconds total
122 seconds of transactions (4 per second)
What do you think?
This is a very low result :) I don't know your machine or the parameters
you used with postmark but even FreeBSD on two striped 7.5 kRPM drives
can achieve ~~ 11
On 30/01/2008, Kris Kennaway <[EMAIL PROTECTED]> wrote:
> Rewrite of the lockmgr primitive, for starters. Then we'll see what
> remains.
Ok, I know about the lockmgr efforts, and they will surely help some
loads. I'll try to compile the results I've been talking about in a
few days and post them
Steven Hartland wrote:
> The machine is running with ULE on 7.0 as mention using an Areca 1220
> controller over 8 disks in RAID 6 + Hotspare.
I'd suggest you first try to reproduce the stall without ULE, while
keeping all other parameters exactly the same.
Kris Kennaway wrote:
> Write performance is something that we are working on, expect to hear
> about progress over the coming weeks/months.
Do you have some notes or descriptions about what is being worked on?
I'm currently doing some file system benchmarking for internal purposes
and I'm seeing
> I had (already) saved the thread in my mail-account so I could look
> it up before I started testing. :-) So I compiled postgresql with the
> option WITH_THREADSAFE=true and used sysbench with --pgsql-host="".
> As pointed out by Ivan my test also involved r/w whereas the thread
> you (
Claus Guttesen wrote:
Ubuntu 7.10:
grep "transactions:" sysbench-clients-24|sort
transactions:1 (2354.49 per sec.)
transactions:10001 (2126.28 per sec.)
transactions:10001 (2215.52 per sec.)
transactions:
Kris Kennaway wrote:
> Erik Cederstrand wrote:
>> Ivan Voras wrote:
>>>
>>> I have a suggestion to make the graphs more readable: if a long
>>> period was chosen by the user (e.g. > 100 days / plot points), don't
>>> plot points and error bars,
Erik Cederstrand wrote:
> I haven't touched malloc.conf but realize that I should. What's the
> official recommendation on malloc settings?
You'd have to patch /usr/src/lib/libc/stdlib/malloc.c and define
MALLOC_PRODUCTION. Yes, it's not elegant.
Kris Kennaway wrote:
The project still needs some work, but there's a temporary web
interface to the data here: http://littlebit.dk:5000/plot/. Apart from
the plotting it's possible to compare two dates and see the files that
have changed. Error bars are 3*standard deviation, for the points wi
On 02/01/2008, Josh Carroll <[EMAIL PROTECTED]> wrote:
> > Does anyone have a theory why syscalls are so expensive in FreeBSD? Here
> > are the results of unixbench 4.1 on two machines. First is the machine
> > running FreeBSD HEAD (debugging disabled) on a dual-core Athlon 64 (i386
> > mode), 2 GH
Bruce Evans wrote:
> FreeBSD has more layers, with less optimization in each layer. Normally
> this doesn't matter, since everyone knows that syscalls are expensive
> and avoids them :-).
My point is that the majority of applications are written for Linux and
they are both syscall-intensive and
Kris Kennaway wrote:
> So it is using getpid? It should be fine on FreeBSD with the previous
> provisos, but you also need to check Linux behaviour and compare on
> identical hardware before you can draw conclusions.
Here's the source of unixbench syscall benchmark:
unsigned long iter;
void re
Kris Kennaway wrote:
> It is likely to remain in people's custom kernels, possibly including
> the one used by Ivan. Anyway, this is all speculation until someone
> studies the claims in more detail.
I'm using GENERIC minus debugging options.
Kris Kennaway wrote:
> That's why it's important to dig into the details of what the benchmark
> is actually doing before you conclude that "the numbers are higher for
> linux, therefore it has faster syscalls".
Can you propose a simpler syscall on the GENERIC kernel that could be
used instead of
Kris Kennaway wrote:
> Gergely CZUCZY wrote:
>>> It looks like myisam is doing huge numbers of concurrent reads of the
>>> same file which is running into exclusive locking in the kernel
>>> (vnode interlock and lockbuilder mtxpool). Does it not do any
>>> caching of the data in userspace but rel
Erich Dollansky wrote:
> I once had the problem of a task moving from CPU to CPU and so
> performing badly on FreeBSD.
This is easy to check: either rebuild a kernel without "options SMP", or
disable processors by setting machdep.hlt_cpus (see smp(4)) or set
hint.lapic.X.disable=1, then run the
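As a hedged illustration of those last two options (a hypothetical 4-CPU box, CPU numbering assumed):

```
# /boot/loader.conf -- leave only CPU 0 enabled on a hypothetical 4-CPU box
hint.lapic.1.disable="1"
hint.lapic.2.disable="1"
hint.lapic.3.disable="1"
```

At runtime the same effect can be approximated with machdep.hlt_cpus, which takes a bitmask of CPUs to halt: `sysctl machdep.hlt_cpus=0xe` halts CPUs 1-3 (binary 1110).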
Shantanu Ghosh wrote:
> Hi,
>
> I am running FreeBSD 7.0 Beta1 and Linux FC6 on two identical pieces of
> hardware - Dell poweredge with intel core2 duo. Each system has 4 CPUs.
>
> Now, in simple memory access operations, I see the freebsd system being
> noticably slower than the linux system. A
Palle Girgensohn wrote:
Hi,
We are looking at getting a server for running postgresql. Only
postgresql, a dedicated machine. Since we know FreeBSD very well, we
plan on using it as the OS.
You might want to wait a little until 7.0 or until some more important
bits get MFC'ed to 6.x:
http:
Stefan Lambrev wrote:
> Can you make this test with default /manual/ alias instead of file.txt,
> so we can compare results ?
Sorry, I currently need that server as-is and I've deleted the manuals
some time ago (since I don't need them there). It would probably be
easier for you to create a dummy
Cheffo wrote:
> What else I can change/test to improve performance?
First you'll have to give more info about the hardware on both systems,
and the way you benchmarked them (e.g. did you benchmark over ethernet
or from the same machine?). There are also a bunch of things that may
make apache go f
1 - 100 of 125 matches