Hi guys,
I just wanted to tell you that the original problem, namely that
backups (with pg_dump) were much slower on the new hardware, IS now solved. After
setting "zone_reclaim_mode = 0", my backups last night were as fast as expected
... for my 100 GByte DB it now took only 1 hour (instead o
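(A sketch of applying that setting on Linux, assuming root access; these exact commands are not from the thread:)

  cat /proc/sys/vm/zone_reclaim_mode    # check the current value; non-zero means zone reclaim is on
  sysctl -w vm.zone_reclaim_mode=0      # change it for the running kernel
  echo "vm.zone_reclaim_mode = 0" >> /etc/sysctl.conf    # persist across reboots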
On Tue, Jul 13, 2010 at 9:10 PM, Craig Ringer
wrote:
>> And tomorrow I will see how my nightly backup runs with this setting.
>
> It sounds like it's time for a post to the Linux Kernel Mailing List,
> and/or a Launchpad bug against the Ubuntu kernel.
>
> Make sure to have your asbestos underwear on
On 13/07/10 22:16, Andras Fabian wrote:
> I think I have found the solution. Yes, I can now get consistently high
> throughput with COPY-to-STDOUT, even if free -m only shows me 82 MBytes (so
> no, this solution is not cleaning the cache). Always around 2 3/4 minutes.
>
> I have compared all the /
On 14/07/2010 12:09 AM, Tim Landscheidt wrote:
ced45 wrote:
I have trouble using the XPath name() function in an XML field.
For example, when I execute the following query:
SELECT XPATH('name(/*)', XMLPARSE(DOCUMENT '<unit>value</unit>'))
It seems very odd that that returns an empty set. I'd expect that i
On 13/07/2010 10:52 PM, Greg Smith wrote:
I heard a scholarly treatment of that topic from Jim Nasby recently,
where he proposed a boolean GUC to toggle the expanded search behavior
to be named plan_the_shit_out_of_it.
I was thinking that something like "duplicate subquery/function
elimination
Joshua Rubin wrote:
> I have two tables each with nearly 300M rows. There is a 1:1
> relationship between the two tables and they are almost always joined
> together in queries. The first table has many columns, the second has
> a foreign key to the primary key of the first table and one more
> co
Hi Ben,
> Stupid question before you do this: is there a reason the design was split
> like this? For instance, if the table with the id and the single field get
> updated a lot, while the other table almost never changes, maybe this design
> isn't so bad.
We just wanted to minimize changes to
On Jul 13, 2010, at 1:46 PM, Joshua Rubin wrote:
> Hi,
>
> I have two tables each with nearly 300M rows. There is a 1:1
> relationship between the two tables and they are almost always joined
> together in queries. The first table has many columns, the second has
> a foreign key to the primary k
Anthony Presley writes:
> Every so often (usually in the early morning), we are seeing an "<IDLE>
> in transaction" show up. This appears to lock / block other statements
> from going through, though I'm not sure why. If left unchecked, we end
> up with all of our connections being overrun.
Well, the
Correct. We are looking to use Nagios to monitor various parameters on our
network, then store them in PostgreSQL, which we will then sync to the ground
and distribute as a quasi-realtime telemetry system.
-Original Message-
From: Magnus Hagander [mailto:mag...@hagander.net]
Sent: T
Hi,
I have two tables each with nearly 300M rows. There is a 1:1
relationship between the two tables and they are almost always joined
together in queries. The first table has many columns, the second has
a foreign key to the primary key of the first table and one more
column. It is expected that
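(To make the described layout concrete, a minimal sketch; all table and column names here are made up, not from the original post:)

  psql <<'SQL'
  -- wide table with many columns
  CREATE TABLE big (
      id   bigint PRIMARY KEY,
      col1 text,
      col2 text   -- ... many more columns in the real schema
  );
  -- narrow table: a foreign key to big's primary key plus one extra column
  CREATE TABLE narrow (
      id    bigint PRIMARY KEY REFERENCES big(id),
      extra text
  );
  -- the two are almost always joined like this:
  SELECT b.*, n.extra FROM big b JOIN narrow n ON n.id = b.id;
  SQL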
safest path is to get the source code and build it ... if you are unable to get
the source, ping the author ... if the author doesn't respond, implement that
plugin or feature in another language (Perl/Java/PHP would be my recommendation).
keep us apprised,
Martin Gainty
On 13 July 2010 21:25, Magnus Hagander wrote:
> On Tue, Jul 13, 2010 at 20:10, Thom Brown wrote:
>> On 13 July 2010 17:14, Duncavage, Daniel P. (JSC-OD211)
>> wrote:
>>> We are implementing Nagios on Space Station and want to use PostgreSQL to
>>> store the data on orbit and then replicate that
On Tue, Jul 13, 2010 at 20:10, Thom Brown wrote:
> On 13 July 2010 17:14, Duncavage, Daniel P. (JSC-OD211)
> wrote:
>> We are implementing Nagios on Space Station and want to use PostgreSQL to
>> store the data on orbit and then replicate that db on the ground. The
>> problem is, most people use
-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160
>> Ah, good news, glad I was misinformed. I'm curious, what
>> mechanism does it use for trusted?
...
> PHP's "safe mode"
> http://www.php.net/manual/en/features.safe-mode.php
>
> ... which, now I realize, has been deprecated ...
Yeah, safe m
Thomas Kellerer wrote:
> Checking new data directory (c:/etc/Postgres9.0-beta3/datadir) ok
> ""c:/Program Files/PostgreSQL/8.4/bin/pg_ctl" -l "migrate.log" -D
> "c:/Daten/db/pgdata84" -o "-p 5432 -c autovacuum=off -c
> autovacuum_freeze_max_age=20 " start >> "nul" 2>&1" Trying to
> start old
Hi all,
I'm bordering on insanity, trying to track down an IDLE in transaction
problem.
This started a few weeks ago, and we are using a Java application,
running Spring 2.0, Hibernate 3.2 (with L2 cache), Postgres JDBC
8.3-604. We're also using pgBouncer (though, I've tried pgPool II and
gotten
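(A quick way to spot such sessions on 8.3, assuming sufficient privileges; just a sketch, and note the column names changed in later releases - 9.2+ uses pid and state instead:)

  psql -c "SELECT procpid, usename, xact_start, query_start
             FROM pg_stat_activity
            WHERE current_query = '<IDLE> in transaction';"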
On Tue, 2010-07-13 at 20:33 +0800, Craig Ringer wrote:
> That's a write rate of 34MB/min, or half a meg a second. Not pretty.
>
> Where's the load during the COPY? Mostly CPU? Or mostly disk I/O?
>
> Are you writing the output to the same disk the database is on? (Not
> that it should make this
On 13 July 2010 17:14, Duncavage, Daniel P. (JSC-OD211)
wrote:
> We are implementing Nagios on Space Station and want to use PostgreSQL to
> store the data on orbit and then replicate that db on the ground. The
> problem is, most people use MySQL with Nagios. We need an addon to ingest
> Nagios
Thomas Kellerer wrote:
> Thomas Kellerer, 12.07.2010 23:29:
> > Hi,
> >
> > I'm trying pg_upgrade on my Windows installation and I have two
> > suggestions for the manual regarding pg_upgrade:
> >
> > When specifying directories, pg_upgrade *requires* a forward slash as
> > the path separator. This
On 07/13/2010 09:27 AM, Andrew Falanga wrote:
On Jul 13, 9:12 am, adrian.kla...@gmail.com (Adrian Klaver) wrote:
Thank you both for your help. I look forward to accessing the table
tonight when I get home.
For my own sake, would there happen to be any documentation on-line
that I could read
On Jul 13, 9:12 am, adrian.kla...@gmail.com (Adrian Klaver) wrote:
> On Monday 12 July 2010 10:18:07 pm A. Kretschmer wrote:
>
> > No, the reason is another:
>
> > test=# create table "Stone"(id serial);
> > NOTICE: CREATE TABLE will create implicit sequence "Stone_id_seq" for
> > serial c
Duncavage, Daniel P. (JSC-OD211) wrote:
We are implementing Nagios on Space Station and want to use PostgreSQL
to store the data on orbit and then replicate that db on the ground.
The problem is, most people use MySQL with Nagios. We need an addon to
ingest Nagios data into PostgreSQL. It lo
We are implementing Nagios on Space Station and want to use PostgreSQL to store
the data on orbit and then replicate that db on the ground. The problem is,
most people use MySQL with Nagios. We need an addon to ingest Nagios data into
PostgreSQL. It looks like the most reasonable implementati
Thomas Kellerer wrote:
> Craig Ringer, 13.07.2010 05:11:
> > On 13/07/10 05:29, Thomas Kellerer wrote:
> >
> >> I would suggest either manually changing the autocommit mode from
> >> within pg_upgrade or adding a note in the manual to disable/remove this
> >> setting from psqlrc.conf before runnin
Thomas Kellerer wrote:
> Hi,
>
> I'm trying pg_upgrade on my Windows installation and I have two
> suggestions for the manual regarding pg_upgrade:
>
> When specifying directories, pg_upgrade *requires* a forward slash as
> the path separator. This is (still) uncommon in the Windows world
> (alt
ced45 wrote:
> I have trouble using the XPath name() function in an XML field.
> For example, when I execute the following query:
> SELECT XPATH('name(/*)', XMLPARSE(DOCUMENT '<unit>value</unit>'))
> I would like to get "unit", but I just get an empty array ({}).
> How can I get "unit" ?
AFAIK, this is not rel
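(Background, as far as I can tell: through 8.4, xpath() only handles node-set results, and name() produces a string, so the scalar result is discarded and an empty array comes back. Newer releases - 9.2, if memory serves - wrap scalar results, so the original query works there:)

  psql -c "SELECT xpath('name(/*)', XMLPARSE(DOCUMENT '<unit>value</unit>'));"
  # 9.2 and later: {unit}
  # 8.4 and 9.0:   {}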
On Monday 12 July 2010 10:18:07 pm A. Kretschmer wrote:
>
> No, the reason is another:
>
> test=# create table "Stone"(id serial);
> NOTICE: CREATE TABLE will create implicit sequence "Stone_id_seq" for
> serial column "Stone.id"
> CREATE TABLE
> test=*# \d Stone
> Did not find any relation named "
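(The cause: the table was created with a quoted mixed-case name, and unquoted identifiers fold to lowercase. Quoting the name in psql as well makes it work; a minimal sketch:)

  psql <<'SQL'
  CREATE TABLE "Stone"(id serial);
  \d Stone     -- not found: the unquoted name is folded to lowercase
  \d "Stone"   -- works
  SQL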
pasman pasman'ski wrote:
1. Planner will estimate 2 x statistics: time of query with cache empty
and with cache filled.
Requires planner to know something about the state of the cache; it
doesn't yet. Counting myself, there are four people I know who have been
tinkering with some aspect of th
Andrew Bartley wrote:
It seems that the underlying stats tables are reset on a periodic
basis. Can I stop this process? Is it a .conf setting?
Up until PostgreSQL 8.2 there's a setting named
stats_reset_on_server_start that clears everything when the server
starts:
http://www.postgresql.org/
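(A sketch of both approaches; the config line only exists on 8.2 and earlier, while the function is available in later releases too:)

  # postgresql.conf, 8.2 and earlier: keep statistics across restarts
  #   stats_reset_on_server_start = off
  # reset the current database's statistics by hand instead:
  psql -c "SELECT pg_stat_reset();"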
On 07/13/2010 10:35 AM, Andras Fabian wrote:
Hi Greg,
hmmm, that's true. Those settings for example were much higher too (on the Ubuntu
server) than on our old machine.
New machine has:
- dirty_ratio = 20 (old has 10)
- dirty_background_ratio = 10 (old has 5)
But obviously setting vm.zone_recla
Hi Greg,
hmmm, that's true. Those settings for example were much higher too (on the Ubuntu
server) than on our old machine.
New machine has:
- dirty_ratio = 20 (old has 10)
- dirty_background_ratio = 10 (old has 5)
But obviously setting vm.zone_reclaim_mode=0 "fixes" the problem too (which was
"
Andras Fabian wrote:
So the kernel function it is always idling on seems to be congestion_wait ...
Ugh, not that thing again. See
http://www.westnet.com/~gsmith/content/linux-pdflush.htm ; that chunk of
code has cost me weeks worth of "why isn't the kernel writing things the
way I asked it?
I think I have found the solution. Yes, I can now get consistently high
throughput with COPY-to-STDOUT, even if free -m only shows me 82 MBytes (so no,
this solution is not cleaning the cache). Always around 2 3/4 minutes.
I have compared all the /proc/sys/vm settings on my new machines and the ol
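(One way to capture the whole /proc/sys/vm state on each box so the machines can be diffed; a sketch, not the commands actually used in the thread:)

  for f in /proc/sys/vm/*; do printf '%s = %s\n' "${f##*/}" "$(cat "$f")"; done > vm.txt
  # collect vm.txt from both machines, then:  diff vm-old.txt vm-new.txt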
On Mon, Jul 12, 2010 at 07:46:30PM +0200, Pavel Stehule wrote:
> 2010/7/12 Josip Rodin :
> > On Mon, Jul 12, 2010 at 04:38:48PM +0200, Pavel Stehule wrote:
> >> 2010/7/12 Josip Rodin :
> >> > On Mon, Jul 12, 2010 at 02:06:43PM +0800, Craig Ringer wrote:
> >> >> Meh, personally I'll stick to the goo
Atul Goel wrote:
> > We are a data-based company and are migrating from Oracle to
> > PostgreSQL. For this purpose I am doing a POC for the same. We have a
> > business requirement to send the Data in XML files to our clients.
> > The file size of the XMLs is around 700MB and is growing.
> >
Thomas Kellerer, 12.07.2010 23:29:
Hi,
I'm trying pg_upgrade on my Windows installation and I have two
suggestions for the manual regarding pg_upgrade:
I found another problem and I'm not sure if this is a bug or a user error :)
My batch file to start pg_upgrade looks like this:
%~dp0server
[ Er, oops. I mucked up replying on a new thread - ugh! Sorry, should've
left well alone. ]
Atul Goel wrote:
> > We are a data-based company and are migrating from Oracle to
> > PostgreSQL. For this purpose I am doing a POC for the same. We have a
> > business requirement to send the Data in XML fi
Hi Craig,
Now I did the test with top too. Free RAM is around 900 MBytes. And there doesn't
seem to be other processes eating memory away (looked at it with top/htop). The
other procs having more RAM did have it before (mostly some postgres
processes), and don't grow it at an exorbitant rate. One cou
Sure I will take care.
Regards,
Atul Goel
-Original Message-
From: Craig Ringer [mailto:cr...@postnewspapers.com.au]
Sent: 13 July 2010 13:26
To: Atul Goel
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Writing XML files to Operating System
It's helpful if you can avoid replyin
> We are a data-based company and are migrating from Oracle to
> PostgreSQL. For this purpose I am doing a POC for the same. We have a
> business requirement to send the Data in XML files to our clients. The
> file size of the XMLs is around 700MB and is growing.
>
> I have been able to generate sample
Thomas Kellerer, 12.07.2010 23:29:
Hi,
I'm trying pg_upgrade on my Windows installation and I have two
suggestions for the manual regarding pg_upgrade:
When specifying directories, pg_upgrade *requires* a forward slash as
the path separator. This is (still) uncommon in the Windows world
(althou
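(For illustration, a pg_upgrade call spelled with forward slashes; the data directories are the ones from this thread, while the bin directories are guesses:)

  pg_upgrade -b "c:/Program Files/PostgreSQL/8.4/bin" ^
             -B "c:/Program Files/PostgreSQL/9.0/bin" ^
             -d c:/Daten/db/pgdata84 ^
             -D c:/etc/Postgres9.0-beta3/datadir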
It's helpful if you can avoid replying to an existing post to make a new
thread. Your mail client puts "In-reply-to" headers in the message, which
confuses threaded mail clients.
I've replied to this message properly in a new thread to avoid confusing
things too much.
--
Craig Ringer
On 13/07/10 19:49, pasman pasmański wrote:
> Hello.
>
> I propose 2 features for planner:
>
> 1. Planner will estimate 2 x statistics: time of query with cache empty
> and with cache filled.
How would it know what is in cache and how long it'd take to fetch
things into cache that aren't already
On 13/07/10 18:57, Andras Fabian wrote:
> OK, so here I should - maybe - look around the sockets. Hmm. Well, in the
> case of my experiments we are talking about Unix sockets, as I am only
> connecting locally to the server (no real networking involved). Are there
> any ideas where such a Unix
Hi All,
We are a data-based company and are migrating from Oracle to PostgreSQL. For
this purpose I am doing a POC for the same. We have a business requirement to
send the Data in XML files to our clients. The file size of the XMLs is around
700MB and is growing.
I have been able to generate sample
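(One way to write a large XML extract straight to a file without assembling it in a client application; query_to_xml() has been available since 8.3, and the database and table names below are made up. The document is still built in server memory, so for 700MB-and-growing, chunking with cursor_to_xml() may be the safer route:)

  psql -d mydb -A -t \
       -c "SELECT query_to_xml('SELECT * FROM client_data', true, false, '')" \
       > /tmp/export.xml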
Hmm, I continued some testing. Now, strangely, the congestion_wait occurs even
if free -m shows me about 1500 MBytes (before that I tried to "fill up" the
cache by doing some plain "cat from_here > to_there" ... which pushed free down
to 1500). But, interestingly, the value also doesn't get low
On 07/13/2010 07:29 AM, Andras Fabian wrote:
Now, I have found an unorthodox way to make a slow machine run COPY-to-STDOUT fast. I empty the cache memory
of the server, which makes "free" in "free -m" jump up to 14 GBytes (well, I just
noticed that most of the memory on the server is in "cach
Hello.
I propose 2 features for planner:
1. Planner will estimate 2 x statistics: time of query with cache empty
and with cache filled.
2. Two levels of planning: standard and long.
Long planning may be used when standard optimization
generates a slow plan, and may use advanced algebraic transformat
I have just rechecked one of our old generation machines, which never had/have
this problem (where the backup of a 100 GB database - to a 10 GByte dump - is
still going through in about 2 hours). They seem to have this high caching ratio
too (one of the machine says it has 15 GByte in cache out o
Now, I have found an unorthodox way to make a slow machine run COPY-to-STDOUT
fast. I empty the cache memory of the server, which makes "free" in "free -m"
jump up to 14 GBytes (well, I just noticed that most of the memory on the
server is in "cache" ... up to 22 GBytes). I just entered:
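(The command itself is cut off above; the usual way to drop the Linux page cache, presumably what was entered, is:)

  sync                                # flush dirty pages first
  echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes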
OK, so here I should - maybe - look around the sockets. Hmm. Well, in the case
of my experiments we are talking about Unix sockets, as I am only connecting
locally to the server (no real networking involved). Are there any ideas
where such a Unix socket could impose such extreme buffering?
On 13/07/2010 6:26 PM, Andras Fabian wrote:
Wait, now, here I see some correlation! Yes, it seems to be the memory! When I start my COPY-to-STDOUT
experiment I had some 2000 MBytes free (well, the server has 24 GByte ... maybe other PostgreSQL processes
used up the rest). Then, I could monitor v
Wait, now, here I see some correlation! Yes, it seems to be the memory! When I
start my COPY-to-STDOUT experiment I had some 2000 MBytes free (well, the server
has 24 GByte ... maybe other PostgreSQL processes used up the rest). Then, I
could monitor via "ll -h" how the file nicely grew (obviou
I had another observation before your last mail:
I have compared the /proc/pid/sched stats of a normal and a slow machine, and
there were two counters that really stuck out:
- se.sleep_start
- se.block_start
On a normal machine, both counters remained at 0 all the time while doing
COPY-to-STDOU
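(Those fields can be watched directly while the COPY runs; <pid> stands for the backend's process id, and the exact field names vary between kernel versions:)

  watch -n1 "grep -E 'se\.(sleep|block)_start' /proc/<pid>/sched"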
On 13/07/10 17:18, Andras Fabian wrote:
> But still no definitive clue about the reasons. What is also quite
> interesting is, that when I start my COPY-to-STDOUT experiment, it is running
> quite fast in the beginning. Sometimes up to 400 Mbytes, sometimes up to 1.4
> GBytes (I didn't find a r
Hi Craig,
Yes, a first look at /proc/pid/stack shows something that smells like memory
management ... at least up to the point where congestion_wait is called.
--
[] congestion_wait+0x70/0x90
[] shrink_inactive_list+0x667/0x7e0
[] sh
On 13/07/2010 3:53 PM, Andras Fabian wrote:
Hi Scott,
No, we didn't have a kernel update (it is still the stock Ubuntu 10.04 Server kernel ... 2.6.32.2).
And in the meantime - this morning - I have discovered that the rebooted server is again slowing
down! It is not at the level of the not-re
On 13/07/2010 4:05 PM, Andras Fabian wrote:
Craig, thanks for that ps tip (you think you have used ps for such a long
time, but it still has some new tricks available).
So, obviously, for some reason we are waiting too much for a backing device ...
whichever it is at the moment. Because, a
Thom Brown wrote:
>
> Have you tried:
>
> SELECT XPATH('fn:name(/*)', XMLPARSE(DOCUMENT '<unit>value</unit>'));
>
> Thom
>
>
Thanks for your help, but it gives the whole element and not only the markup
name.
Cedric
On 13 July 2010 09:03, ced45 wrote:
>
> Hi List,
>
> I have trouble using the XPath name() function in an XML field.
> For example, when I execute the following query:
>
> SELECT XPATH('name(/*)', XMLPARSE(DOCUMENT '<unit>value</unit>'))
>
> I would like to get "unit", but I just get an empty array ({}).
> How can
Craig, thanks for that ps tip (you think you have used ps for such a long
time, but it still has some new tricks available).
And here is the more readable line:
26390 congestion_wait          D    ?        00:00:26 postgres: postgres musicload_cache [local] COPY
So the kernel fun
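(The exact ps invocation isn't preserved in the thread; something like this prints the same pid/wchan/state columns:)

  ps -o pid,wchan:30,stat,tty,time,command -C postgres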
Hi List,
I have trouble using the XPath name() function in an XML field.
For example, when I execute the following query:
SELECT XPATH('name(/*)', XMLPARSE(DOCUMENT '<unit>value</unit>'))
I would like to get "unit", but I just get an empty array ({}).
How can I get "unit" ?
Thanks in advance,
Cedric
Hi Scott,
No, we didn't have a kernel update (it is still the stock Ubuntu 10.04 Server
kernel ... 2.6.32.2). And in the meantime - this morning - I have discovered
that the rebooted server is again slowing down! It is not at the level of the
not-rebooted-server (about 45 mins for the 3 Gig fi
On Tue, Jul 13, 2010 at 12:31 AM, Andras Fabian wrote:
> Hi Scott,
>
> Although I can't guarantee for 100% that there was no RAID rebuild at some
> point, I am almost sure that it wasn't the case. Two machines - the ones
> which were already in production - exhibited this problem. Both of them w
On Mon, Jul 12, 2010 at 11:54 PM, tamanna madaan
wrote:
> Hi Scott
>
> Thanks for your reply. I haven't yet tried updating to the latest 8.1.x version.
> Was just googling about this error and came across
> a link discussing the same issue :
>
> http://groups.google.com/group/pgsql.general/browse_th
Hi Scott
Thanks for your reply. I haven't yet tried updating to the latest 8.1.x version.
Was just googling about this error and came across
a link discussing the same issue :
http://groups.google.com/group/pgsql.general/browse_thread/thread/75df15648bcb502b/10232d1f183a640a?lnk=raot
In this, the
Hello
2010/7/13 Andrew Bartley :
> Thanks Alexander,
> Wish I had thought of that.
> I still need some way of finding redundant functions
> Thanks again
> Andrew
>
I used function source code injection for this task; see
http://www.postgres.cz/index.php/Injekt%C3%A1%C5%BE_zdrojov%C3%A9ho_k%C3%