On Mon, Feb 2, 2009 at 5:54 PM, rhubbell wrote:
>
> I want to find the length of those intervals.
>
> select timestamp, timeelapsed, bobble from perf where bobble like "pokerflat"
> and timeelapsed > 0.4;
>
> The records returned by that query will have an oldest and newest timestamp
> for
> whic
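The oldest and newest matching timestamps can be pulled in one pass with min()/max(). A minimal sketch, assuming the column and table names from the quoted query (note that SQL string literals need single quotes; a double-quoted "pokerflat" would be parsed as an identifier):

```sql
SELECT min("timestamp") AS oldest,
       max("timestamp") AS newest,
       max("timestamp") - min("timestamp") AS interval_length
FROM perf
WHERE bobble LIKE 'pokerflat'
  AND timeelapsed > 0.4;
```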
Based on suggestions from this list, I am trying to create a tsvector
column and index that, since it is perhaps the recheck and rebuilding
of all the vectors that is slowing things down. I don't understand
why a recheck is necessary on a gin index.
My update statement has been running
On Tue, Feb 3, 2009 at 3:33 PM, Alban Hertroys
wrote:
> On Feb 3, 2009, at 5:21 AM, Phoenix Kiula wrote:
>
>> On Mon, Jan 12, 2009 at 1:02 PM, Tom Lane wrote:
>>
>> .
>>
>>> regression=# update pg_attribute set atttypmod = 35+4 where attrelid =
>>> 't1'::regclass and attname = 'f1';
>>> U
Alex, can you somehow identify the document which has the problem with
long-word errors?
Also, if you have space on disk I'd recommend trying
select *, to_tsvector('english',full_listing) as flv from source_listings;
I don't remember if you gave us information about
your setup (pg version, OS, memor
I have a table with two fields:
user_id
col2
There is quite a pivotal SQL for our site that issues the query:
...WHERE user_id = 'xyz' and col2 = 'xyz'
Until now, I have had two indexes, one for user_id and one for col2.
Now, I built a compound index concurrently (user_id, col2). Afte
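The setup being described can be sketched as follows; the table name and column types are hypothetical, since the original definitions are not shown:

```sql
-- hypothetical table standing in for the one described
CREATE TABLE accesses (user_id text, col2 text);

-- built without blocking writes, as in the message
CREATE INDEX CONCURRENTLY idx_accesses_user_col2
    ON accesses (user_id, col2);

-- the pivotal query: the planner can satisfy both
-- conditions from the compound index alone
SELECT * FROM accesses WHERE user_id = 'xyz' AND col2 = 'xyz';
```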
On Tue, 3 Feb 2009, Igor Katson wrote:
On my question about doing a first name search by its synonyms, like
William Will Bill Billy
James Jim Jimmy
Oleg answered, that I can use dict_xsyn for that.
In the dict_xsyn manual, it is said that:
"This dictionary type replaces words with groups of
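Wiring dict_xsyn into a text search configuration can be sketched like this; the dictionary and rules-file names are hypothetical, and dict_xsyn must be installed from contrib:

```sql
-- reads $SHAREDIR/tsearch_data/name_synonyms.rules, where each
-- line groups a word with its synonyms, e.g.:
--   william will bill billy
--   james jim jimmy
CREATE TEXT SEARCH DICTIONARY name_synonyms (
    TEMPLATE = xsyn_template,
    RULES = 'name_synonyms'
);
```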
In my conf_pg, the autovacuum is on, so the DB should be (or I hope is
being) regularly vacuumed.
These are my settings:
work_mem = 20MB
temp_buffers = 4096
authentication_timeout = 10s
ssl = off
checkpoint_warning = 36
Oleg Bartunov wrote:
On Tue, 3 Feb 2009, Igor Katson wrote:
On my question about doing a first name search by its synonyms, like
William Will Bill Billy
James Jim Jimmy
Oleg answered, that I can use dict_xsyn for that.
In the dict_xsyn manual, it is said that:
"This dictionary type replaces
Phoenix Kiula schrieb:
> In my conf_pg, the autovacuum is on, so the DB should be (or I hope is
> being) regularly vacuumed.
>
> These are my settings:
>
>
> work_mem = 20MB
> temp_buffers = 4096
> authentication_
On Tue, Feb 03, 2009 at 08:46:37PM +0800, Phoenix Kiula wrote:
> The autovacuum was clearly not enough, so we also have a crontab that
> vacuums the tables every hour. This is PG 8.2.9.
How did you determine this? What was it not enough for? Which
tables? Why didn't you tune autovacuum differen
On Tue, Feb 3, 2009 at 9:09 PM, Andreas Wenk
wrote:
>
> Phoenix Kiula schrieb:
>> In my conf_pg, the autovacuum is on, so the DB should be (or I hope is
>> being) regularly vacuumed.
>>
>> These are my settings:
>>
>>
>> work_mem
Phoenix Kiula schrieb:
> On Tue, Feb 3, 2009 at 9:09 PM, Andreas Wenk
> wrote:
>>
>> Phoenix Kiula schrieb:
>>> In my conf_pg, the autovacuum is on, so the DB should be (or I hope is
>>> being) regul
I contacted the author of dict_xsyn, so he'll probably add an option to
support what you want.
Oleg
On Tue, 3 Feb 2009, Igor Katson wrote:
Oleg Bartunov wrote:
On Tue, 3 Feb 2009, Igor Katson wrote:
On my question about doing a first name search by its synonyms, like
William Will Bill Billy
Phoenix Kiula writes:
> autovacuum_vacuum_cost_delay = 20
> vacuum_cost_delay= 20
These say to sleep 20ms every few pages.
> These cron jobs are taking over 35 minutes for a vacuum! What's the
> use of a vacuum if it takes that long, and the DB performance is
> tragic in the meantim
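Since vacuum_cost_delay throttles manual VACUUMs as well, one way to keep an hourly cron VACUUM fast while leaving autovacuum throttled is to clear the delay for that session only. A sketch, with a hypothetical table name:

```sql
SET vacuum_cost_delay = 0;      -- this session's VACUUM runs unthrottled
VACUUM ANALYZE testimonials;    -- hypothetical table name
```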
Oleg Bartunov wrote:
I contacted the author of dict_xsyn, so he'll probably add an option to
support what you want.
Thanks for your cooperation, Oleg. For now, I will use that workaround.
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
> partitions. The OS should be able to deal with disk issues much more
> robustly than PG. If you were more or less worried about things I
As I see it now, software RAID is really what will suit everybody
here (including me) as well.
> I'm not sure if you're trying to solve the wron
I downloaded the hunspell en_GB from
http://wiki.services.openoffice.org/wiki/Dictionaries#English_.28AU.2CCA.2CGB.2CNZ.2CUS.2CZA.29
and when building the Ispell dictionary I got the following error
ERROR: invalid byte sequence for encoding "UTF8": 0xe9640a
HINT: This error can also happen if
2009/2/3 Phoenix Kiula
> I have a table with two fields:
>
> user_id
> col2
>
> There is quite a pivotal SQL for our site that issues the query:
>
> ...WHERE user_id = 'xyz' and col2 = 'xyz'
>
> Until now, I have had two indexes, one for user_id and one for col2.
>
> Now, I built a compound
On Tue, 2009-02-03 at 20:46 +0800, Phoenix Kiula wrote:
> This is PG 8.2.9
VACUUM spoils the cache in 8.2
What happens is that VACUUM reads through the whole table, knocking
other blocks out of cache. These then need to be read in again by other
processes, so there is some I/O thrashing. If your
Kokas Zsolt wrote:
>> I've got (second-hand) recommendations of
>> YAFFS, and have heard good things about JFFS2 as well.
>
> What I see from them is that they supported wear-leveling before
> wear-leveling was included into the drives.
AFAIK jffs2 and yaffs are really for simple (generally memor
James Dooley writes:
> and when building the Ispell dictionary I got the following error
> ERROR: invalid byte sequence for encoding "UTF8": 0xe9640a
What PG version? 8.3.x before 8.3.4 had some problems in this area.
regards, tom lane
It's postgresql-8.3.5-2 (windows)
On Tue, Feb 3, 2009 at 4:37 PM, Tom Lane wrote:
> James Dooley writes:
> > and when building the Ispell dictionary I got the following error
>
> > ERROR: invalid byte sequence for encoding "UTF8": 0xe9640a
>
> What PG version? 8.3.x before 8.3.4 had some pr
James,
you forgot to convert files to UTF8.
iconv -f ISO8859-1 -t utf8 en_GB.dic > en_gb.dict
iconv -f ISO8859-1 -t utf8 en_GB.aff > en_gb.affix
Oleg
On Tue, 3 Feb 2009, James Dooley wrote:
I downloaded the hunspell en_GB from
http://wiki.services.openoffice.org/wiki/Dictionaries#English_.28A
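After the iconv conversion, the converted files go into the tsearch_data directory and the dictionary is created from them. A sketch following the file names from Oleg's commands (the dictionary name is hypothetical):

```sql
-- DictFile/AffFile refer to en_gb.dict and en_gb.affix
-- in $SHAREDIR/tsearch_data
CREATE TEXT SEARCH DICTIONARY en_gb_hunspell (
    TEMPLATE  = ispell,
    DictFile  = en_gb,
    AffFile   = en_gb,
    StopWords = english
);
```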
On Tue, Feb 03, 2009 at 12:03:11PM +0100, Kokas Zsolt wrote:
> The business side wants to make it as secure as possible,
> meaning that the CF card will have two partitions, and the DB should
> be mirrored or distributed somehow on these two partitions, in case of a
> one-point disk-error the
On Tue, 3 Feb 2009 02:08:54 -0700
Scott Marlowe wrote:
> On Mon, Feb 2, 2009 at 5:54 PM, rhubbell wrote:
> >
> > I want to find the length of those intervals.
> >
> > select timestamp, timeelapsed, bobble from perf where bobble like
> > "pokerflat"
> > and timeelapsed > 0.4;
> >
> > The records
On my question about doing a first name search by its synonyms, like
William Will Bill Billy
James Jim Jimmy
Oleg answered, that I can use dict_xsyn for that.
In the dict_xsyn manual, it is said that:
"This dictionary type replaces words with groups of their synonyms, and
so makes it
possibl
I've also found other queries that were really fast with MySQL are
really slow in Postgres. I'm hoping that is a matter of tuning.
Overall I'm finding the query times to be extremely unpredictable.
I added a slow query logger to my application that also does an
explain. Check these out.
Suppose I have this table:
create table test(id int, x varchar)
And I want to find rows whose x contains at least one backslash. The backslash
has to be escaped according to section 9.7.1 of the postgres docs.
select *
from test
where x like E'%\\%'
I'll skip the results of m
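To match a literal backslash with LIKE, the character has to survive two levels of escaping: once in the E'' string literal and once in the LIKE pattern itself, so four backslashes are needed. A sketch against the table defined above:

```sql
-- E'%\\\\%' is the four-character string %\\% after literal parsing;
-- LIKE then reads \\ as one escaped literal backslash
SELECT *
FROM test
WHERE x LIKE E'%\\\\%';
```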
On Tue, Feb 03, 2009 at 03:36:15PM +0100, Kokas Zsolt wrote:
> > I've got (second-hand) recommendations of
> > YAFFS, and have heard good things about JFFS2 as well.
>
> What I see from them is that they supported wear-leveling before
> wear-leveling was included into the drives.
Or for smaller,
On Wed, Feb 04, 2009 at 12:35:33AM +0900, Craig Ringer wrote:
> BTW, I'm not sure how much good OS-level RAID on a single device will do
> for you. Linux will try to reset the interface to the drive on I/O
> errors, will hang for long periods waiting for reads, etc and I wouldn't
> be at all surpri
On Tuesday 03 February 2009, Phoenix Kiula wrote:
> The autovacuum was clearly not enough, so we also have a crontab that
> vacuums the tables every hour. This is PG 8.2.9.
How did you determine it wasn't enough? As others have stated, you're
causing your own slowdown by running vacuum so much o
On Tue, Feb 3, 2009 at 5:18 AM, Phoenix Kiula wrote:
> I have a table with two fields:
>
> user_id
> col2
>
> There is quite a pivotal SQL for our site that issues the query:
>
> ...WHERE user_id = 'xyz' and col2 = 'xyz'
>
> Until now, I have had two indexes, one for user_id and one for col2
On Thu, Jan 29, 2009 at 5:43 PM, David Fetter wrote:
>>
>> > * CTEs not yet integrated into the adjacency lists in pg_catalog,
>> > etc.
>>
>> I'm not sure what you're referring to here either.
>
> The DAG structures in pg_depend leap to mind. There's no view that
> shows the actual dependencies,
Alex,
it looks like you need to read the documentation and tune postgresql.conf.
Currently, it does not look good. I have no time to guide you, so search the
archives for tuning postgresql.conf; this has been discussed many times.
Oleg
On Wed, 4 Feb 2009, Alex Neth wrote:
I've also found other queries that were
Jack Orenstein wrote:
Can someone help in my understanding of what is correct
behavior (in the literal and bound variable cases)?
Is there a bug in the driver? in postgres? in the
docs? Or in my understanding?
LIKE E'%\\%' will match a string that ends with a percent sign, not a
stri
Hi!
I'd need some advice.
I'm working in a quite special field: I have to set up an embedded DB
which will store logs (measured values) and provide configuration to the
machine; this configuration can also be changed.
The system will consist of a CF card (with wear leveling) and an Intel
atom CPU
Hi,
on a Linux system, try software RAID1 for the pg data. Check whether pg is
the right choice for your needs here; maybe flat files for config+log are
less problematic.
regards
thomas
Kokas Zsolt schrieb:
Hi!
I'd need some advice.
I'm working on a quite special field, I have to set up an embedded D
Jack Orenstein writes:
> I wrote a JDBC test program, with two variants.
My Java is pretty weak, but doesn't it think that backslash is special
in string literals? If I assume that each pair of \'s in your examples
went to the database as one \, then the results are what I'd expect.
Thanks, Gregory and Simon, for the very useful posts.
I have increased the vacuum_cost_limit to 2000 for now, just to see if
that has an impact. Hopefully positive.
Next on my list is to be able to easily upgrade to 8.3, but Slony
seemed like a daunting task the last time I tried. I am on 8.2.9,
Tom Lane wrote:
My Java is pretty weak, but doesn't it think that backslash is special
in string literals? If I assume that each pair of \'s in your examples
went to the database as one \, then the results are what I'd expect.
regards, tom lane
Daniel Verite wrote:
>
Thanks for the suggestions.
Following is the SQL query. Actually, this is not merely a DROP INDEX
question. I am also surprised that this straight index query takes
more time than it used to! It would be under 1 second because it's a
one-row conditional match, but now it takes anywhere between 5 t
Here's a few more pet peeves. I'm not sure if any of these are known
bugs or just me being picky.
--Chris
--
1. Having to rewrite entire tables out to disk the first time I scan
them, for example:
CREATE TABLE t1 AS ...; -- writes 100 GB to d
On Tue, Feb 03, 2009 at 05:48:51PM +, Greg Stark wrote:
> On Thu, Jan 29, 2009 at 5:43 PM, David Fetter wrote:
> >>
> >> > * CTEs not yet integrated into the adjacency lists in pg_catalog,
> >> > etc.
> >>
> >> I'm not sure what you're referring to here either.
> >
> > The DAG structures in pg
What about PostgreSQL's inability to re-order columns?
Please don't point out that I shouldn't rely on things being in a
certain order when I SELECT * FROM table. I'm well aware of that, I
just generally have an aesthetic preference for a table's columns
being in a certain order.
Regards,
Peter G
On Feb 3, 2009, at 12:41 PM, Peter Geoghegan wrote:
What about PostgreSQL's inability to re-order columns?
Please don't point out that I shouldn't rely on things being in a
certain order when I SELECT * FROM table. I'm well aware of that, I
just generally have an aesthetic preference for a tab
> char c = PG_GETARG_CHAR(1);
>
for whatever reason, that doesn't return the real char that was passed in.
--
GJ
Grzegorz Jaśkiewicz wrote:
> > char c = PG_GETARG_CHAR(1);
>
> for whatever reason, that doesn't return the real char that was passed in.
Yeah ... try DatumGetBpCharP instead. PG_GETARG_CHAR is for type "char"
with quotes, which is a completely different thing.
--
Alvaro Herrera Valdiv
Alvaro Herrera writes:
> Grzegorz Jaśkiewicz wrote:
>> for whatever reason, that doesn't return the real char that was passed in.
> Yeah ... try DatumGetBpCharP instead. PG_GETARG_CHAR is for type "char"
> with quotes, which is a completely different thing.
Or maybe the C code does just what h
Alvaro Herrera writes:
> Grzegorz Jaśkiewicz wrote:
>> looks like it really has to be defined with "char" in double quotes. I
>> thought just char is enough...
> They're different types.
You know, maybe we should stop holding our noses and do something about
this old gotcha. That type's not go
Grzegorz Jaśkiewicz wrote:
> looks like it really has to be defined with "char" in double quotes. I
> thought just char is enough...
They're different types.
--
Alvaro Herrera http://www.flickr.com/photos/alvherre/
"To create is as difficult as to be free" (Elsa Triolet)
On Tue, Feb 3, 2009 at 4:28 PM, Tom Lane wrote:
> Alvaro Herrera writes:
>> Grzegorz Jaśkiewicz wrote:
>>> looks like it really has to be defined with "char" in double quotes. I
>>> thought just char is enough...
>
>> They're different types.
>
> You know, maybe we should stop holding our noses a
it's defined:
create or replace function filter_text(text, char) returns text as
'test_proc.so' language 'c';
which leads me to another question.
It seems that I have to leave psql and come back for the new version to be
loaded. (That's on 8.4 though; I don't have 8.3 at home.)
And also that 'replace'
looks like it really has to be defined with "char" in double quotes. I
thought just char is enough...
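The two spellings name different types: unquoted char is an alias for character(1) (the bpchar type), while "char" is a one-byte internal type, which is what PG_GETARG_CHAR expects. A sketch of the distinction, reusing the function definition from the thread:

```sql
-- character(1): passed to C code as a BpChar datum
-- (use DatumGetBpCharP on the C side)
CREATE OR REPLACE FUNCTION filter_text(text, char)
    RETURNS text AS 'test_proc.so' LANGUAGE c;

-- one-byte "char" type: matches PG_GETARG_CHAR on the C side
CREATE OR REPLACE FUNCTION filter_text(text, "char")
    RETURNS text AS 'test_proc.so' LANGUAGE c;
```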
Jack Orenstein wrote:
In this code, The string passed to JDBC is %\% (percent, one
backslash, percent), and no rows are returned.
It appears as if the one backslash is being treated as an
escape for the %.
That's right. So \% as a pattern matches a percent, and \\ as a pattern
matche
On Mon, Feb 2, 2009 at 5:48 PM, Gregory Stark wrote:
> Christopher Browne writes:
>
>> - Managing jobs (e.g. - "pgcron")
>
> A number of people have mentioned a job scheduler. I think a job scheduler
> entirely inside Postgres would be a terrible idea.
I think it's a terrible idea to put words i
On Tue, Feb 3, 2009 at 9:28 PM, Tom Lane wrote:
> On the other hand, that might be more trouble than it's worth. Even
> with a domain alias, there'd be a nontrivial chance of breaking apps
> that look at the char columns of the system catalogs.
I have to apologize, it is clearly written in quo
If I may know ?
--
GJ
On Feb 2, 2009, at 12:23 AM, durumdara wrote:
Hi!
On 2009.01.31 10:13, Erik Jones wrote:
On Jan 30, 2009, at 11:37 AM, durumdara wrote:
The main viewpoints:
- quick (re)connect - because mod_python basically does not store
database connections persistently
mod_python is not a d
Grzegorz Jaśkiewicz writes:
> If I may know ?
A re-wrap
http://archives.postgresql.org/pgsql-committers/2009-01/msg00387.php
and some communications glitches ...
regards, tom lane
Merlin Moncure writes:
> On Tue, Feb 3, 2009 at 4:28 PM, Tom Lane wrote:
>> You know, maybe we should stop holding our noses and do something about
>> this old gotcha. That type's not going away anytime soon, but could we
>> rename it to char1 or something like that? (With some sort of backward
Gregory Stark wrote:
So, what do people say? Is Postgres perfect in your world or does it do some
things which rub you the wrong way?
As a further take on the auto-tuning others have mentioned,
how about some auto-indexing?
- Jeremy
I need to return all rows in a table where one of the columns 'name' is 37+
characters. In postgres, is there a function to get the length of the
columns contents?
Thanks!
On Tue, 2009-02-03 at 18:44 -0500, blackwater dev wrote:
> I need to return all rows in a table where one of the columns 'name'
> is 37+ characters. In postgres, is there a function to get the length
> of the columns contents?
>
http://www.postgresql.org/docs/8.3/static/functions-string.html
len
blackwater dev wrote on 04.02.2009 00:44:
I need to return all rows in a table where one of the columns 'name' is
37+ characters. In postgres, is there a function to get the length of
the columns contents?
Check out char_length()
http://www.postgresql.org/docs/8.3/static/functions-string.html
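Both answers point at the string functions page; the query itself is a one-liner. A sketch, with a hypothetical table name:

```sql
SELECT *
FROM people                       -- hypothetical table
WHERE char_length(name) >= 37;
```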
On Tue, 2009-02-03 at 15:03 -0500, Chris Mayfield wrote:
> 1. Having to rewrite entire tables out to disk the first time I scan
> them, for example:
>
> CREATE TABLE t1 AS ...; -- writes 100 GB to disk
> CREATE INDEX i1 ON t1 ...; -- rewrites 100 GB to disk
>
> The main issue is setting the hi
On Mon, 2009-02-02 at 22:48 +, Gregory Stark wrote:
> Christopher Browne writes:
>
> > - Managing jobs (e.g. - "pgcron")
>
> A number of people have mentioned a job scheduler. I think a job scheduler
> entirely inside Postgres would be a terrible idea.
You probably should explain why you t
On Feb 1, 2009, at 4:47 AM, Thomas Kellerer wrote:
Hi,
(Note: I have never used log shipping before, I'm just interested in
the concepts, so I'm might be missing a very important aspect)
I was reading the blog entry about HA and warm standby:
http://scale-out-blog.blogspot.com/2009/02/simp
Phoenix Kiula writes:
> Index Scan using new_idx_testimonials_userid on testimonials
> (cost=0.00..157.78 rows=1 width=9) (actual time=8809.715..8809.715
> rows=0 loops=1)
>Index Cond: ((user_id)::text = 'superman'::text)
>Filter: ((title_encrypted)::text =
> 'b333dc1b0992cb8c70b58a41821
I finally got around to building a configuration, but the results are not
good at all and a bit odd.
Here is what I did:
I built the configuration with the hunspell + an Arabic simple dictionary
(with just the stop words as an input) because I noticed that words not
recognized will still get returned
On Tue, 3 Feb 2009, Greg Stark wrote:
Notably, there's no indication of which lock wait queue the ungranted
locks are in. That means to find out what's blocking a lock would
require comparing every other lock to it and deciding whether it
conflicts.
The tool I find myself wanting here would pa
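The "compare every other lock" approach can be sketched as a self-join on pg_locks, pairing each ungranted lock with granted locks on the same object held by other backends. This finds conflict candidates, not the precise queue order:

```sql
SELECT waiting.pid   AS waiting_pid,
       holder.pid    AS holding_pid,
       waiting.locktype,
       waiting.mode  AS waiting_mode,
       holder.mode   AS held_mode
FROM pg_locks waiting
JOIN pg_locks holder
  ON holder.granted
 AND holder.pid <> waiting.pid
 AND holder.locktype = waiting.locktype
 AND holder.relation IS NOT DISTINCT FROM waiting.relation
 AND holder.database IS NOT DISTINCT FROM waiting.database
WHERE NOT waiting.granted;
```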
Hey folks
I am trying to write a simple function that would filter out a char
from text/string. It's been a while since I last wrote a C
function for postgresql (8.1), and a few things are gone in the API there.
Can someone tell me what's wrong with that function please ?
#include "postgres.h"
#inclu
On Tue, Feb 3, 2009 at 7:04 PM, David Fetter wrote:
>
>> Notably, there's no indication of which lock wait queue the
>> ungranted locks are in. That means to find out what's blocking a
>> lock would require comparing every other lock to it and deciding
>> whether it conflicts.
>
> Interesting :)
On Tue, Feb 3, 2009 at 10:06 PM, Simon Riggs wrote:
>
> On Tue, 2009-02-03 at 15:03 -0500, Chris Mayfield wrote:
>
>> 1. Having to rewrite entire tables out to disk the first time I scan
>> them, for example:
>>
>> CREATE TABLE t1 AS ...; -- writes 100 GB to disk
>> CREATE INDEX i1 ON t1 ...; -- r
On Tue, 3 Feb 2009, Jeremy Harris wrote:
As a further take on the auto-tuning others have mentioned,
how about some auto-indexing?
That's a significantly harder problem than auto-tuning.
http://it.toolbox.com/blogs/database-soup/finding-useless-indexes-28796 is
a good intro to a subset of th
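The core idea in the linked article is visible directly in the statistics views; a sketch that lists indexes which have never been scanned (statistics must have been collected long enough to be meaningful):

```sql
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY schemaname, relname;
```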
On Tue, Feb 3, 2009 at 9:27 PM, Simon Riggs wrote:
>
> On Mon, 2009-02-02 at 22:48 +, Gregory Stark wrote:
>> Christopher Browne writes:
>>
>> > - Managing jobs (e.g. - "pgcron")
>>
>> A number of people have mentioned a job scheduler. I think a job scheduler
>> entirely inside Postgres would
Greg Stark wrote:
My only point was that this would be very different from Oracle-style
job scheduler implemented *inside* the database using
database-specific code and requiring database-specific code to
interact with the outside world. That's just reimplementing the whole
world using the databa
On Tue, Feb 3, 2009 at 8:58 PM, Guy Rouillier wrote:
> Greg Stark wrote:
>>
>> My only point was that this would be very different from Oracle-style
>> job scheduler implemented *inside* the database using
>> database-specific code and requiring database-specific code to
>> interact with the outsi
Guy Rouillier wrote:
And someone else might want to play that game inside PG ;).
In fact, given how extensible PG is in other ways, it's surprising there
hasn't been more call for it. Perhaps the fact that there's presently
no facility for stored procedures to easily manage transactions has
On Wed, Feb 4, 2009 at 5:13 AM, Tom Lane wrote:
> Phoenix Kiula writes:
>> Index Scan using new_idx_testimonials_userid on testimonials
>> (cost=0.00..157.78 rows=1 width=9) (actual time=8809.715..8809.715
>> rows=0 loops=1)
>>Index Cond: ((user_id)::text = 'superman'::text)
>>Filter: ((
Craig Ringer wrote:
An internal job scheduler with the ability to fire jobs on certain
events as well as on a fixed schedule could be particularly handy in
conjunction with true stored procedures that could explicitly manage
transactions.
Craig, what kind of "events" are you thinking about?
Guy Rouillier wrote:
Craig Ringer wrote:
An internal job scheduler with the ability to fire jobs on certain
events as well as on a fixed schedule could be particularly handy in
conjunction with true stored procedures that could explicitly manage
transactions.
Craig, what kind of "events" are
On Tue, Feb 3, 2009 at 10:09 PM, Craig Ringer
wrote:
> Guy Rouillier wrote:
>>
>> Craig Ringer wrote:
>>>
>>> An internal job scheduler with the ability to fire jobs on certain events
>>> as well as on a fixed schedule could be particularly handy in conjunction
>>> with true stored procedures that
On Fri, Jan 23, 2009 at 02:18, David Goodenough
wrote:
>
> Is there a definitive HOWTO that I can follow? If not, does someone
> have a set of instructions that will work?
>
> If it matters I am running under Debian.
I did it once for a very large db (large for me was 5GB) and converted
it from EU