Hi Aleksey,
I once had the same problem. In my case it was because most of my tables
use the text datatype.
When I changed the field type to character varying(1000), the database size
was reduced significantly.
Unfortunately, I haven't investigated further, but it looks like it comes down
to how Postgres stores data.
Regards,
Nur
On 03/12/12 12:06 AM, Nur Hidayat wrote:
> I once had the same problem. In my case it was because most of my tables
> use the text datatype.
> When I changed the field type to character varying(1000), the database size
> was reduced significantly.
> Unfortunately, I haven't investigated further, but it looks like how
Yes, I am aware of that, but that's the fact I'm facing
Right now I'm happy enough that my system runs well without eating up my drive :)
I'll investigate more later when time is available :)
Cheers,
Nur Hidayat
.::.
Sent from my BlackBerry®
powered by The ESQ Way 165
-Original Message-
From
FYI, after I changed the text fields into character varying, I vacuumed the whole
database, resulting in a much smaller database size
Cheers,
NH
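The change being described might be sketched like this, assuming a hypothetical table `docs` with a text column `body` (names invented for illustration; note the conversion fails if any value exceeds 1000 characters):

```sql
-- Change the column type; this rewrites every row of the table.
ALTER TABLE docs ALTER COLUMN body TYPE character varying(1000);

-- Reclaim the space freed by the rewrite. VACUUM FULL rewrites the
-- table into a new, compact file and returns disk space to the OS.
VACUUM FULL docs;
```

As others note later in the thread, the size reduction likely comes from the row rewrite itself rather than from varchar being stored more compactly than text.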
-Original Message-
From: "Nur Hidayat"
Date: Mon, 12 Mar 2012 08:18:09
To: John R Pierce;
Selena Deckelmann wrote:
> On Thursday, March 8, 2012 at 11:40 AM, Stefan Keller wrote:
>> I do have a student who is interested in participating at the Google
>> Summer of Code (GSoC) 2012
>> Now I have the "burden" to look for a cool project... Any ideas?
>
> Also those who are on this thread, w
Tom,
We made most of our text and varchar columns citext data types so that we
could do case-insensitive searches. Is this going to negate most of the
index searches? It appeared to our DBA that it would be easier to use the
citext data type than to have to use ILIKE.
Michael Gould
Intermodal S
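A short sketch of the citext approach under discussion (table and column names are hypothetical):

```sql
CREATE EXTENSION IF NOT EXISTS citext;

CREATE TABLE users (
    id    serial PRIMARY KEY,
    email citext UNIQUE   -- comparisons on citext are case-insensitive
);

INSERT INTO users (email) VALUES ('Alice@Example.com');

-- Matches despite the case difference, and an ordinary btree index
-- on the citext column can be used (unlike ILIKE on text/varchar).
SELECT id FROM users WHERE email = 'alice@example.com';
```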
On Sat, 2012-03-10 at 14:08 -0800, PgSQL wrote:
> >>>If the OP can identify and download the relevant .rpm file
>
Have you checked the CentOS bugzilla to see if the yum failure has been
reported? You should raise a bug report if it isn't there: if nobody
reports a bug, it's unlikely to get fixed.
>
Thanks Martin.
Sure! I downloaded these packages to the same folder, but I used:
rpm -Uvh *.rpm
to install all of these packages.
PS:
My English is poor; my language is Spanish.
Thank you
2012/3/12, Martin Gregorie-2 [via PostgreSQL]
:
>
>
> On Sat, 2012-03-10 at 14:08 -0800, PgSQL wrote:
>> >>>If the OP
On 12 March 2012 09:20, Nur Hidayat wrote:
> FYI, after I changed the text fields into character varying, I vacuumed the
> whole database, resulting in a much smaller database size
What I think happened in your case is that, because of the
data-type change, every row in the table got rewritten to a n
Where can I check in which version the default setting for fsync was
changed?
thanks
--
Ivan Sergio Borgonovo
http://www.webthatworks.it
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Hi,
Writing a client frontend, I started with MySQL and migrated to Postgres. Now I've
found out that there is a difference in the way joined queries are returned:
The query
SELECT
persons.id,persons.lastname,persons.firstname,persons.salutation,persons.title,addresses.address1,addresses.address2,
Alexander Reichstadt writes:
> Is there a way or some setting on postgres server to tell postgres to use the
> fieldnames exactly as provided in the select?
You didn't say exactly which "API" you're using, but in libpq you could
look at the PQftable and PQftablecol column property inquiry functi
PGSQLKit for Cocoa.
Am 12.03.2012 um 18:09 schrieb Tom Lane:
> Alexander Reichstadt writes:
>> Is there a way or some setting on postgres server to tell postgres to use
>> the fieldnames exactly as provided in the select?
>
> You didn't say exactly which "API" you're using, but in libpq you c
Hello All,
I've looked through the docs, but I'm unable to find complete answers to my
questions, so thanks in advance if you can lend any expertise.
Here's the situation I'm in (always a good opener, right? :) ):
We've got a postgres database with *a lot* of data in one table. On the
order of
Hi,
the following statement worked on mysql but gives me an error on postgres:
column "addresses.address1" must appear in the GROUP BY clause or be used in an
aggregate function
I guess I am doing something wrong. I read the web answers, but none of them
seem to meet my needs:
SELECT
compani
>
> We made most of our text and varchar columns citext data types so that we
> could do case-insensitive searches. Is this going to negate most of the
> index searches? It appeared to our DBA that it would be easier to use the
> citext data type than to have to use ILIKE.
>
In the same vein...
D
> However, given the size of this table, I have no idea how long something
> like this might take. In general I've had a tough time getting feedback
> from postgres on the progress of a query, how long something might take,
> etc.
>
You can always do this which would result in minimum hassles.
Tim,
It is my understanding that, since the citext extension is available, it
gives you what you're asking for, and at least at this point it isn't
going to be part of the core.
Michael Gould
Intermodal Software Solutions, LLC
904-226-0978
Original Message
Subject: Re: [GENER
Hi,
You can use one of the window functions:
http://www.postgresql.org/docs/9.1/static/tutorial-window.html
http://www.postgresql.org/docs/9.1/static/functions-window.html
This could be rank() in a subquery or first_value(value any), but there could
be a performance issue;
another solution could be boolea
So the MySQL way of handling GROUP BY seems to be non-standard.
What works for Postgres is the DISTINCT ON (fieldname) approach.
Thanks
Am 12.03.2012 um 20:35 schrieb Alexander Reichstadt:
> Hi,
>
> the following statement worked on mysql but gives me an error on postgres:
>
> column "addresses.add
Instead of the joins you can use a subquery to get the first address.
Or you can do the joins without the group by and use row_number()
over(partition by companies.id) on the select list to label each company
address with a number starting at 1. You can just keep rows that have
row_number = 1.
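Sketches of both approaches, assuming hypothetical companies/addresses tables joined on company_id:

```sql
-- DISTINCT ON: one address per company, choosing the lowest address id.
SELECT DISTINCT ON (c.id) c.id, c.name, a.address1
FROM companies c
JOIN addresses a ON a.company_id = c.id
ORDER BY c.id, a.id;

-- Equivalent with row_number(): label each company's addresses and keep the first.
SELECT id, name, address1
FROM (
    SELECT c.id, c.name, a.address1,
           row_number() OVER (PARTITION BY c.id ORDER BY a.id) AS rn
    FROM companies c
    JOIN addresses a ON a.company_id = c.id
) sub
WHERE rn = 1;
```

Ordering by the address id is just one way to pick "the first address entered"; any deterministic ORDER BY inside the partition works.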
Thanks, I just posted my response to my own question for the archives. I take
it also that GROUP BY is faster than DISTINCT ON. If it is a substantial
performance gain, I have to work on this some more. A subquery, I would expect,
would be too much of a drag, so for all keystroke-updated list-tables th
On Mon, Mar 12, 2012 at 1:35 PM, Alexander Reichstadt wrote:
> Hi,
>
> the following statement worked on mysql but gives me an error on postgres:
>
> column "addresses.address1" must appear in the GROUP BY clause or be used in
> an aggregate function
>
> I guess I am doing something wrong. I read
> It is my understanding that, since the citext extension is available, it
> gives you what you're asking for, and at least at this point it isn't
> going to be part of the core.
>
For me it's more of a workaround than a solution, but yes, probably good
enough. Collation is more subtle than case inse
2012/3/12 Tim Uckun :
>> It is my understanding that, since the citext extension is available, it
>> gives you what you're asking for, and at least at this point it isn't
>> going to be part of the core.
>>
>
> For me it's more of a workaround than a solution but yes probably good
> enough. Collatio
On 03/12/12 1:25 PM, Tim Uckun wrote:
Create a new bigint field.
Copy all the IDs to it.
Index it in the background.
At a frequency of your choosing, sync the id field to the new field to keep it up to date.
At a time of your choosing, set the default for the new field to be
serial, starting at max(id).
Drop th
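The steps above might look roughly like this (table and index names invented; an untested sketch, and CREATE INDEX CONCURRENTLY cannot run inside a transaction block):

```sql
-- 1. Add the new bigint column and copy the existing ids.
ALTER TABLE big_table ADD COLUMN new_id bigint;
UPDATE big_table SET new_id = id;

-- 2. Index it without blocking writes.
CREATE UNIQUE INDEX CONCURRENTLY big_table_new_id_idx ON big_table (new_id);

-- 3. Give it a sequence default starting past the current max(id).
CREATE SEQUENCE big_table_new_id_seq;
SELECT setval('big_table_new_id_seq', (SELECT max(id) FROM big_table));
ALTER TABLE big_table ALTER COLUMN new_id
    SET DEFAULT nextval('big_table_new_id_seq');
```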
I guess I lack the knowledge to integrate your answer into my query. Actually,
I'd prefer to always see the first address entered unless there is a
where-clause added. Not sure how this works out then, and I haven't tested. But
given the initial query extended by DISTINCT ON, it would be like so:
>
If I don't change the data type to character varying, vacuuming the database
doesn't reduce the database size
-Original Message-
From: Alban Hertroys
Date: Mon, 12 Mar 2012 16:43:49
To:
Cc: John R Pierce;
Subject: Re: [GENERA
On Mon, Mar 12, 2012 at 3:19 PM, Alexander Reichstadt wrote:
> But where would I insert the max(address) piece?
>
Just put max() or min() around any field in the select list that's not
in the GROUP BY clause.
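Applied to the query from earlier in the thread, that might look like this (column names illustrative):

```sql
-- Non-grouped columns are wrapped in an aggregate, so each company
-- yields one row with one (arbitrary but deterministic) address.
SELECT companies.id, companies.name, max(addresses.address1)
FROM companies
JOIN addresses ON addresses.company_id = companies.id
GROUP BY companies.id, companies.name;
```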
On 03/12/12 2:28 PM, Nur Hidayat wrote:
If I don't change the data type to character varying, vacuuming the database
doesn't reduce the database size
Changing the data type required every tuple to get rewritten. A
VACUUM FULL or a CLUSTER would likely have achieved the same or better
reduct
I have a table that is around 20 GB, so I need to optimize as best as possible
the matching with another table on keywords across multiple fields. I have
around 10 fields that have keywords or phrases delimited with the "or"
operator "|". So it would be in the form of "a | b | and jack | cd
Tim,
Commando. I like it. Thanks a ton for that suggestion. I'd still like to
hear if anyone has a good way to estimate the performance of these
operations, but I'll explore what it would mean to do exactly that.
John: thankfully this is a table without any fks in, although it is indexed
to he
Hi all
2011/7/12 Chris Travers :
> I am not convinced that VoltDB is a magic bullet either. I don't
I have the chance to help prepare an interview with Mike
Stonebraker, to be published at www.odbms.org.
I'd really like to know if he is up to date on how Postgres performs
these days and how he th
>> From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Jim Ostler
>> Sent: Monday, March 12, 2012 6:57 PM
>> To: pgsql-general@postgresql.org
>> Subject: [GENERAL] Matching on keyword or phrases within a field that is
delimited with an "or" operator "|
Trying to update a varchar numeric-string column
by converting it to int, adding a numeric value, and inserting it back
as a varchar.
Having trouble with the cast.
On 03/12/2012 06:28 PM, Bret Stern wrote:
trying to update a varchar numeric string column
by converting it to int, adding a numeric value and insert it back
as a varchar
Having trouble with cast
Possibly having trouble with two casts: one from string to int, one
from int to string? Your SQL
On Mon, 2012-03-12 at 15:57 -0700, Jim Ostler wrote:
> I have a table that is around 20 GB, so I need to optimize as best as
> possible the matching with another table on keywords across multiple
> fields. I have around 10 fields that have keywords or phrases
> delimited with the "or" operator "|"
Bret Stern wrote:
>
> trying to update a varchar numeric string column
> by converting it to int, adding a numeric value and insert it back
> as a varchar
>
> Having trouble with cast
I assume you are doing an update as opposed to an insert. You use
both above (and both numeric and int as well).
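A sketch of what such an UPDATE might look like, assuming a hypothetical table `items` with a varchar column `code` holding numeric strings:

```sql
-- Cast to int, add, and cast back to varchar in one statement.
UPDATE items
SET code = (code::int + 10)::varchar
WHERE code ~ '^[0-9]+$';   -- guard: skip rows that aren't purely numeric
```

Without the regex guard, a single non-numeric value makes the whole UPDATE fail with a cast error.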
On Mon, 2012-03-12 at 17:39 -0700, Bosco Rama wrote:
> Bret Stern wrote:
> >
> > trying to update a varchar numeric string column
> > by converting it to int, adding a numeric value and insert it back
> > as a varchar
> >
> > Having trouble with cast
>
> I assume you are doing an update as oppos
On 03/12/12 5:01 PM, Carson Gross wrote:
We are also considering sharding the table and maybe the right thing
is to simply fix it when we do the sharding.
Postgres generally calls that partitioning. Sharding usually means
splitting data across multiple servers.
--
john r pierce
Hi all,
When using COPY FROM STDIN to stream thousands of rows (20k and more hourly),
what happens with indices? Are they updated only once after the operation, or
are they updated once per row? Note that I'm not replacing the table's data:
I'm appending to what's already there. I suspect bat
Hi,
On 13 March 2012 15:11, François Beausoleil wrote:
> When using COPY FROM STDIN to stream thousands of rows (20k and more hourly),
> what happens with indices? Are they updated only once after the operation, or
> are they updated once per row? Note that I'm not replacing the table's data:
2012/3/12 François Beausoleil :
> Hi all,
>
> When using COPY FROM STDIN to stream thousands of rows (20k and more hourly),
> what happens with indices? Are they updated only once after the operation, or
> are they updated once per row? Note that I'm not replacing the table's data:
> I'm appendi
Scott Marlowe wrote:
2012/3/12 François Beausoleil :
> Hi all,
>
> When using COPY FROM STDIN to stream thousands of rows (20k and more hourly),
> what happens with indices? Are they updated only once after the operation, or
> are they updated once per row? Note that I'm not replacing the table'
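For context on the question above: indexes are maintained per inserted row during COPY, so for very large appends it is sometimes faster to drop the indexes, load, and rebuild. A hedged sketch (table and index names invented):

```sql
BEGIN;
DROP INDEX IF EXISTS events_created_at_idx;
COPY events FROM STDIN;   -- bulk append; each row still hits remaining indexes
COMMIT;

-- Rebuild once after the load instead of maintaining it row by row.
CREATE INDEX events_created_at_idx ON events (created_at);
```

Whether this wins depends on table size and how much is being appended; for modest hourly batches, leaving the indexes in place is often fine.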