Thanks, but no, we do need the performance.
And we have admins (not users) enter the names and codes, but we can't make
it overly complicated for them to do that.
I thought you meant that they see to it that the names end up in the
database under the correct encoding (which is a logical thing to do...)
Thanks a…
On Wed, Oct 10, 2012 at 1:47 AM, Vineet Deodhar wrote:
> Hi!
> At present, I am using MySQL as the backend for my work.
> Because of the licensing implications, I am considering shifting from
> MySQL to pgsql.
> Typically, my apps are multi-user, web based or LAN based.
>
> 1) Read over the internet…
On 10/10/2012 10:47 AM, Vineet Deodhar wrote:
Hi!
At present, I am using MySQL as the backend for my work.
Because of the licensing implications, I am considering shifting
from MySQL to pgsql.
Typically, my apps are multi-user, web based or LAN based.
On Wed, Oct 10, 2012 at 2:38 PM, Chris Travers wrote:
>
> On Wed, Oct 10, 2012 at 1:47 AM, Vineet Deodhar wrote:
> PostgreSQL has an excellent optimizer, and the on-disk layout is
> completely different. This will dwarf any changes due to threads vs
> queries.
>
> However, be prepared to re…
On Wed, Oct 10, 2012 at 2:20 AM, Vineet Deodhar wrote:
> On Wed, Oct 10, 2012 at 2:38 PM, Chris Travers wrote:
>
>> On Wed, Oct 10, 2012 at 1:47 AM, Vineet Deodhar wrote:
>> PostgreSQL has an excellent optimizer, and the on-disk layout is
>> completely different. This will dwarf any changes…
Hi,
I need to store large files (from several MB to 1 GB) in a Postgres database.
The database has multiple schemas. It looks like Postgres has two options for
storing large objects: LOB and BYTEA. However, we seem to hit problems with
each of these options.
1. LOB. This works almost ideally; it can store…
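For reference, a minimal sketch of the two approaches being compared; the
table and column names here are made up for illustration:

-- bytea: the content lives in an ordinary column (simple, but the
-- whole value travels in one piece; no streaming API)
CREATE TABLE docs_bytea (id serial PRIMARY KEY, body bytea);

-- Large Object: the column stores only an OID referencing
-- pg_largeobject; the lo_* functions allow streamed reads and writes
CREATE TABLE docs_lob (id serial PRIMARY KEY, body oid);
INSERT INTO docs_lob (body) VALUES (lo_create(0));  -- 0 = assign a new OID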
Divakar Singh wrote:
> While making a connection to PGSQL using libpq, is there any option to
mention the schema name?
> Something similar exists in Java, however not sure about libpq.
Use the "options" connection parameter, like this:
psql "dbname=test user=laurenz port=5432 options='-c search_path=…'"
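Spelled out with a hypothetical schema name: setting search_path at
connect time is equivalent to running this right after connecting:

-- "myschema" is a made-up schema name
SET search_path TO myschema, public;
SHOW search_path;  -- confirm the active setting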
Hi all,
first of all, thanks for the great new "index only scan" feature in
9.2. We have managed to adapt our app (dhis2.org) to take advantage of
it, and it really speeds up several queries significantly.
We are now in the process of designing a new component for analytics,
and this feature got me thinking…
On 10/10/2012 05:16 AM, tigran2-postg...@riatest.com wrote:
I need to store large files (from several MB to 1 GB) in a Postgres
database. The database has multiple schemas. It looks like Postgres
has two options for storing large objects: LOB and BYTEA. However, we
seem to hit problems with each of these…
On Mon, Oct 8, 2012 at 4:59 PM, Seref Arikan
wrote:
> Thanks Merlin,
> I've tried arrays, but plpython does not support returning arrays of custom
> db types (which is what I'd need to do)
hm -- yeah. Can your custom types be broken down into plain SQL types
(that is, composite types)? Maybe…
Hi Merlin,
Thanks for the response. At the moment, the main function is creating two
temp tables that drop on commit, and python functions fill these. Not too
bad, but I'd like to push these temp tables to RAM, which is a bit tricky
since postgresql has no direct method of doing this.
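For reference, a minimal sketch of the kind of temp table described above
(column names are made up):

-- dropped automatically when the creating transaction commits
CREATE TEMP TABLE scratch (k int, v text) ON COMMIT DROP;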
tigran2-postg...@riatest.com wrote:
> Is there any other way to store large data in Postgres that allows
streaming and works correctly with
> multiple schemas per database?
Large Objects and bytea are the only ways.
If you want to pg_dump only certain large objects, that won't work
as far as I know…
On Wed, Oct 10, 2012 at 3:47 AM, Vineet Deodhar
wrote:
> Hi!
> At present, I am using MySQL as the backend for my work.
> Because of the licensing implications, I am considering shifting from MySQL
> to pgsql.
> Typically, my apps are multi-user, web based or LAN based.
>
> 1) Read over the internet…
On Wed, Oct 10, 2012 at 8:27 AM, Seref Arikan
wrote:
> Hi Merlin,
> Thanks for the response. At the moment, the main function is creating two
> temp tables that drop on commit, and python functions fill these. Not too
> bad, but I'd like to push these temp tables to RAM, which is a bit tricky
>…
Comments inline (sorry, did not cc the group in the other mail)
On Wed, Oct 10, 2012 at 2:55 PM, Merlin Moncure wrote:
> On Wed, Oct 10, 2012 at 8:27 AM, Seref Arikan
> wrote:
> > Hi Merlin,
> > Thanks for the response. At the moment, the main function is creating two
> > temp tables that drop…
Where do I find more information about the PG fork you mentioned?
Thanks Bret,
I'm concerned about what happens when, under high load, my functions fill
the ramdrive with the temporary tables I'm using. Telling postgres to use
RAM, with an option to fall back to disk, would be significantly better in
terms of uptime.
However, I was thinking about some mechanism…
Hi Laurenz,
Thanks for the quick reply.
I hope it will help. Will check and revert.
Best Regards,
dpsmails
--- On Wed, 10/10/12, Albe Laurenz wrote:
From: Albe Laurenz
Subject: RE: [GENERAL] libpq-how to add a schema to search path
To: "Divakar Singh *EXTERN*", pgsql-general@postgresql.org
Date…
On Saturday, January 29, 2011 9:08:22 PM UTC+8, orgilhp wrote:
> Hello
> I am using pymssql to connect to an MSSQL2008 database. But an error
> occurs:
> --
> >>> import pymssql
> >>> conn = pymssql.connect(host='orgilhp\mssql2008', use…
create a ramdrive
On Wed, 2012-10-10 at 14:27 +0100, Seref Arikan wrote:
> Hi Merlin,
> Thanks for the response. At the moment, the main function is creating
> two temp tables that drop on commit, and python functions fill
> these. Not too bad, but I'd like to push these temp tables to RAM,
> w…
On Tue, Oct 9, 2012 at 09:50:22PM -0600, Chris Ernst wrote:
> Hi all,
>
> I'm trying to test using pg_upgrade to go from 9.1.6 to 9.2.1 on Ubuntu
> server 10.04. But when I run pg_upgrade, it tells me I can only run it
> on 8.3 or later.
>
> Old:
> postgres=# SELECT version();
>
Instead of this:
create function some_trigger() returns trigger as $$
begin
  if TG_OP = 'DELETE' then
    insert into audits values (OLD.value);
  else
    insert into audits values (NEW.value);
  end if;
  return NULL;
end
$$ language plpgsql;
create trigger some_trigger after insert on products
  for each row execute procedure some_trigger();
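For completeness, hypothetical one-column tables (created first) that make
the snippet above runnable as written:

create table products (value text);
create table audits (value text);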
Hello,
I am trying to identify how best to handle the situation where a controlled
vocabulary needs to be searched using full text search.
I have a list of brand names containing what FTS deems blank characters,
which I need to search against, e.g. (+)people, D&G, 100% Design.
These pa…
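A minimal illustration of the problem (using the 'simple' configuration to
sidestep stemming): the default parser treats characters like '(', '+', '&'
and '%' as punctuation and drops them, so distinct brand names collapse
into ordinary tokens.

SELECT to_tsvector('simple', '(+)people');   -- 'people':1
SELECT to_tsvector('simple', 'D&G');         -- 'd':1 'g':2
SELECT to_tsvector('simple', '100% Design'); -- '100':1 'design':2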
On 10/10/2012 09:56 AM, Bruce Momjian wrote:
> Can you show me what is in the PG_VERSION file in the old cluster? It
> should be "9.1".
Hi Bruce,
Thank you for the reply. Indeed it is "9.1":
# cat /postgresql/9.1/main/PG_VERSION
9.1
And just for good measure:
cat /postgresql/9.2/main/PG_VERSION…
We are seeing a performance regression when moving to 9.2. There is a
complex query that is doing a self-join, but long story short, it
is choosing to use a multi-column index when it really ought not to.
I was not able to develop a standalone test case without resorting
to changing enable_seqscan…
Greg Sabino Mullane writes:
> We are seeing a performance regression when moving to 9.2. There is a
> complex query that is doing a self-join, but long story short, it
> is choosing to use a multi-column index when it really ought not to.
> I was not able to develop a standalone test case wit…
On Wed, Oct 10, 2012 at 10:35:06AM -0600, Chris Ernst wrote:
> On 10/10/2012 09:56 AM, Bruce Momjian wrote:
> > Can you show me what is in the PG_VERSION file in the old cluster? It
> > should be "9.1".
>
> Hi Bruce,
>
> Thank you for the reply. Indeed it is "9.1":
>
> # cat /postgresql/9.1/main/PG_VERSION…
On 10 Oct 2012, at 3:17, thomson...@sourcephotonics.com wrote:
> On Saturday, January 29, 2011 9:08:22 PM UTC+8, orgilhp wrote:
>> Hello
>> I am using pymssql to connect to an MSSQL2008 database. But an error
>> occurs:
>> --
> impor…
On Wed, Oct 10, 2012 at 01:31:29PM -0400, Tom Lane wrote:
> The above doesn't seem like a regression to me. You told it not to use
> a seqscan, and it didn't. (The reason it now considers the index is
> that an index-only scan is possible; before 9.2 there was no reason to
> consider an indexscan
On Wed, Oct 10, 2012 at 9:22 AM, Joe Van Dyk wrote:
> I wish I could do:
>
> create trigger some_trigger after insert on products
>   execute procedure do $$ begin
>     insert into audits values (CHANGED.value);
>   end $$ language plpgsql;
IF TG_OP = 'DELETE' THEN RENAME OLD TO myrow;
ELSE RENAME NEW TO myrow;…
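A sketch of an alternative that avoids RENAME altogether: copy OLD or NEW
into a single record variable (the audits table is the hypothetical one
from earlier in the thread):

create function audit_trigger() returns trigger as $$
declare
  myrow record;
begin
  if TG_OP = 'DELETE' then
    myrow := OLD;
  else
    myrow := NEW;
  end if;
  insert into audits values (myrow.value);
  return NULL;
end
$$ language plpgsql;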
On 11/10/12 01:03, Lars Helge Øverland wrote:
Hi all,
first of all, thanks for the great new "index only scan" feature in
9.2. We have managed to adapt our app (dhis2.org) to take advantage of
it, and it really speeds up several queries significantly.
We are now in the process of designing a new…
Hi,
On 10 October 2012 23:03, Lars Helge Øverland wrote:
> We are now in the process of designing a new component for analytics,
> and this feature got me thinking we could utilize postgres over other
> alternatives like column-oriented databases. Basically we will have a
> wide, denormalized table…
Gavin Flower writes:
> On 11/10/12 01:03, Lars Helge Øverland wrote:
>> My question is: Would it be feasible and/or possible to implement
>> index only scans in a way that could take advantage of several
>> single-column indexes? For example, a query spanning columns a, b, c
>> could take adva…
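For context, the usual workaround today is a single composite index
covering every queried column, which an index-only scan can then serve on
its own. A sketch with hypothetical table and column names:

-- one covering index over the columns the analytics queries touch
CREATE INDEX analytics_abc_idx ON analytics (a, b, c);
-- SELECT a, b, c FROM analytics WHERE a = 42;  -- index-only candidate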
Greg Sabino Mullane writes:
> Found a good demonstration of the problem. Here's explain analyze of a
> query on 9.2 with enable_indexonlyscan = off; This produces the exact same
> plan as 8.3. The tables in question have been analyzed. Changing
> random_page_cost has no effect. The main foobar
Hi,
On 10 October 2012 19:47, Vineet Deodhar wrote:
> 3) Can I simulate MySQL's TINYINT data-type (using maybe a custom data
> type or something else)?
What exactly do you mean? Do you care about storage requirements or
constraints? The smallest numeric type in postgres is smallint: the range
is -32768 to +32767…
Found a good demonstration of the problem. Here's EXPLAIN ANALYZE of a
query on 9.2 with enable_indexonlyscan = off; this produces the exact same
plan as 8.3. The tables in question have been analyzed. Changing
random_page_cost has no effect. The main foobar table has 17M rows.
I did multiple r…
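Presumably the comparison was produced with a session-level toggle along
these lines (the query itself is elided in the post):

SET enable_indexonlyscan = off;
-- EXPLAIN ANALYZE <query under test>;
RESET enable_indexonlyscan;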
I noticed something here that none of the other replies addressed.
Vineet Deodhar wrote:
3) Can I simulate MySQL's TINYINT data-type (using maybe a custom data
type or something else)?
The answer to your question depends on what you were using TINYINT for.
For example, many people use TI…
On 11/10/12 12:41, Tom Lane wrote:
Gavin Flower writes:
On 11/10/12 01:03, Lars Helge Øverland wrote:
My question is: Would it be feasible and/or possible to implement
index only scans in a way that could take advantage of several
single-column indexes? For example, a query spanning column…
Greg Sabino Mullane writes:
> -> Bitmap Heap Scan on foobar o (C=30389..835271 R=8980 W=8)
>    (AT=0.06..0.07 R=1 L=1)
>      Recheck Cond: (((id) >= (m.id)) AND ((id) <= ((m.id) || '.99')))
>      Filter: (((status) <> ALL ('{panda,penguin}'[]))
>        AND ((id)…
Steve A writes:
> In a nutshell, I'm curious about the order in which PG will lock rows during
> a SELECT FOR UPDATE. If two simultaneous SELECT FOR UPDATE statements select
> intersecting rows from the same table, can PG be relied upon to lock the rows
> in a consistent manner that always avoids…
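The idiom usually suggested for this question is to sort before locking, so
that overlapping statements acquire row locks in the same order and cannot
deadlock on each other. A sketch; the accounts table and id column are
hypothetical:

SELECT * FROM accounts
WHERE id IN (1, 2)
ORDER BY id
FOR UPDATE;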
>> Oct 5 15:00:25 db01 postgres[76648]: [5944-1] javafail@dbcluster 76648 0:
>> FATAL: pipe() failed: Too many open files in system
>
> This message must be coming from initSelfPipe(), and after poking around
> a bit I think the failure must be occurring while a new backend is
> attempting to d
On 10/10/2012 02:18 AM, Sim Zacks wrote:
2) I run MySQL from a USB stick.
There is no installation required (on WinXP); not tried on Ubuntu.
Is it the same for pgsql?
To use postgres on a USB stick, see
http://www.postgresonline.com/journal/archives/172-Starting-PostgreSQL-in-windows-without…
On Wed, Oct 10, 2012 at 09:24:32PM -0400, Tom Lane wrote:
> I'd not been thinking of that change as something we'd risk
> back-patching, but maybe we should consider putting it into 9.2. It
> seems like the index-only scan support has put a new level of premium on
> the quality of the planner's ro
On 10/10/2012 04:47 PM, Vineet Deodhar wrote:
2) I run MySQL from a USB stick.
There is no installation required (on WinXP); not tried on Ubuntu.
Is it the same for pgsql?
On Windows PostgreSQL is usually installed as a system service with its
own user account (pre-9.2) or running in the ne…
On 10/11/2012 12:22 AM, Joe Van Dyk wrote:
3. Triggers can access a special CHANGED value that's either NEW for
inserts or updates, or OLD for deletes.
I'm not a big fan of the prior comments about small syntax changes, but
this would simplify quite a bit of code. I'd *really* like a way to
r…
> Large Objects and bytea are the only ways.
>
> If you want to pg_dump only certain large objects, that won't work as far
> as I know (maybe using permissions and a non-superuser can help).
>
> You absolutely need to pg_dump parts of the database regularly?
>
> Yours,
> Laurenz Albe
It is n…
> I believe the general consensus around here is to not do that, if you can
> avoid it. File systems are much better equipped to handle files of that
> magnitude, especially when it comes to retrieving them, scanning through
> their contents, or really, any access pattern aside from simple storage.
>
On 10/11/2012 01:35 PM, tigran2-postg...@riatest.com wrote:
Using files stored outside the database creates all sorts of problems.
For starters, you lose ACID guarantees. I would prefer to keep them in
the database. We did a lot of experiments with Large Objects and they
really worked fine (stored hun…
On Thu, Oct 11, 2012 at 5:26 AM, Ondrej Ivanič wrote:
> Hi,
>
> On 10 October 2012 19:47, Vineet Deodhar wrote:
> > 3) Can I simulate MySQL's TINYINT data-type (using maybe a custom data
> > type or something else)?
>
> What exactly do you mean? Do you care about storage requirements or
> constr…
> Yeah, a pg_dump mode that dumped everything but large objects would be
> nice.
There is the -b option for pg_dump, which controls whether large objects
are dumped or not. The problem is that with -b it dumps all large objects,
regardless of which schema you requested with -n.
Othe…
On 10/10/2012 10:47, Vineet Deodhar wrote:
Hi!
3) Can I simulate MySQL's TINYINT data-type (using maybe a custom data
type or something else)?
Another way, and good practice too, is to use SQL DOMAINs, which have been
part of ISO SQL since 1992 and which MySQL doesn't have...
CREATE DOMAIN TINYINT…
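A sketch of how that domain might look, assuming MySQL's signed TINYINT
range of -128..127 (stored as a 2-byte smallint):

CREATE DOMAIN tinyint AS smallint
  CHECK (VALUE BETWEEN -128 AND 127);

-- usage: the CHECK fires on out-of-range values
CREATE TABLE flags (f tinyint);
INSERT INTO flags VALUES (127);  -- ok
INSERT INTO flags VALUES (200);  -- rejected by the domain constraint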