In fact, disabling TOAST compression will probably improve the performance
(the indirection will still take place). A float array is not usually very
compressible anyway.
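That tuning can be tried per column with ALTER TABLE ... SET STORAGE: EXTERNAL keeps large values out of line in the TOAST table but skips the compression attempt. A sketch (the table and column names are placeholders for the poster's schema):

```sql
-- EXTERNAL: store large array values out-of-line, but never compress them
ALTER TABLE features ALTER COLUMN vec SET STORAGE EXTERNAL;

-- Note: this affects newly stored tuples only; existing rows keep their
-- old representation until they are rewritten (e.g. by VACUUM FULL).
```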
On May 3, 2016 10:37 AM, "John R Pierce" wrote:
> On 5/3/2016 1:21 AM, Marcus Engene wrote:
>
> For each array I've added, and
On 5/3/2016 1:21 AM, Marcus Engene wrote:
For each array I've added, and populated, any dealings with the table
has become way way slower. I can only assume this is because the array
data is inline in the datablock on disk that stores the row.
any field on a table that's more than a few dozen
Hi,
I have some whopper tables for machine learning. One table has a handful
of 200-500 double precision arrays (representing feature vectors). It's
a 9.5 on an SSD (over USB3). Each table has 5-15M rows.
For each array I've added, and populated, any dealings with the table
has become
Tom, I was unable to reproduce the issue with standard libpq. Moreover, I
found why it was returned as Text. It was actually a bug in passing
resultFormats in the Bind message. Sorry for the false alert, my fault.
Thank you for the help!
Konstantin
On Fri, Mar 4, 2016 at 10:52 PM, Tom Lane wrote
Konstantin Izmailov writes:
> Whole point of my question was why PG does not return
> binary formatted field when requested (this is a feature supported in the
> protocol).
You haven't presented a test case demonstrating that that happens in
unmodified community source code. If it does happen, I
Tom, that was only a modification for the client-side libpq. The PG is
standard, we are using both 9.4 and 9.5 that were officially released.
I guess there is no standard test for the scenario. But if such a test were
created (to check the format of the returned arrays) it would fail.
Maybe I'm w
Konstantin Izmailov writes:
> Oops, I forgot to mention that we slightly modified libpq to request
> resulting fields formats (since Postgres protocol v3 supports this).
Um. I'm not that excited about supporting bugs in modified variants of
PG. If you can present a test case that fails in stock
Oops, I forgot to mention that we slightly modified libpq to request
resulting fields formats (since Postgres protocol v3 supports this). See
our additions (marked in bold in the original message) are the two extra
parameters:
PQexec(PGconn *conn, const char *query,
       int resultFormatCount, const int *resultFormats)
{
if (!PQexecStart(conn))
re
Konstantin Izmailov writes:
> I'm using libpq to read array values, and I noticed that sometimes the
> values are returned in Binary and sometimes - in Text format.
> 1. Returned in Binary format:
>int formats[1] = { 1 }; // request binary format
>res = PQexec(conn, "SELECT rgField FROM a
I'm using libpq to read array values, and I noticed that sometimes the
values are returned in Binary and sometimes - in Text format.
1. Returned in Binary format:
int formats[1] = { 1 }; // request binary format
res = PQexec(conn, "SELECT rgField FROM aTable", 1, formats);
assert(PQfforma
On 04/30/2014 12:52 PM, Torsten Förtsch wrote:
> Hi,
>
> we have the ROW type and we have arrays. We also can create arrays
> of rows like:
>
> select array_agg(r) from (values (1::int, 'today'::timestamp,
> 'a'::text), (2, 'yesterday', 'b')) r(a,b,c
Torsten Förtsch wrote
> On 30/04/14 20:19, David G Johnston wrote:
>> ISTM that you have to "CREATE TYPE ..." as appropriate then
>>
>> ... tb ( col_alias type_created_above[] )
>>
>> There is only so much you can do with anonymous types (which is what the
>> ROW
>> construct creates; ROW is not
On 30/04/14 20:19, David G Johnston wrote:
> ISTM that you have to "CREATE TYPE ..." as appropriate then
>
> ... tb ( col_alias type_created_above[] )
>
> There is only so much you can do with anonymous types (which is what the ROW
> construct creates; ROW is not a type but an expression anchor
Torsten Förtsch wrote
> Hi,
>
> we have the ROW type and we have arrays. We also can create arrays of
> rows like:
>
> select array_agg(r)
> from (values (1::int, 'today'::timestamp, 'a'::text),
>(2, 'yesterday', 'b')) r(a,b,c);
> array_agg
> ---
Hi,
we have the ROW type and we have arrays. We also can create arrays of
rows like:
select array_agg(r)
from (values (1::int, 'today'::timestamp, 'a'::text),
(2, 'yesterday', 'b')) r(a,b,c);
array_agg
-
On Wed, Sep 14, 2011 at 21:05, Fabrízio de Royes Mello
wrote:
> postgres@bdteste=# SELECT array_upper(ARRAY['foo', 'bar'], 1);
On Wed, Sep 14, 2011 at 21:09, Merlin Moncure wrote:
> select count(*) from unnest(_array_);
On Wed, Sep 14, 2011 at 21:15, Steve Crawford
wrote:
> Look at array_dims,
On 09/14/2011 11:01 AM, Bob Pawley wrote:
Hi
Is there a method of counting the number of elements in an array??
Bob
Look at array_dims, array_upper and array_lower.
But note that PostgreSQL allows multi-dimensional arrays. The array_dims
function gives you all the dimensions. If you have a one
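A quick sketch of the functions mentioned, on a one-dimensional array:

```sql
SELECT array_dims(ARRAY['a','b','c']);      -- [1:3]
SELECT array_lower(ARRAY['a','b','c'], 1);  -- 1
SELECT array_upper(ARRAY['a','b','c'], 1);  -- 3
```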
On Wed, Sep 14, 2011 at 1:05 PM, Fabrízio de Royes Mello
wrote:
>
> 2011/9/14 Bob Pawley
>>
>> Hi
>>
>> Is there a method of counting the number of elements in an array??
>>
>
> Yes...
> Use function array_upper [1].
> See an example:
> postgres@bdteste=# SELECT array_upper(ARRAY['foo', 'bar'], 1
2011/9/14 Bob Pawley
> Hi
>
> Is there a method of counting the number of elements in an array??
>
>
Yes...
Use function array_upper [1].
See an example:
postgres@bdteste=# SELECT array_upper(ARRAY['foo', 'bar'], 1);
 array_upper
-------------
           2
(1 row)
[1] http://www.postgresq
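One caveat worth noting: array_upper returns the upper bound, not the element count, and the two differ when an array has a non-default lower bound. Counting through unnest (or cardinality() on later releases) is safer:

```sql
SELECT count(*) FROM unnest(ARRAY['foo','bar']);      -- 2

-- an array literal with explicit bounds [5:6] has 2 elements,
-- but array_upper reports the upper bound, 6
SELECT array_upper('[5:6]={a,b}'::text[], 1);         -- 6
SELECT count(*) FROM unnest('[5:6]={a,b}'::text[]);   -- 2
```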
Hi
Is there a method of counting the number of elements in an array??
Bob
Merlin Moncure Thursday 07 April 2011 15:53:00
> On Thu, Apr 7, 2011 at 4:39 AM, rsmogura wrote:
> > Hello,
> >
> > May I ask if PostgreSQL supports arrays of arrays directly or indirectly,
> > or if such support is planned? I'm interested in pseudo constructs
> > like: 1. Directly - (integer
On Thu, Apr 7, 2011 at 4:39 AM, rsmogura wrote:
> Hello,
>
> May I ask if PostgreSQL supports arrays of arrays directly or indirectly, or
> if such support is planned? I'm interested in pseudo constructs like:
> 1. Directly - (integer[4])[5] - this is equivalent to multidimensional
> array, but
Hello,
May I ask if PostgreSQL supports arrays of arrays directly or
indirectly, or if such support is planned? I'm interested in pseudo
constructs like:
1. Directly - (integer[4])[5] - this is equivalent to multidimensional
array, but may be differently represented on protocol serializatio
Thanks all normally I would have gone with a linked table but since support for
arrays has improved in pg lately I thought I would give them a go again but I
guess they are still not ready for what I want.
I did think of another solution overnight though that still uses arrays but
also a subtab
On Sat, Aug 08, 2009 at 05:04:29PM +0930, David wrote:
> Done a bit of hunting and can't seem to find an answer as to if this
> sort of thing is possible:
>
> SELECT * FROM mail WHERE recipients ILIKE 'david%';
>
> Where recipients is a VARCHAR(128)[]
It's a bit of a fiddle:
CREATE FUNCTION f
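On 8.4 and later the fiddle can be avoided entirely by unnesting the array in an EXISTS subquery (a sketch against the poster's table):

```sql
-- rows where any element of recipients matches the pattern
SELECT *
FROM mail
WHERE EXISTS (
    SELECT 1
    FROM unnest(recipients) AS r(addr)
    WHERE r.addr ILIKE 'david%'
);
```

Note this cannot use an ordinary btree index on the array column, which is the index concern raised in the original question.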
On 2009-08-08, David wrote:
> Done a bit of hunting and can't seem to find an answer as to if this sort of
> thing is possible:
>
> SELECT * FROM mail WHERE recipients ILIKE 'david%';
>
> Where recipients is a VARCHAR(128)[]
>
> The above doesn't work but that's the sort of thing I want to do...
>
David wrote:
> Done a bit of hunting and can't seem to find an answer as to if this sort of
> thing is possible:
>
> SELECT * FROM mail WHERE recipients ILIKE 'david%';
>
> Where recipients is a VARCHAR(128)[]
>
> The above doesn't work but that's the sort of thing I want to do...
> If this is
Done a bit of hunting and can't seem to find an answer as to if this sort of
thing is possible:
SELECT * FROM mail WHERE recipients ILIKE 'david%';
Where recipients is a VARCHAR(128)[]
The above doesn't work but that's the sort of thing I want to do...
If this is possible and can use an index as
On Mon, Feb 02, 2009 at 09:48:37AM -0800, Scara Maccai wrote:
> > I need to store a lot of int8 columns (2000-2500) in a table.
> > I was thinking about using int8[]
An array of ints sounds like the way to go here as you wouldn't be able
to have that many columns. TOAST is one non-obvious impleme
Anyone?
- Forwarded message -
> From: Scara Maccai
> To: pgsql-general
> Sent: Friday, 30 January 2009, 13:59:09
> Subject: [GENERAL] arrays and block size
>
> Hi,
>
> I need to store a lot of int8 columns (2000-2500) in a table.
>
> I was thinking a
Hi,
I need to store a lot of int8 columns (2000-2500) in a table.
I was thinking about using int8[], and I would like to know:
1) is there a max size for arrays? I guess I could have 1 GB "worth" of values,
but I would like a confirmation
2) there won't be any updates, only inserts and selects;
I respond myself:
Enrico Sirola wrote:
[...]
seems to work). The problem for the code above is that it doesn't work
for vectors longer than 1000 elements or so (try it with 2000 and it
doesn't work). I guess I should manage the "toasting" machinery in some
ways - any suggestion is appre
Hi Webb,
Webb Sprague wrote:
I'm quite proud, this is my first C extension function ;-)
I'd gladly post the code if it's ok for the list users. It's more or
less 100 lines of code. This approach seems promising...
I would definitely like to see it.
here it goes:
---
> I'm quite proud, this is my first C extension function ;-)
> I'd gladly post the code if it's ok for the list users. It's more or
> less 100 lines of code. This approach seems promising...
I would definitely like to see it.
> By the way, Webb: I took a look at GSL and it seems to me that, from
Hi Webb, Joe, Martijn
Webb Sprague wrote:
On Feb 1, 2008 2:31 AM, Enrico Sirola <[EMAIL PROTECTED]> wrote:
Hello,
I'd like to perform linear algebra operations on float4/8 arrays
Having avoided a bunch of real work wondering about linear algebra and
PG, did you consider the Gnu Scientifi
Ted Byers wrote:
> --- Webb Sprague <[EMAIL PROTECTED]> wrote:
>>> ...linear algebra ...
>> ... matrices and vectors .
> ...Especially if some GIST or similar index
>> could efficiently search
> for vectors "close" to other vectors...
>
> I see a potential problem here, in terms of
On Feb 1, 2008 2:31 AM, Enrico Sirola <[EMAIL PROTECTED]> wrote:
> Hello,
> I'd like to perform linear algebra operations on float4/8 arrays
Having avoided a bunch of real work wondering about linear algebra and
PG, did you consider the GNU Scientific Library? We would still need
to hook everyth
--- Webb Sprague <[EMAIL PROTECTED]> wrote:
> > ...linear algebra ...
> > >>> ... matrices and vectors .
> > >> ...Especially if some GIST or similar index
> could efficiently search
> > >> for vectors "close" to other vectors...
> > >
I see a potential problem here, in terms of how one
define
> ...linear algebra ...
> >>> ... matrices and vectors .
> >> ...Especially if some GIST or similar index could efficiently search
> >> for vectors "close" to other vectors...
> >
> > Hmm. If I get some more interest on this list (I need just one LAPACK
> > / BLAS hacker...), I will apply for
Webb Sprague wrote:
> On Feb 1, 2008 12:19 PM, Ron Mayer <[EMAIL PROTECTED]> wrote:
>> Webb Sprague wrote:
>>> On Feb 1, 2008 2:31 AM, Enrico Sirola <[EMAIL PROTECTED]> wrote:
...linear algebra ...
>>> ... matrices and vectors .
>> ...Especially if some GIST or similar index could efficiently
Hi Joe,
I don't know if the speed will meet your needs, but you might test
to see if PL/R will work for you:
http://www.joeconway.com/plr/
You could use pg.spi.exec() from within the R procedure to grab the
arrays, do all of your processing inside R (which uses whatever BLAS
you've set
Enrico Sirola wrote:
typically, arrays contain 1000 elements, and an operation is either
multiply it by a scalar or multiply it element-by-element with another
array. The time to rescale 1000 arrays, multiply it for another array
and at the end sum all the 1000 resulting arrays should be enough
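For comparison, the element-by-element product can be written in plain SQL with unnest; on 9.4 or later, WITH ORDINALITY keeps the positions aligned. This is only a sketch, and far slower than a C function working on the raw float8 array:

```sql
-- element-wise product of two float8 arrays
SELECT array_agg(x * y ORDER BY i) AS product
FROM unnest(ARRAY[1.0, 2.0, 3.0]::float8[]) WITH ORDINALITY AS a(x, i)
JOIN unnest(ARRAY[10.0, 20.0, 30.0]::float8[]) WITH ORDINALITY AS b(y, i)
     USING (i);
-- {10,40,90}
```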
(I had meant also to add that a linear algebra package would help
Postgres to be the mediator for real-time data, from things like
temperature sensors, etc., and their relationship to not-so-scientific
data, say in a manufacturing environment).
On Feb 1, 2008 12:19 PM, Ron Mayer <[EMAIL PROTECTED]
Webb Sprague wrote:
> On Feb 1, 2008 2:31 AM, Enrico Sirola <[EMAIL PROTECTED]> wrote:
>> I'd like to perform linear algebra operations on float4/8 arrays...
>
> If there were a coherently designed, simple, and fast LAPACK/ MATLAB
> style library and set of datatypes for matrices and vectors in
>
On Feb 1, 2008 2:31 AM, Enrico Sirola <[EMAIL PROTECTED]> wrote:
> Hello,
> I'd like to perform linear algebra operations on float4/8 arrays.
> These tasks are typically carried out using ad hoc optimized libraries
> (e.g. BLAS).
If there were a coherently designed, simple, and fast LAPACK/ MATLAB
Hi Colin,
On 01 Feb 2008, at 15:22, Colin Wetherbee wrote:
I'm not sure about the internals of PostgreSQL (eg. the Datum
object(?) you mention), but if you're just scaling vectors,
consecutive memory addresses shouldn't be absolutely necessary. Add
and multiply operations
On Fri, Feb 01, 2008 at 11:31:37AM +0100, Enrico Sirola wrote:
> Hello,
> I'd like to perform linear algebra operations on float4/8 arrays.
> These tasks are typically carried out using ad hoc optimized libraries
> (e.g. BLAS). In order to do this, I studied a bit how arrays are
> stored inter
Enrico Sirola wrote:
Hello,
I'd like to perform linear algebra operations on float4/8 arrays. These
tasks are typically carried out using ad hoc optimized libraries (e.g.
BLAS). In order to do this, I studied a bit how arrays are stored
internally by the DB: from what I understood, arrays are b
Hello,
I'd like to perform linear algebra operations on float4/8 arrays.
These tasks are typically carried out using ad hoc optimized libraries
(e.g. BLAS). In order to do this, I studied a bit how arrays are
stored internally by the DB: from what I understood, arrays are
basically a vector
Hello,
Thanks everyone for your input. Then, it sounds like I won't use an
array of foreign keys. I was just curious about the array
functionality.
However, I didn't think about setting up a view above the intermediary
table with an array_accum, now I have never heard of array_accum. I
did some r
On Fri, 07 Sep 2007 23:47:40 -
Max <[EMAIL PROTECTED]> wrote:
> Hello,
>
> And pardon me if I posted this question to the wrong list, it seems
> this list is the most appropriate.
>
> I am trying to create a table with an array containing foreign keys.
> I've searched through the documentati
Max wrote:
> I am trying to create a table with an array containing foreign keys.
> I've searched through the documentation and couldn't find a way to do
> so.
>
> Is this something that one can do?
>
> Basically, I have two tables:
>
> create table user (
> user_id serial,
> login varchar(5
On 09/07/07 18:47, Max wrote:
> Hello,
>
> And pardon me if I posted this question to the wrong list, it seems
> this list is the most appropriate.
>
> I am trying to create a table with an array containing foreign keys.
> I've searched through the d
On Sep 7, 2007, at 18:47 , Max wrote:
I am trying to create a table with an array containing foreign keys.
I've searched through the documentation and couldn't find a way to do
so.
It's because this is not how relational databases are designed to
work. From the server's point of view, an ar
On Fri, Sep 07, 2007 at 11:47:40PM -, Max wrote:
> Hello,
>
> And pardon me if I posted this question to the wrong list, it seems
> this list is the most appropriate.
>
> I am trying to create a table with an array containing foreign keys.
> I've searched through the documentation and couldn'
Hello,
And pardon me if I posted this question to the wrong list, it seems
this list is the most appropriate.
I am trying to create a table with an array containing foreign keys.
I've searched through the documentation and couldn't find a way to do
so.
Is this something that one can do?
Basical
Hello
you can test it. PostgreSQL 8.3 supports it.
postgres=# CREATE TYPE at AS (a integer, b integer);
CREATE TYPE
postgres=# CREATE TABLE foo(a at[]);
CREATE TABLE
postgres=# INSERT INTO foo VALUES(ARRAY[(10,20)::at]);
INSERT 0 1
postgres=# INSERT INTO foo VALUES(ARRAY[(10,20)::at, (20,30)::at
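Reading the elements back works with subscripting plus field selection; the parentheses around the subscripted element are required:

```sql
SELECT (a[1]).a, (a[1]).b FROM foo;  -- fields of the first element
SELECT (a[2]).* FROM foo;            -- expand the second element
```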
Hi all;
I was wondering how one would define an array of complex data types or
records. Any ideas or is this simply not supported?
Best Wishes,
Chris Travers
---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your de
I have a table
select * from history;
 idx |       tokens
-----+---------------------
   2 | {10633,10634,10636}
And the values in the tokens field are taken from sequence values
from another table.
Can I use this kind of storage to identify all the tokens in the
first table that make
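Assuming the ids live in a table fed by that sequence (the table and column names below are guesses), = ANY over the array does the lookup:

```sql
-- all token rows referenced by one history row's array
SELECT t.*
FROM history h
JOIN token t ON t.id = ANY (h.tokens)
WHERE h.idx = 2;
```

A GIN index on the array column helps the reverse question (which history rows contain a given token); the join above walks each array directly.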
On Fri, 13 Apr 2007 12:15:30 +0200, Alexander Presber
<[EMAIL PROTECTED]> wrote:
Listmail wrote:
Then, other languages will make you feel the pain of having to
quote all your arguments YOURSELF and provide all results as string.
The most famous offender is PHP (this causes countless
Listmail wrote:
Then, other languages will make you feel the pain of having to
quote all your arguments YOURSELF and provide all results as string.
The most famous offender is PHP (this causes countless security
holes).
I partially did this for PHP. It's a lifesaver. No more
addsl
On Fri, 13 Apr 2007 10:30:29 +0200, Tino Wildenhain <[EMAIL PROTECTED]>
wrote:
Joshua D. Drake wrote:
Rick Schumeyer wrote:
Has anyone here used a postgres array with Rails? If so, how?
split()?
Err... there is no type mapping?
You know, some languages spoil us developers, so tha
Joshua D. Drake wrote:
Rick Schumeyer wrote:
Has anyone here used a postgres array with Rails? If so, how?
split()?
Err... there is no type mapping?
Rick Schumeyer wrote:
Has anyone here used a postgres array with Rails? If so, how?
split()?
Has anyone here used a postgres array with Rails? If so, how?
William Garrison wrote:
I've never worked with a database with arrays, so I'm curious what the
advantages and disadvantages of using it are. For example:
I am prejudiced against arrays because they violate the relational model. I do
not see an advantage over a related table.
Arrays seem to
William Garrison wrote:
> I've never worked with a database with arrays, so I'm curious...
>
> + Efficiency: To return the set_ids for an Item, I could return an array
> back to my C# code instead of a bunch of rows with integers. That's
> probably faster, right?
You should look into the contri
I've never worked with a database with arrays, so I'm curious what the
advantages and disadvantages of using it are. For example:
-- METHOD 1: The "usual" way --
Items table:
item_id int,
item_data1 ...,
item_data2 ...
Primary Key = item_id
ItemSet table: <-- Join table
item_id int,
[EMAIL PROTECTED] writes:
> Is it possible to have an array of user-defined types?
Scalar types, yes; composite types, no. Unfortunately the case you're
talking about is a composite type.
regards, tom lane
Hello,
I'm a bit new to PostgreSQL, and I have a question about user-defined
types.
Is it possible to have an array of user-defined types?
Suppose the type looks like this:
CREATE TYPE part AS
(id int2,
count int2);
Now I want to have a column in a table that is a list of parts:
alter tab
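Whether this works depends on the server version: releases before 8.3 reject arrays of composite types, while 8.3 and later accept them directly. A sketch with a hypothetical table name:

```sql
CREATE TYPE part AS (id int2, count int2);

-- a column holding a list of parts (PostgreSQL 8.3+)
CREATE TABLE assembly (assembly_id int, parts part[]);
INSERT INTO assembly VALUES (1, ARRAY[(7,2)::part, (9,1)::part]);
```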
Colin DuPlantis <[EMAIL PROTECTED]> writes:
> I am having trouble inserting a value into the array column that
> contains a single '\' (no quotes) character. The following examples are
> my attempts to produce a value of 'x\y' (no quotes) in both the regular
> text and the text array columns in
I'm running:
colin=# select version();
version
---
PostgreSQL 8.1.1 on x86_64-unknown-linux-gnu, compiled by GCC gcc
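With 8.1's default (non-standard-conforming) strings the backslash is consumed twice in an array literal: once by the string-literal parser and once by the array parser. The ARRAY constructor removes the second layer. A sketch with a hypothetical table:

```sql
-- ARRAY constructor: only string-literal escaping applies,
-- so \\ stores a single backslash (x\y)
INSERT INTO t (txt, arr) VALUES ('x\\y', ARRAY['x\\y']);

-- array literal: the backslash must be doubled for each layer
INSERT INTO t (txt, arr) VALUES ('x\\y', '{"x\\\\y"}');
```

With standard_conforming_strings = on (the default since 9.1), the string layer no longer eats backslashes, so 'x\y' and '{"x\\y"}' suffice.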
Bob Pawley wrote:
> Our application will be dispersed amongst many users.
>
> I want to keep the datbase as generic as possible.
>
you can "disperse" custom datatypes as well.
If this isn't an option, I'd go for a true relational
approach with a units table and your main table
(value,min,max,un
I can't imagine
test=# create type stat1 as (i1 int, i2 int, i3 int, t1 text);
CRE
On Jan 27, 2006, at 4:41 , Eric E wrote:
I second that, and I'd love to have someone clarify the appropriate
time to use arrays vs. more columns or a referenced table. I've
always found that confusing.
I would only use arrays if the natural data type of the data is an
array, such as s
Bob Pawley wrote:
The order for the array is Min, Norm, Max, Unit.
I'l
Bob Pawley wrote:
> The order for the array is Min, Norm, Max, Unit.
>
> I'll probably reorder it with the unit first as every value has a unit.
>
I'd rather create/use a custom datatype for your needs.
This array stuff seems overly hackish for me.
Regards
Tino
Joshua D. Drake wrote:
Bob Pawley wrote:
ERROR: m
Joshua D. Drake wrote:
> Bob Pawley wrote:
>
>> ERROR: malformed array literal: "{100, 250, 500, DegF)"
>
>
> Well you have a typo:
>
> "{100, 250, 500, DegF)" is wrong...
>
> "{100, 250, 500, DegF}" is correct...
>
I'd say both are wrong ;)
'{100,250,500,DegF}' could work. But I'm not sur
I missed that - thanks for the help.
Bob
Bob Pawley wrote:
ERROR: malformed array literal: "{100, 250, 500, DegF)"
Well you have a typo:
"{100, 250, 500, DegF)" is wrong...
"{100, 250
Bob Pawley wrote:
ERROR: malformed array literal: "{100, 250, 500, DegF)"
Well you have a typo:
"{100, 250, 500, DegF)" is wrong...
"{100, 250, 500, DegF}" is correct...
Sincerely,
Joshua D. Drake
--
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consu
Bob Pawley <[EMAIL PROTECTED]> writes:
> ERROR: malformed array literal: "{100, 250, 500, DegF)"
You wrote a right paren, not a right brace ...
> I want to do single dimension arrays.
> How did I turn it into multidmensional?
The multiple levels of braces create a multidimensional array.
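The distinction is easy to see with array_ndims:

```sql
SELECT array_ndims('{1,2,3}'::int[]);        -- 1
SELECT array_ndims('{{1,2},{3,4}}'::int[]);  -- 2
```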
On Thu, 26 Jan 2006, Bob Pawley wrote:
> Because it gives me an error otherwise.
What error?
insert into specifications values ('1', '{25, 50, 100, gpm}',
'{100, 250, 500, DegF}',
'{10, 40, 100, psi}', '{60, 120, 150, psi}' );
seems to insert fine for me given the table definition you gave.
>
I second that, and I'd love to have someone clarify the appropriate time
to use arrays vs. more columns or a referenced table. I've always
found that confusing.
Thanks,
Eric
Karsten Hilbert wrote:
And why would that be undesirable ?
On Thu, Jan 26, 2006 at 10:15:22AM -0800, Bob Pawley wr
Because with arrays I can include other information such as pointers to
00,1,1} |
{{meeting,lunch},{training,presentation}}
Carol | {2,25000,25000,25000} |
{{breakfast,consulting},{meeting,lunch}}
(2 rows)
Bob Pawley <[EMAIL PROTECTED]> writes:
> insert into specifications values ('1', '{25, 50, 100, gpm}', '{{100, 250,
> 500, DegF}}',
> '{{{10, 40, 100, psi}}}', '60, 120, 150, psi' );
Why are you putting in all those extra braces?
regards, tom lane
On Thu, Jan 26, 2006 at 10:15:22AM -0800, Bob Pawley wrote:
> I would like to make a table of 20 plus columns the
> majority of columns being arrays.
>
> The following test works. The array will hold up to five
> characteristics of each parameter including the unit of
> measurement used. Using tra
I would like to make a table of 20 plus columns the
majority of columns being arrays.
The following test works. The array will hold up to
five characteristics of each parameter including the unit of measurement used.
Using traditional methods I would need six columns to accomplish the same
No, we don't get deadlock errors, but when running a vacuum and another
process writing into the database, progress will stop at some point
and nothing happens until one process is being killed.
I think we used to vacuum every two nights and did a full vacuum once a
week.
Regards, Marc Phili
Sorry for the duplicate post! My first post was stalled and my mail
server down for a day or so. I will reply to your original posts.
Regards, Marc Philipp
On Fri, Jan 06, 2006 at 09:43:53AM +0100, [EMAIL PROTECTED] wrote:
> What we have been observing in the last few weeks is, that the
> overall database size is increasing rapidly due to this table and
> vacuum processes seem to deadlock with other processes querying data
> from this table.
Are you
[EMAIL PROTECTED] wrote:
Would it be more efficient to not use an array for this purpose but
split the table in two parts?
Any help is appreciated!
This is a duplicate of your post from the other day, to which I
responded, as did Tom Lane:
http://archives.postgresql.org/pgsql-general/2006-0
A few performance issues using PostgreSQL's arrays led us to the
question how postgres actually stores variable length arrays. First,
let me explain our situation.
We have a rather large table containing a simple integer primary key
and a couple more columns of fixed size. However, there is a date
# [EMAIL PROTECTED] / 2005-09-11 12:11:39 -0400:
> Roman Neuhauser <[EMAIL PROTECTED]> writes:
>
> > I'm looking for an equivalent of my_composite_type[] for use as a
> > parameter of a pl/pgsql function. What do people use to dodge this
> > limitation?
> >
> > Background: I have a few plpgsql fu
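One dodge from that era (before arrays of composite types were supported) is to pass the fields as parallel scalar arrays. A sketch with hypothetical names:

```sql
-- stand-in for f(my_composite_type[]): one scalar array per field
CREATE FUNCTION f(ids int[], labels text[]) RETURNS void AS $$
DECLARE
    i int;
BEGIN
    FOR i IN array_lower(ids, 1) .. array_upper(ids, 1) LOOP
        -- ids[i] and labels[i] together represent one composite value
        RAISE NOTICE 'id = %, label = %', ids[i], labels[i];
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```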