I have to agree with both of you...
But unfortunately there are still some loose ends... See bug 3596...
http://archives.postgresql.org/pgsql-bugs/2007-09/msg9.php
But leaving bugs aside, I will have to say "Bravo!" to the development team!
Ciprian Craciun.
P.S.: I forgot
>
> http://archives.postgresql.org/pgsql-hackers/2006-10/msg00665.php
> # Allow SQL-language functions to reference parameters by parameter name
>
> Currently SQL-language functions can only refer to dollar parameters, e.g. $1
>
> Regards
> Pavel Stehule
>
> 20
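
For context, a minimal sketch of the dollar-parameter style the item quoted above is about. The function below is made up for illustration; the named-parameter form in the trailing comment is the feature being requested, which SQL-language functions did not support at the time:

    -- Dollar-parameter style: a SQL-language function can only refer to its
    -- arguments positionally, as $1, $2, ...
    CREATE FUNCTION add_one(integer) RETURNS integer AS $$
        SELECT $1 + 1;
    $$ LANGUAGE sql;

    -- The requested feature: refer to the argument by its declared name.
    -- CREATE FUNCTION add_one(x integer) RETURNS integer AS $$
    --     SELECT x + 1;
    -- $$ LANGUAGE sql;
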
---------- Forwarded message ----------
From: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
Date: Sep 3, 2007 8:13 PM
Subject: Stalled post to pgsql-bugs
To: Ciprian Dorin Craciun <[EMAIL PROTECTED]>
Your message to pgsql-bugs has been delayed, and requires the approval
of the moderators, for the following reason
On Fri, Jun 13, 2008 at 11:11 PM, Alvaro Herrera
<[EMAIL PROTECTED]> wrote:
> Tom Lane wrote:
>> "James B. Byrne" <[EMAIL PROTECTED]> writes:
>
>> > Git works by compressing deltas of the contents of successive versions of
>> > file
>> > systems under repository control. It treats binary objects
) the insert rate is
good at the start (for the first 2 million readings), but then drops to
about 200 inserts / s;
So could someone point out where I'm wrong, or what I can do to
optimize Postgres for this particular task?
Thanks for your help,
Ciprian Dorin Craciun.
P.S.: I'
On Fri, Nov 21, 2008 at 2:55 PM, Gerhard Heift
<[EMAIL PROTECTED]> wrote:
> On Fri, Nov 21, 2008 at 02:50:45PM +0200, Ciprian Dorin Craciun wrote:
>> Hello all!
>>
>> I would like to ask for some advice about the following problem
>> (related to the Deh
randomly chosen between 0 and 10;
>* the timestamp is always increasing by one;
>* the insert is done in batches of 500 thousand inserts (I've
> also tried 5, 25, 50 and 100 thousand without big impact);
>* the batch inserts are done through COPY sds_benc
On Fri, Nov 21, 2008 at 3:29 PM, Grzegorz Jaśkiewicz <[EMAIL PROTECTED]> wrote:
> see, I am afraid of the part where it says "randomly", because you probably
> used random(), which isn't the fastest thing on earth :)
I can assure you this is not the problem... The other storage
engines work qu
Thanks for your info! Please see below...
On Fri, Nov 21, 2008 at 4:14 PM, Rafael Martinez
<[EMAIL PROTECTED]> wrote:
> Ciprian Dorin Craciun wrote:
> []
>>
>> So what can I do / how could I optimize the use of Postgres for this
>> usag
On Fri, Nov 21, 2008 at 6:06 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Ciprian Dorin Craciun" <[EMAIL PROTECTED]> writes:
>> In short the data is inserted by using COPY sds_benchmark_data
>> from STDIN, in batches of 500 thousand data points.
>
> N
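
For readers skimming the excerpts, a sketch of the batched COPY being described. The real sds_benchmark_data schema is not quoted anywhere in this thread, so the table definition below is a guess based on the description (clients, sensors, a small value, a monotonically increasing timestamp); indexes and the primary key are deliberately omitted, since they are exactly what the discussion turns on:

    -- Hypothetical minimal table standing in for the real sds_benchmark_data.
    CREATE TABLE sds_benchmark_data (
        client integer,
        sensor integer,
        ts     bigint,
        value  integer
    );

    -- Each batch streams roughly 500 thousand rows through a single
    -- COPY ... FROM STDIN, terminated by "\." (two example rows shown):
    COPY sds_benchmark_data (client, sensor, ts, value) FROM STDIN CSV;
    1,1,1000001,7
    1,2,1000001,3
    \.
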
On Fri, Nov 21, 2008 at 7:12 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Ciprian Dorin Craciun" <[EMAIL PROTECTED]> writes:
>> On Fri, Nov 21, 2008 at 6:06 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>>> Not sure if it applies to your real use-case, but
On Fri, Nov 21, 2008 at 7:42 PM, Ciprian Dorin Craciun
<[EMAIL PROTECTED]> wrote:
> On Fri, Nov 21, 2008 at 7:12 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>> "Ciprian Dorin Craciun" <[EMAIL PROTECTED]> writes:
>>> On Fri, Nov 21, 2008 at 6:06 PM, Tom La
On Fri, Nov 21, 2008 at 7:45 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Fri, 21 Nov 2008, Tom Lane wrote:
>
>> Not sure if it applies to your real use-case, but if you can try doing
>> the COPY from a local file instead of across the network link, it
>> might go faster.
>
> The fact that the in
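
Spelled out, the suggestion above distinguishes a server-side COPY, which reads a file sitting on the database host itself, from data streamed over the client connection; the file path here is hypothetical:

    -- Server-side COPY: the file must exist on the database server and be
    -- readable by the server process, so the network link drops out entirely.
    COPY sds_benchmark_data (client, sensor, ts, value)
        FROM '/var/tmp/sds_batch_0001.csv' CSV;

    -- For comparison, psql's \copy reads the file on the client machine and
    -- streams it over the connection:
    --   \copy sds_benchmark_data FROM 'sds_batch_0001.csv' CSV
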
On Fri, Nov 21, 2008 at 8:41 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Fri, 21 Nov 2008, Sam Mason wrote:
>
>> It's not quite what you're asking for; but have you checked out any
>> of the databases that have resulted from the StreamSQL research?
>
> A streaming database approach is in fact id
On Fri, Nov 21, 2008 at 10:26 PM, Diego Schulz <[EMAIL PROTECTED]> wrote:
>
>
> On Fri, Nov 21, 2008 at 9:50 AM, Ciprian Dorin Craciun
> <[EMAIL PROTECTED]> wrote:
>>
>>Currently I'm benchmarking the following storage solutions for this:
>>* H
On Fri, Nov 21, 2008 at 3:12 PM, Michal Szymanski <[EMAIL PROTECTED]> wrote:
> On 21 Nov, 13:50, [EMAIL PROTECTED] ("Ciprian Dorin Craciun")
> wrote:
>> Hello all!
>>
>> I would like to ask for some advice about the following problem
>> (rela
(I'm also adding the Postgres list to this discussion.)
On Fri, Nov 21, 2008 at 11:19 PM, Dann Corbit <[EMAIL PROTECTED]> wrote:
> What is the schema for your table?
> If you are using copy rather than insert, 1K rows/sec for PostgreSQL seems
> very bad unless the table is extremely wide.
T
On Sat, Nov 22, 2008 at 8:04 PM, Shane Ambler <[EMAIL PROTECTED]> wrote:
> Ciprian Dorin Craciun wrote:
>
>>
>>I would try it if I knew that it could handle the load... Do
>> you have some info about this? Any pointers about the configuration
>> issu
On Sat, Nov 22, 2008 at 11:51 PM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On Sat, Nov 22, 2008 at 2:37 PM, Ciprian Dorin Craciun
> <[EMAIL PROTECTED]> wrote:
>>
>>Hello all!
> SNIP
>>So I would conclude that relational stores will not make it for
On Sun, Nov 23, 2008 at 1:02 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
>> The problem is, most likely, on updating the indexes. Heap inserts
>> should always take more or less the same time, but index insertion
>> requires walking down the index struct fo
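
One standard way to test that diagnosis (the excerpts do not show whether the original poster tried it exactly this way) is to load with no indexes in place and build them once at the end; the index definition below is made up:

    -- Drop whatever indexes / primary key the table carries before the load...
    DROP INDEX IF EXISTS sds_benchmark_data_key;

    -- ... run all the batched COPY loads here ...

    -- ... and pay the index-maintenance cost once, as a single bulk build,
    -- instead of on every inserted row.
    CREATE INDEX sds_benchmark_data_key
        ON sds_benchmark_data (client, sensor, ts);
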
On Sun, Nov 23, 2008 at 12:26 AM, Alvaro Herrera
<[EMAIL PROTECTED]> wrote:
> Ciprian Dorin Craciun wrote:
>
>> I've also tested Sqlite3 and it has the same behavior as
>> Postgres... Meaning at the beginning it goes really nicely (20k inserts),
>> drops to
On Sun, Nov 23, 2008 at 3:09 AM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On Sat, Nov 22, 2008 at 5:54 PM, Scara Maccai <[EMAIL PROTECTED]> wrote:
>> Since you always need the timestamp in your selects, have you tried indexing
>> only the timestamp field?
>> Your selects would be slower, but sin
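
A sketch of that suggestion: because the timestamp only ever grows, an index on the timestamp alone always appends at the right edge of the btree, so the pages being touched stay in cache; queries for a single sensor then do a range scan and filter (column names and the range are made up):

    -- Index only the monotonically increasing column.
    CREATE INDEX sds_benchmark_data_ts_idx ON sds_benchmark_data (ts);

    -- Selects get a range scan on ts plus a filter on the other columns,
    -- which is the "slower selects" trade-off mentioned above.
    SELECT value
    FROM sds_benchmark_data
    WHERE ts BETWEEN 1000000 AND 1001000
      AND client = 42
      AND sensor = 3;
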
On Sun, Nov 23, 2008 at 12:32 AM, Alvaro Herrera
<[EMAIL PROTECTED]> wrote:
>
>> On 21 Nov, 13:50, [EMAIL PROTECTED] ("Ciprian Dorin Craciun")
>> wrote:
>
>> > What have I observed / tried:
>> > * I've tested without the primary
On Sun, Nov 23, 2008 at 3:28 PM, Stephen Frost <[EMAIL PROTECTED]> wrote:
> * Ciprian Dorin Craciun ([EMAIL PROTECTED]) wrote:
>> > Even better might be partitioning on the timestamp. IF all access is
>> > in a certain timestamp range it's usually a big win, especi
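
A sketch of what partitioning on the timestamp could look like. The declarative syntax below only appeared in much later PostgreSQL releases (10 and up); at the time of this thread the same layout was built with table inheritance and CHECK constraints. The partition boundaries are made up:

    -- Range-partition on the timestamp so inserts and time-bounded queries
    -- only ever touch one comparatively small partition (and its indexes).
    CREATE TABLE sds_benchmark_data_part (
        client integer,
        sensor integer,
        ts     bigint,
        value  integer
    ) PARTITION BY RANGE (ts);

    CREATE TABLE sds_benchmark_data_p0
        PARTITION OF sds_benchmark_data_part FOR VALUES FROM (0) TO (10000000);
    CREATE TABLE sds_benchmark_data_p1
        PARTITION OF sds_benchmark_data_part FOR VALUES FROM (10000000) TO (20000000);
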
n
> be removed -- and the application would then need to guarantee
> correctness
Nope, no triggers or constraints (other than not null).
> VSP
>
>
> On Sun, 23 Nov 2008 08:34:57 +0200, "Ciprian Dorin Craciun"
> <[EMAIL PROTECTED]> said:
>> On Sun,
On Mon, Nov 24, 2008 at 3:42 AM, marcin mank <[EMAIL PROTECTED]> wrote:
>>Yes, the figures are like this:
>>* average number of raw inserts / second (without any optimization
>> or previous aggregation): #clients (~ 100 thousand) * #sensors (~ 10)
>> / 6 seconds = 166 thousand inserts / seco
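(For reference, the arithmetic behind that figure: ~100 thousand clients * ~10 sensors, each reporting once every 6 seconds, gives 100,000 * 10 / 6 = roughly 166,667, i.e. the quoted 166 thousand inserts per second.)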