On Mon, Nov 24, 2008 at 3:42 AM, marcin mank <[EMAIL PROTECTED]> wrote:
>>Yes, the figures are like this:
>>* average number of raw inserts / second (without any optimization
>> or previous aggregation): #clients (~ 100 thousand) * #sensors (~ 10)
>> / 6 seconds = 166 thousand inserts / second...
>Yes, the figures are like this:
>* average number of raw inserts / second (without any optimization
> or previous aggregation): #clients (~ 100 thousand) * #sensors (~ 10)
> / 6 seconds = 166 thousand inserts / second...
Is this the average?
166,000 * 20 bytes per record * 86,400 seconds per day ≈ 287 GB per day...
On Sun, Nov 23, 2008 at 10:01 AM, Ciprian Dorin Craciun
<[EMAIL PROTECTED]> wrote:
> On Sun, Nov 23, 2008 at 3:28 PM, Stephen Frost <[EMAIL PROTECTED]> wrote:
>> * Ciprian Dorin Craciun ([EMAIL PROTECTED]) wrote:
>>> > Even better might be partitioning on the timestamp. If all access is
>>> > in a certain timestamp range it's usually a big win...
> If you watch the speed, you'll see that the insert speed is the same,
> but the scan speed is worse (from 32k to 200).
As I said, I don't know a lot about these things.
But I would like someone to comment on this (so that maybe I will know
something!):
1) I thought the poor insert performance...
Thanks for your info! Please see my observations below.
By the way, we are planning to also try Informix (the time series
extension?)... Do you have some other tips about Informix?
Ciprian Craciun.
On Sun, Nov 23, 2008 at 6:06 PM, V S P <[EMAIL PROTECTED]> wrote:
> While most of my experience is with Oracle/Informix...
On Sun, Nov 23, 2008 at 3:28 PM, Stephen Frost <[EMAIL PROTECTED]> wrote:
> * Ciprian Dorin Craciun ([EMAIL PROTECTED]) wrote:
>> > Even better might be partitioning on the timestamp. IF all access is
>> > in a certain timestamp range it's usually a big win, especially
>> > because he can move to a new table every hour / day / week or
>> > whatever and merge the old one into...
While most of my experience is with Oracle/Informix, I would also
recommend:
a) partitioning on the DB level (see the tablespace sketch after this list).
Put partitions on separate hard disks, make the system at least
dual core, and attach the disks via a SCSI controller (not IDE)
for parallel performance.
b) partitioning on a...
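A minimal sketch of point (a) in PostgreSQL terms, assuming two
hypothetical mount points /disk1 and /disk2 for the separate spindles
(table and column names are taken from elsewhere in this thread):

    CREATE TABLESPACE disk1 LOCATION '/disk1/pgdata';
    CREATE TABLESPACE disk2 LOCATION '/disk2/pgdata';

    -- Keep table and index I/O on different disks.
    CREATE TABLE sds_benchmark_data_part1 (LIKE sds_benchmark_data)
        TABLESPACE disk1;
    CREATE INDEX sds_benchmark_data_part1_idx
        ON sds_benchmark_data_part1 (clientid, sensor, "timestamp")
        TABLESPACE disk2;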
> But unfortunately the query speed is not good at all because most
> queries are for a specific client (and sensor) in a given time
> range...
Maybe I'm wrong, I don't know a lot about these things; but defining the index as
(timestamp, clientid, sensor) instead of (clientid, sensor, timestamp)...
* Ciprian Dorin Craciun ([EMAIL PROTECTED]) wrote:
> > Even better might be partitioning on the timestamp. If all access is
> > in a certain timestamp range it's usually a big win, especially
> > because he can move to a new table every hour / day / week or whatever
> > and merge the old one into...
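As a sketch of what this could look like on the PostgreSQL 8.x of the
time (inheritance plus constraint exclusion; the table and column names
are assumptions based on the rest of the thread):

    -- Parent table holds no data; one child per day.
    CREATE TABLE sds_benchmark_data (
        clientid    integer   NOT NULL,
        sensor      integer   NOT NULL,
        "timestamp" timestamp NOT NULL,
        value       integer   NOT NULL
    );

    CREATE TABLE sds_benchmark_data_20081124 (
        CHECK ("timestamp" >= '2008-11-24' AND "timestamp" < '2008-11-25')
    ) INHERITS (sds_benchmark_data);

    -- The loader COPYs straight into the current child, so only that
    -- child's index is hot; old children can be merged or dropped cheaply.
    CREATE INDEX sds_benchmark_data_20081124_idx
        ON sds_benchmark_data_20081124 (clientid, sensor, "timestamp");

    -- With constraint exclusion, range queries touch only matching children.
    SET constraint_exclusion = on;
    SELECT * FROM sds_benchmark_data
     WHERE "timestamp" >= '2008-11-24 10:00'
       AND "timestamp" <  '2008-11-24 11:00';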
On Sun, Nov 23, 2008 at 12:32 AM, Alvaro Herrera
<[EMAIL PROTECTED]> wrote:
>
>> On 21 Nov, 13:50, [EMAIL PROTECTED] ("Ciprian Dorin Craciun")
>> wrote:
>
>> > What have I observed / tried:
>> > * I've tested without the primary key and the index, and the
>> > results were the best for inserts (600k inserts / s), but the
>> > readings worked extremely slow (due to the lack of an index)...
On Sun, Nov 23, 2008 at 3:09 AM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On Sat, Nov 22, 2008 at 5:54 PM, Scara Maccai <[EMAIL PROTECTED]> wrote:
>> Since you always need the timestamp in your selects, have you tried indexing
>> only the timestamp field?
>> Your selects would be slower, but since client and sensor don't have
>> that many distinct values compared to the number of rows you are
>> inserting maybe the difference in selects would not be that big...
On Sun, Nov 23, 2008 at 12:26 AM, Alvaro Herrera
<[EMAIL PROTECTED]> wrote:
> Ciprian Dorin Craciun wrote:
>
>> I've tested also Sqlite3 and it has the same behavior as
>> Postgres... Meaning at beginning it goes really nice 20k inserts,
>> drops to about 10k inserts, but after a few million records, the
>> HDD LED starts to blink non-stop, and then it drops to under 1k...
On Sun, Nov 23, 2008 at 1:02 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
>> The problem is, most likely, on updating the indexes. Heap inserts
>> should always take more or less the same time, but index insertion
>> requires walking down the index structure for each insert, and the
>> path to walk gets larger the more data you have.
On Sat, Nov 22, 2008 at 5:54 PM, Scara Maccai <[EMAIL PROTECTED]> wrote:
> Since you always need the timestamp in your selects, have you tried indexing
> only the timestamp field?
> Your selects would be slower, but since client and sensor don't have that
> many distinct values compared to the number of rows you are inserting
> maybe the difference in selects would not be that big...
Since you always need the timestamp in your selects, have you tried indexing
only the timestamp field?
Your selects would be slower, but since client and sensor don't have that many
distinct values compared to the number of rows you are inserting maybe the
difference in selects would not be that big...
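Spelled out, the two strategies being compared are roughly these
(column names assumed from earlier in the thread):

    -- Original composite index: precise lookups, but large, so every
    -- insert pays a deeper/wider index maintenance cost.
    CREATE INDEX sds_data_full_idx
        ON sds_benchmark_data (clientid, sensor, "timestamp");

    -- Narrow alternative: index only the timestamp. Inserts maintain
    -- less index data; selects scan the matching time range and filter
    -- clientid/sensor from the heap rows.
    CREATE INDEX sds_data_ts_idx
        ON sds_benchmark_data ("timestamp");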
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> The problem is, most likely, on updating the indexes. Heap inserts
> should always take more or less the same time, but index insertion
> requires walking down the index structure for each insert, and the path to
> walk gets larger the more data you have.
> On 21 Nov, 13:50, [EMAIL PROTECTED] ("Ciprian Dorin Craciun")
> wrote:
> > What have I observed / tried:
> > * I've tested without the primary key and the index, and the
> > results were the best for inserts (600k inserts / s), but the
> > readings worked extremely slow (due to the lack of an index)...
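This is also why the classic bulk-load trick helps: build the index
once at the end instead of descending it for every row. A sketch,
assuming the load can run without the index (the file path is
hypothetical):

    DROP INDEX IF EXISTS sds_data_full_idx;

    COPY sds_benchmark_data FROM '/var/tmp/sensor_batch.csv' WITH CSV;

    -- One sort-based index build instead of millions of descents.
    CREATE INDEX sds_data_full_idx
        ON sds_benchmark_data (clientid, sensor, "timestamp");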
Ciprian Dorin Craciun wrote:
> I've tested also Sqlite3 and it has the same behavior as
> Postgres... Meaning at beginning it goes really nice 20k inserts,
> drops to about 10k inserts, but after a few million records, the HDD
> LED starts to blink non-stop, and then it drops to under 1k...
On Sat, Nov 22, 2008 at 4:54 PM, Ciprian Dorin Craciun
<[EMAIL PROTECTED]> wrote:
> On Sat, Nov 22, 2008 at 11:51 PM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
>> On Sat, Nov 22, 2008 at 2:37 PM, Ciprian Dorin Craciun
>> <[EMAIL PROTECTED]> wrote:
>>>
>>>Hello all!
>> SNIP
>>>So I would conclude that relational stores will not make it for
>>> this use case...
On Sat, Nov 22, 2008 at 11:51 PM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On Sat, Nov 22, 2008 at 2:37 PM, Ciprian Dorin Craciun
> <[EMAIL PROTECTED]> wrote:
>>
>>Hello all!
> SNIP
>>So I would conclude that relational stores will not make it for
>> this use case...
>
> I was wondering if you guys are having to do all individual inserts
> or if you can batch some number together...
On Sat, Nov 22, 2008 at 2:37 PM, Ciprian Dorin Craciun
<[EMAIL PROTECTED]> wrote:
>
>Hello all!
SNIP
>So I would conclude that relational stores will not make it for
> this use case...
I was wondering if you guys are having to do all individual inserts or
if you can batch some number together...
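For reference, batching could look like this (multi-row VALUES needs
PostgreSQL 8.2+; the table layout is assumed). The original poster is
in fact already batching via COPY, 500 thousand rows at a time, which
is the faster path:

    -- Many rows per statement and per transaction,
    -- instead of one INSERT per data point.
    BEGIN;
    INSERT INTO sds_benchmark_data (clientid, sensor, "timestamp", value)
    VALUES
        (1, 1, '2008-11-21 13:50:00', 42),
        (1, 2, '2008-11-21 13:50:00', 17),
        (2, 1, '2008-11-21 13:50:00', 99);
    COMMIT;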
On Sat, Nov 22, 2008 at 8:04 PM, Shane Ambler <[EMAIL PROTECTED]> wrote:
> Ciprian Dorin Craciun wrote:
>
>>
>>I would try it if I knew that it could handle the load... Do
>> you have some info about this? Any pointers about the configuration
>> issues?
>>
>>Ciprian.
>>
>
>
> Apart from the configure options at build time you should read -
> http://www.sqlite.org/pragma.html ...
Ciprian Dorin Craciun wrote:
I would try it if I knew that it could handle the load... Do
you have some info about this? Any pointers about the configuration
issues?
Ciprian.
Apart from the configure options at build time you should read -
http://www.sqlite.org/pragma.html
I...
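Of the pragmas on that page, the ones most relevant to sustained insert
speed are presumably these (a sketch; both of the first two trade crash
safety for throughput):

    PRAGMA synchronous = OFF;      -- don't fsync on every commit
    PRAGMA journal_mode = MEMORY;  -- keep the rollback journal off disk
    PRAGMA cache_size = 100000;    -- bigger page cache for the growing index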
On Fri, Nov 21, 2008 at 3:12 PM, Michal Szymanski <[EMAIL PROTECTED]> wrote:
> On 21 Nov, 13:50, [EMAIL PROTECTED] ("Ciprian Dorin Craciun")
> wrote:
>> Hello all!
>>
>> I would like to ask some advice about the following problem
>> (related to the Dehems project: http://www.dehems.eu/):
>>
On 21 Nov, 13:50, [EMAIL PROTECTED] ("Ciprian Dorin Craciun")
wrote:
> Hello all!
>
> I would like to ask some advice about the following problem
> (related to the Dehems project: http://www.dehems.eu/):
> * there are some clients; (the clients are in fact households;)
> * each device...
On Fri, Nov 21, 2008 at 10:26 PM, Diego Schulz <[EMAIL PROTECTED]> wrote:
>
>
> On Fri, Nov 21, 2008 at 9:50 AM, Ciprian Dorin Craciun
> <[EMAIL PROTECTED]> wrote:
>>
>>Currently I'm benchmarking the following storage solutions for this:
>>* Hypertable (http://www.hypertable.org/) -- which has good insert
>> rate (about 250k inserts / s), but slow read rate (about 150k reads / s)...
On Fri, Nov 21, 2008 at 9:50 AM, Ciprian Dorin Craciun <
[EMAIL PROTECTED]> wrote:
>
>Currently I'm benchmarking the following storage solutions for this:
>* Hypertable (http://www.hypertable.org/) -- which has good insert
> rate (about 250k inserts / s), but slow read rate (about 150k reads / s)...
On Fri, Nov 21, 2008 at 8:41 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Fri, 21 Nov 2008, Sam Mason wrote:
>
>> It's not quite what you're asking for; but have you checked out any
>> of the databases that have resulted from the StreamSQL research?
>
> A streaming database approach is in fact ideally suited to handling
> this particular problem...
On Fri, 21 Nov 2008, Sam Mason wrote:
It's not quite what you're asking for; but have you checked out any
of the databases that have resulted from the StreamSQL research?
A streaming database approach is in fact ideally suited to handling this
particular problem. Looking at the original requirements...
On Fri, Nov 21, 2008 at 7:45 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Fri, 21 Nov 2008, Tom Lane wrote:
>
>> Not sure if it applies to your real use-case, but if you can try doing
>> the COPY from a local file instead of across the network link, it
>> might go faster.
>
> The fact that the inserts are reported as fast initially but slow as
> the table and index size grow means...
On Fri, Nov 21, 2008 at 7:42 PM, Ciprian Dorin Craciun
<[EMAIL PROTECTED]> wrote:
> On Fri, Nov 21, 2008 at 7:12 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>> "Ciprian Dorin Craciun" <[EMAIL PROTECTED]> writes:
>>> On Fri, Nov 21, 2008 at 6:06 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
Not sure if it applies to your real use-case...
On Fri, 21 Nov 2008, Tom Lane wrote:
Not sure if it applies to your real use-case, but if you can try doing
the COPY from a local file instead of across the network link, it
might go faster.
The fact that the inserts are reported as fast initially but slow as the
table and index size grow means...
On Fri, Nov 21, 2008 at 7:12 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Ciprian Dorin Craciun" <[EMAIL PROTECTED]> writes:
>> On Fri, Nov 21, 2008 at 6:06 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>>> Not sure if it applies to your real use-case, but if you can try doing
>>> the COPY from a local file instead of across the network link, it
>>> might go faster...
"Ciprian Dorin Craciun" <[EMAIL PROTECTED]> writes:
> On Fri, Nov 21, 2008 at 6:06 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>> Not sure if it applies to your real use-case, but if you can try doing
>> the COPY from a local file instead of across the network link, it
>> might go faster. Also, as already...
On Fri, Nov 21, 2008 at 6:06 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Ciprian Dorin Craciun" <[EMAIL PROTECTED]> writes:
>> In short the data is inserted by using COPY sds_benchmark_data
>> from STDIN, in batches of 500 thousand data points.
>
> Not sure if it applies to your real use-case, but if you can try
> doing the COPY from a local file instead of across the network link,
> it might go faster...
Thanks for your info! Please see below...
On Fri, Nov 21, 2008 at 4:14 PM, Rafael Martinez
<[EMAIL PROTECTED]> wrote:
> Ciprian Dorin Craciun wrote:
> []
>>
>> So what can I do / how could I optimize the use of Postgres for this
>> usage?
>>
>
> Hello, here are some comments that will probably help you get more
> from this test machine...
"Ciprian Dorin Craciun" <[EMAIL PROTECTED]> writes:
> In short the data is inserted by using COPY sds_benchmark_data
> from STDIN, in batches of 500 thousand data points.
Not sure if it applies to your real use-case, but if you can try doing
the COPY from a local file instead of across the network link, it
might go faster...
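Concretely, the difference is between the server-side and client-side
forms (the file path is hypothetical):

    -- Server-side COPY: the backend reads the file straight from local
    -- disk, with no network round trips.
    COPY sds_benchmark_data FROM '/var/tmp/sensor_batch.csv' WITH CSV;

    -- Client-side psql equivalent, streamed over the connection:
    -- \copy sds_benchmark_data from '/var/tmp/sensor_batch.csv' with csv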
You might want to look into how OpenNMS uses RRDTool. It is able to handle
a huge number of nodes by queuing inserts into the RRDs and using JRobin.
I'm not sure if it is a great solution for what you are looking for, but
I've found its performance scales quite well. I'm getting well over 500
updates...
On Fri, Nov 21, 2008 at 3:29 PM, Grzegorz Jaśkiewicz <[EMAIL PROTECTED]> wrote:
> see, I am afraid of the part where it says "randomly", because you probably
> used random(), which isn't the fastest thing on earth :)
I can assure you this is not the problem... The other storage
engines work quite...
On Fri, Nov 21, 2008 at 02:50:45PM +0200, Ciprian Dorin Craciun wrote:
> Currently I'm benchmarking the following storage solutions for this:
> * Hypertable (http://www.hypertable.org/) -- which has good insert
> rate (about 250k inserts / s), but slow read rate (about 150k reads /
> s); (t
Ciprian Dorin Craciun wrote:
[]
>
> So what can I do / how could I optimize the use of Postgres for this
> usage?
>
Hello, here are some comments that will probably help you get more
from this test machine...
>
> * test machine: Linux (Ubuntu 8.04 x64), IBM x37
see, I am afraid of the part where it says "randomly", because you probably
used random(), which isn't the fastest thing on earth :)
On Fri, Nov 21, 2008 at 3:18 PM, Grzegorz Jaśkiewicz <[EMAIL PROTECTED]> wrote:
> you'll have to provide us with some sort of test-case to get some answers,
> please. (set of scripts, queries, etc).
Below is the content of my original post. In it I mention
exactly the way the benchmark was conducted...
you'll have to provide us with some sort of test-case to get some answers,
please. (set of scripts, queries, etc).
On Fri, Nov 21, 2008 at 2:55 PM, Gerhard Heift
<[EMAIL PROTECTED]> wrote:
> On Fri, Nov 21, 2008 at 02:50:45PM +0200, Ciprian Dorin Craciun wrote:
>> Hello all!
>>
>> I would like to ask some advice about the following problem
>> (related to the Dehems project: http://www.dehems.eu/ ):
>>
On Fri, Nov 21, 2008 at 02:50:45PM +0200, Ciprian Dorin Craciun wrote:
> Hello all!
>
> I would like to ask some advice about the following problem
> (related to the Dehems project: http://www.dehems.eu/ ):
> * there are some clients; (the clients are in fact households;)
> * each...