Thanks for your input!
On Thu, 2015-09-17 at 11:21 -0300, Matheus de Oliveira wrote:
>
> On Thu, Sep 17, 2015 at 9:19 AM, Eildert Groeneveld <
> eildert.groenev...@fli.bund.de> wrote:
> > > * one COPY per bulk (20 000 rows)
> > COPY does not fit so well, as it
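One way to do "one COPY per bulk (20 000 rows)" from a client program is to stream each batch in COPY text format. A minimal sketch, assuming Python with psycopg2 and a table named snp_data (the table and column names are illustrative assumptions, not from the thread); only the buffer-building part is shown runnable:

```python
import io

def rows_to_copy_buffer(rows):
    """Format row tuples as PostgreSQL COPY text format:
    tab-separated fields, \\N for NULL, one row per line.
    The returned buffer can be streamed with psycopg2's copy_from()."""
    buf = io.StringIO()
    for row in rows:
        fields = ("\\N" if v is None else str(v) for v in row)
        buf.write("\t".join(fields) + "\n")
    buf.seek(0)
    return buf

# Hypothetical usage against a live connection (names are assumptions):
# with conn.cursor() as cur:
#     cur.copy_from(rows_to_copy_buffer(batch), "snp_data",
#                   columns=("animal_id", "snp_index", "genotype"))
# conn.commit()
```

Batching 20 000 rows per COPY keeps client memory bounded while still amortizing the per-statement overhead that makes row-at-a-time INSERTs slow.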
On Thu, 2015-09-17 at 14:11 +0200, Ladislav Lenart wrote:
> On 17.9.2015 13:32, Eildert Groeneveld wrote:
> > Dear list
> >
> > I am experiencing a rather severe degradation of insert performance
> > starting from an empty database:
Dear list
I am experiencing a rather severe degradation of insert performance
starting from an empty database:
120.000 mio SNPs imported in 28.9 sec - 4.16 mio/sec
120.000 mio SNPs imported in 40.9 sec - 2.93 mio/sec
120.000 mio SNPs imported in 49
On Tue, 2014-02-11 at 18:58 -0200, Claudio Freire wrote:
> On Tue, Feb 11, 2014 at 5:54 PM, Eildert Groeneveld
> wrote:
> > Dear All
> >
> > this is probably not the best list to post this question:
> >
> > I use cascading deletes but would like to first inform
Dear All
this is probably not the best list to post this question:
I use cascading deletes but would like to first inform the user what she
is about to do.
Something like: explain delete from PANEL where panel_id=21;
-- you are about to delete 32144 records in tables abc aaa wewew
This is clearly
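The preview asked for above can be approximated by counting, before the DELETE, the rows each cascading foreign key would remove. A sketch that only generates the COUNT queries; in PostgreSQL the child-table list would come from pg_constraint or information_schema (not shown), and all names here are hypothetical:

```python
def cascade_preview_sql(key_val, fk_children):
    """Build one COUNT query per child table that references the parent
    with ON DELETE CASCADE. `fk_children` is a list of
    (child_table, fk_column) pairs; only one cascade level is covered --
    nested cascades would need recursion over the FK graph."""
    queries = []
    for child, fk_col in fk_children:
        queries.append(
            f"SELECT '{child}' AS tbl, count(*) FROM {child} "
            f"WHERE {fk_col} = {key_val}"
        )
    return queries
```

An alternative that needs no FK introspection: run the DELETE inside a transaction, read the affected row counts, show them to the user, and ROLLBACK unless she confirms.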
On Mon, 2012-11-12 at 12:18 +0100, Albe Laurenz wrote:
> Eildert Groeneveld wrote:
> > I am currently implementing a compressed binary storage scheme for
> > genotyping data. These are basically vectors of binary data which may be
> > megabytes in size.
no problem. Maybe there is someone here who knows the PG
internals sufficiently well to give advice on how big blocks of memory
(i.e. bit varying records) can be transferred UNALTERED between
backend and clients.
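On the client side, keeping such vectors unaltered usually means storing them as bytea and requesting binary result format (e.g. libpq's PQexecParams with resultFormat = 1, or a driver's binary/bytes type), so no text escaping touches the payload. A sketch of one possible packing scheme at 2 bits per SNP; the encoding itself is an assumption for illustration, not the poster's actual scheme:

```python
def pack_genotypes(genotypes):
    """Pack genotype codes (0-3, i.e. 2 bits per SNP) four per byte,
    the kind of compressed layout a bit-varying/bytea column can hold."""
    packed = bytearray()
    for i, g in enumerate(genotypes):
        if i % 4 == 0:
            packed.append(0)
        packed[-1] |= (g & 0b11) << (2 * (i % 4))
    return bytes(packed)

def unpack_genotypes(data, n):
    """Recover n genotype codes from the packed bytes."""
    return [(data[i // 4] >> (2 * (i % 4))) & 0b11 for i in range(n)]
```

At 2 bits per SNP, a 120-million-SNP vector packs into about 30 MB, so avoiding any text-escape expansion on the wire matters.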
Looking forward to your response.
greetings
Eildert
--
Eildert Groeneveld