On Thursday 04 May 2006 23:06, Jim C. Nasby wrote:
>On Thu, May 04, 2006 at 10:58:24PM +0200, Leif B. Kristensen wrote:
>> On Thursday 04 May 2006 22:30, Jim C. Nasby wrote:
>> >I believe transactions are limited to 4B commands, so the answer
>> > would be 4B rows.
>>
>> That is definitely not the ...
On Thu, 2006-05-04 at 16:06, Jim C. Nasby wrote:
> On Thu, May 04, 2006 at 10:58:24PM +0200, Leif B. Kristensen wrote:
> > I know that there is one hard-wired limit due to the OID wrap-around
> > problem, at 2^31 commands in one transaction. But the practical limit
> > due to hardware resources ...
On Thu, May 04, 2006 at 10:58:24PM +0200, Leif B. Kristensen wrote:
> On Thursday 04 May 2006 22:30, Jim C. Nasby wrote:
> >On Wed, May 03, 2006 at 04:28:10PM +0200, Leif B. Kristensen wrote:
> >> However, I'm wondering if there's a practical limit to how many rows
> >> you can insert within one transaction?
On Thursday 04 May 2006 22:30, Jim C. Nasby wrote:
>On Wed, May 03, 2006 at 04:28:10PM +0200, Leif B. Kristensen wrote:
>> However, I'm wondering if there's a practical limit to how many rows
>> you can insert within one transaction?
>
>I believe transactions are limited to 4B commands, so the answer would be 4B rows.
On Wed, May 03, 2006 at 04:11:36PM +0200, Javier de la Torre wrote:
> It is inserts.
>
> I create the inserts myself with a Python program I have created to
> migrate MySQL databases to PostgreSQL (by the way, if someone wants
> it...)
Have you looked at http://pgfoundry.org/projects/my2postgres
On Wed, May 03, 2006 at 04:43:15PM +0200, Javier de la Torre wrote:
> Yes,
>
> Thanks. I am doing this now...
>
> It is definitely faster, but I will also discover now if there is a
> limit on the transaction side... I am going to try to insert into one
> single transaction 60 million records in a table.
On Wed, May 03, 2006 at 04:28:10PM +0200, Leif B. Kristensen wrote:
> However, I'm wondering if there's a practical limit to how many rows you
> can insert within one transaction?
I believe transactions are limited to 4B commands, so the answer would
be 4B rows.
--
Jim C. Nasby, Sr. Engineering ...
Javier de la Torre wrote:
Great! Then there will be no problems.
I would use COPY but I think I cannot. While moving from MySQL to
PostgreSQL I am also transforming a pair of fields, latitude and
longitude, into a geometry field, POINT, that is understood by
PostGIS. I thought I would not be able to use COPY when inserting data ...
Martijn van Oosterhout writes:
>> However, I'm wondering if there's a practical limit to how many rows you
>> can insert within one transaction?
> There's a limit of (I think) 2-4 billion commands per transaction. Each
> command can insert any number of tuples.
> So if you're doing one tuple per command ...
Great! Then there will be no problems.
I would use COPY but I think I cannot. While moving from MySQL to
PostgreSQL I am also transforming a pair of fields, latitude and
longitude, into a geometry field, POINT, that is understood by
PostGIS. I thought I would not be able to use COPY when inserting data ...
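[Editor's note: the transformation does not actually rule COPY out. The PostGIS geometry type's input function accepts (E)WKT text, so the migration script can emit the POINT values directly in the COPY stream. A minimal sketch in Python, assuming psycopg2 as the driver and a hypothetical places(name, geom) table; neither is named in the thread:]

    import io
    import psycopg2

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()

    # Sample rows; a real migration would stream these out of MySQL.
    rows = [("Oslo", 59.91, 10.75), ("Madrid", 40.42, -3.70)]

    # Build tab-separated COPY input, transforming latitude/longitude
    # into EWKT that the geometry column can parse (note lon/lat order).
    buf = io.StringIO()
    for name, lat, lon in rows:
        buf.write("%s\tSRID=4326;POINT(%f %f)\n" % (name, lon, lat))
    buf.seek(0)

    # One COPY command loads every row at once.
    cur.copy_from(buf, "places", columns=("name", "geom"))
    conn.commit()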
On Wed, May 03, 2006 at 04:28:10PM +0200, Leif B. Kristensen wrote:
> However, I'm wondering if there's a practical limit to how many rows you
> can insert within one transaction?
There's a limit of (I think) 2-4 billion commands per transaction. Each
command can insert any number of tuples.
So if you're doing one tuple per command ...
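[Editor's note: the command/tuple distinction is what makes the limit generous. A single INSERT ... SELECT consumes one command ID no matter how many rows it writes. A minimal sketch, assuming psycopg2 and hypothetical tables archive and archive_copy:]

    import psycopg2

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()

    # One command as far as the per-transaction command counter is
    # concerned, even if it inserts millions of tuples.
    cur.execute("INSERT INTO archive_copy SELECT * FROM archive")
    conn.commit()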
Javier de la Torre wrote:
> Yes,
>
> Thanks. I am doing this now...
>
> It is definitely faster, but I will also discover now if there is a
> limit on the transaction side... I am going to try to insert into one
> single transaction 60 million records in a table.
>
> In any case I still don't understand why PostgreSQL was not taking resources ...
Yes,
Thanks. I am doing this now...
It is definitely faster, but I will also discover now if there is a
limit on the transaction side... I am going to try to insert into one
single transaction 60 million records in a table.
In any case I still don't understand why PostgreSQL was not taking
resources ...
On Wednesday 03 May 2006 16:12, Larry Rosenman wrote:
>Javier de la Torre wrote:
>> It is inserts.
>>
>> I create the inserts myself with a Python program I have created
>> to migrate MySQL databases to PostgreSQL (by the way, if someone wants
>> it...)
>
>Ok, that makes *EACH* insert a transaction, with all the overhead.
Javier de la Torre wrote:
> It is inserts.
>
> I create the inserts myself with a Python program I have created to
> migrate MySQL databases to PostgreSQL (by the way, if someone wants
> it...)
Ok, that makes *EACH* insert a transaction, with all the overhead.
You need to batch the inserts between BEGIN and COMMIT pairs, as
sketched below.
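[Editor's note: in autocommit mode every INSERT carries its own commit, and therefore its own WAL flush. Wrapping the batch in one transaction amortizes that cost. A minimal sketch, assuming psycopg2 (which opens a transaction implicitly at the first statement and holds it until commit()) and the same hypothetical places table as above:]

    import psycopg2

    # Sample rows; a real migration would stream these out of MySQL.
    rows = [("Oslo", 59.91, 10.75), ("Madrid", 40.42, -3.70)]

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()

    # All of these inserts share a single commit instead of paying
    # one commit (and one WAL flush) per row.
    for name, lat, lon in rows:
        ewkt = "SRID=4326;POINT(%f %f)" % (lon, lat)  # lon/lat order
        cur.execute("INSERT INTO places (name, geom) VALUES (%s, %s)",
                    (name, ewkt))
    conn.commit()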
It is inserts.
I create the inserts myself with a Python program I have created to
migrate MySQL databases to PostgreSQL (by the way, if someone wants
it...)
Thanks.
Javier.
On 5/3/06, Larry Rosenman <[EMAIL PROTECTED]> wrote:
Javier de la Torre wrote:
> Hi all,
>
> I've been searching around for an answer to this ...
Javier de la Torre wrote:
> Hi all,
>
> I've been searching around for an answer to this, but I couldn't find
> anything. So here we go.
>
> I am running PostgreSQL 8.1.3 on Red Hat on an Intel server with 2GB
> of RAM and lots of free HD space.
>
> I have a very large dump file, more than 4GB, to recreate a database.
Hi all,
I've been searching around for an answer to this, but I couldn't find
anything. So here we go.
I am running PostgreSQL 8.1.3 on Red Hat on an Intel server with 2GB
of RAM and lots of free HD space.
I have a very large dump file, more than 4GB, to recreate a database.
When I run:
psql -U ...