I'm wrapping about 1500 inserts in a transaction block. Since
it's an I/O bottleneck, COPY statements might not give me much advantage.
It's definitely a work in progress :)
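For reference, one batched chunk of a load file looks roughly like this (the table and column names are placeholders, not the real schema):

-- one chunk of the load file: ~1500 inserts per transaction
BEGIN;
INSERT INTO staging_table (id, payload) VALUES (1, 'row 1');
INSERT INTO staging_table (id, payload) VALUES (2, 'row 2');
-- ... about 1500 inserts ...
COMMIT;

and the COPY form of the same rows, in case it does end up helping:

-- same data as a single CSV COPY, terminated by \.
COPY staging_table (id, payload) FROM STDIN WITH CSV;
1,row 1
2,row 2
\.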
Ben
On 09/12/2009 5:31 AM, Andy Colson wrote:
On 12/07/2009 12:12 PM, Ben Brehmer wrote:
Hello All,
I'm in the process of loading a massive amount of data (500 GB).
By 'input connection' I mean "psql -U postgres -d dbname
-f one_of_many_sql_files".
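In script form it's essentially this, one psql connection per file (the paths and database name are placeholders):

# feed each SQL file through its own psql connection
for f in /path/to/sql_files/*.sql; do
    psql -U postgres -d dbname -f "$f"
done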
Thanks,
Ben
On 07/12/2009 12:59 PM, Greg Smith wrote:
Ben Brehmer wrote:
By "Loading data" I am implying: "psql -U postgres -d somedatabase -f
sql_file.sql". The sql_file.sql conta
here
were any disk options in Amazon?
Thanks!
Ben
On 07/12/2009 10:39 AM, Thom Brown wrote:
2009/12/7 Kevin Grittner <kevin.gritt...@wicourts.gov>
Ben Brehmer <benbreh...@gmail.com> wrote:
> -7.5 GB memory
> -4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each)
Kevin,
This is running on x86_64-unknown-linux-gnu, compiled by GCC gcc
(GCC) 4.1.2 20080704 (Red Hat 4.1.2-44)
Ben
On 07/12/2009 10:33 AM, Kevin Grittner wrote:
Ben Brehmer wrote:
-7.5 GB memory
-4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units
each)
-64-bit
Hello All,
I'm in the process of loading a massive amount of data (500 GB). After
some initial timings, I'm looking at 260 hours to load the entire 500GB.
10 days seems like an awfully long time so I'm searching for ways to
speed this up. The load is happening in the Amazon cloud (EC2), on a
machine with:
-7.5 GB memory
-4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each)
-64-bit
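For scale, 260 hours for 500 GB works out to roughly 500 GB / 260 h ≈ 1.9 GB/h, or only about 0.5 MB/s of sustained load rate.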