> Well, what problem exactly are you trying to solve?
> Having large tables itself isn't a problem, but it often
> tends to imply other things that might be problematic:
I'm trying to troubleshoot a very low cache hit rate as returned by this query:
SELECT sum(heap_blks_read) as heap_read,
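The rest of the statement isn't shown above, but a common form of this cache-hit query reads from pg_statio_user_tables; a hedged completion along those lines might look like:

SELECT sum(heap_blks_read) AS heap_read,
       sum(heap_blks_hit)  AS heap_hit,
       sum(heap_blks_hit)::float
         / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) AS hit_ratio
FROM pg_statio_user_tables;
-- hit_ratio values around 0.99 are the usual rule of thumb for a well-cached workload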
On 03/11/2013 10:46 AM, Tom Lane wrote:
Steve Erickson writes:
Thanks for the reply. I deleted all rows in pg_statistic and the VACUUM
ANALYZE on pg_attribute still fails.
You would probably need to actually truncate pg_statistic (in a
standalone backend) to make that safe. If you only wan
On Mar 9, 2013, at 11:54 AM, Paul Jungwirth wrote:
> Hello,
>
> I'm running a specialized search engine that indexes a few tens of millions
> of web pages, keeping everything in Postgres, and one problem I'm starting to
> see is poor cache hit rates. My database has two or three tables just for
I'm in an unfortunate position of needing to add a unique, not null index to a
very large table with heavy churn. Without having much impact, I can add a NULL
column that reads default values from a sequence for new rows, and then do
batch updates over time to fill in the old values but then
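A hedged sketch of that approach (the table and column names here are placeholders, not from the original post):

CREATE SEQUENCE big_table_new_id_seq;

-- Adding the column without a default is cheap (no table rewrite); setting the
-- default afterwards only affects newly inserted rows.
ALTER TABLE big_table ADD COLUMN new_id bigint;
ALTER TABLE big_table ALTER COLUMN new_id
  SET DEFAULT nextval('big_table_new_id_seq');

-- Backfill existing rows in small batches (repeated) to keep locks and bloat manageable.
UPDATE big_table
   SET new_id = nextval('big_table_new_id_seq')
 WHERE id IN (SELECT id FROM big_table WHERE new_id IS NULL LIMIT 10000);

-- Once backfilled: build the unique index without blocking writes, then add the
-- NOT NULL constraint (which still needs a full-table scan under an exclusive
-- lock, presumably the "but then" above).
CREATE UNIQUE INDEX CONCURRENTLY big_table_new_id_idx ON big_table (new_id);
ALTER TABLE big_table ALTER COLUMN new_id SET NOT NULL;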
Balázs Keresztury writes:
> hi,
> I have a postgres 8.4 on a Ubuntu box at my workplace, and I plan to
> implement continuous backup with the WAL
> technique. This backup is mainly used to provide the usual backup for the
> company, but I would also like to use it as
> an almost always up-to-da
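For reference, the SQL side of the 8.4-era base backup procedure is just two function calls around a file-level copy; this sketch assumes archive_mode and archive_command are already configured in postgresql.conf:

-- The label is arbitrary; the data-directory copy itself is done with an
-- external tool (rsync, tar, ...) while the backup is "open".
SELECT pg_start_backup('base_backup_for_standby');
-- ... copy $PGDATA to the backup destination ...
SELECT pg_stop_backup();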
Steve Erickson writes:
> Thanks for the reply. I deleted all rows in pg_statistic and the VACUUM
> ANALYZE on pg_attribute still fails.
You would probably need to actually truncate pg_statistic (in a
standalone backend) to make that safe. If you only want to use DELETE,
I'd try just pg_dump at
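A hedged sketch of what that standalone-backend route might look like (the data directory path and database name are placeholders; the -O flag may be needed so a system catalog can be truncated):

-- Start a single-user backend against the cluster (the server must be stopped):
--   postgres --single -O -D /var/lib/postgresql/8.4/main mydb
-- Then, at the backend prompt:
TRUNCATE pg_statistic;
-- ANALYZE repopulates the statistics afterwards:
VACUUM ANALYZE pg_attribute;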
Thanks for the reply. I deleted all rows in pg_statistic and the VACUUM
ANALYZE on pg_attribute still fails. I tried to reindex pg_toast_2619 and got
an error "could not access status of transaction 1493786085. Could not open
file "pg_subtrans/5909": No such file or directory. Sure enough, t
Steve Erickson writes:
> This went well and postgres restarted just fine. However, now when I execute
> a pg_dump I get a missing chunk 0 for pg_toast_2619 while querying
> pg_attribute. I did a reindex on pg_toast_2619, then tried to VACUUM ANALYZE
> pg_attribute but again got the missing ch
... and I'm wondering if an index really speeds up array functions:
CREATE INDEX idx_ourarrtable_arr ON ourarrtable USING GIN(arr);
Stefan
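For what it's worth, the GIN index above helps queries that use the array containment/overlap operators (@>, <@, &&); ordinary function calls on the array generally can't use it. A quick way to check, as a sketch:

EXPLAIN ANALYZE
SELECT id FROM ourarrtable WHERE arr @> ARRAY[42];
-- With enough rows, the plan should show a bitmap scan on idx_ourarrtable_arr
-- instead of a sequential scan.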
2013/3/11 Stefan Keller :
> Hi,
>
> Question regarding arrays: How can I append an element to a row inside
> a 2-dim. array?
> See example below.
> And:
> Doe
Here's a much smaller self-contained example of the problem:
https://gist.github.com/joevandyk/06e1e26219726f11917e/raw/e9b279c2f2776d5825a6adbb04c7a41201f8cd24/gistfile1.txt
Joe
On Fri, Mar 8, 2013 at 4:17 PM, Joe Van Dyk wrote:
>
> https://gist.github.com/joevandyk/070e4728c4c9fe1bf086/raw/
I have a database that's about 600 GB. I did a pg_dump (pg_dump --format plain
--create --encoding UTF8 --oids --verbose --file "/var/tmp/145_backup.sql"
"digi") and, after 442 GB, I got an error, "pg_dump: Error message from server:
ERROR: missing chunk number 0 for toast value 49209130 in pg
> Out of curiosity: any reason the ORDER BY should be in the subquery? It
> seems like it ought to be in the UPDATE (if that's allowed).
Hmm, it's not allowed. :-) It's still surprising that you can guarantee the
order of a multi-row UPDATE by ordering a subquery.
Paul
--
Hi,
Question regarding arrays: How can I append an element to a row inside
a 2-dim. array?
See example below.
And:
Does anybody have experiences how arrays perform if data grows (it's
"read-mostly")?
Yours, Stefan
--
-- Arrays Test
--
CREATE TABLE ourarrtable (id int primary key, arr int[]);
I
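A hedged sketch of the 2-dim case (values made up for illustration): appending a whole new inner row works with ||, but appending a single element to one existing inner row has no direct operator, because multidimensional arrays must stay rectangular, so the array has to be rebuilt (or the data modeled as one 1-dim array per table row).

INSERT INTO ourarrtable VALUES (1, ARRAY[ARRAY[1, 2], ARRAY[3, 4]]);

-- Append a new inner row (the appended array must match the inner length):
UPDATE ourarrtable SET arr = arr || ARRAY[5, 6] WHERE id = 1;
-- arr is now {{1,2},{3,4},{5,6}}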
> 2) All transactions modify table rows in the same order, e.g. ascending "id".
>    With the big update you can do that by putting an "ORDER BY tg2.id" into
>    the subquery, and with the "little transactions" you'll have to make sure
>    that rows are updated in ascending "id" order.
I agree t
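A hedged sketch of that pattern (table and column names are placeholders, not the original query); the subquery locks the rows in ascending id order before the UPDATE touches them, so competing transactions take their row locks in the same order:

UPDATE some_table AS t
   SET processed = true
  FROM (SELECT id
          FROM some_table
         WHERE processed = false
         ORDER BY id
         FOR UPDATE) AS locked
 WHERE t.id = locked.id;

The FOR UPDATE isn't in the quoted advice; it's added here so the lock order is explicit rather than an artifact of whichever plan the subquery gets.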
Alban Hertroys wrote:
> All the suggestions thus far only reduce the window in which a deadlock can
> occur.
Where do you see a window for deadlocks with my suggestions?
Yours,
Laurenz Albe
--
On 11 March 2013 13:01, Chris Curvey wrote:
> On Sat, Mar 9, 2013 at 4:20 PM, Paul Jungwirth <
> p...@illuminatedcomputing.com> wrote:
>
>> I have a long-running multi-row UPDATE that is deadlocking with a
>> single-row UPDATE:
>>
>> 2013-03-09 11:07:51 CST ERROR: deadlock detected
>> 2013-03-09
On Sat, Mar 9, 2013 at 4:20 PM, Paul Jungwirth
wrote:
> I have a long-running multi-row UPDATE that is deadlocking with a
> single-row UPDATE:
>
> 2013-03-09 11:07:51 CST ERROR: deadlock detected
> 2013-03-09 11:07:51 CST DETAIL: Process 18851 waits for ShareLock on
> transaction 10307138; block
Paul Jungwirth wrote:
> I have a long-running multi-row UPDATE that is deadlocking with a single-row
> UPDATE:
>
> 2013-03-09 11:07:51 CST ERROR: deadlock detected
> 2013-03-09 11:07:51 CST DETAIL: Process 18851 waits for ShareLock on
> transaction 10307138; blocked by
> process 24203.
>
On 11 March 2013 07:39, Csanyi Pal wrote:
> but when I run the command:
> CREATE TABLE employee_schedule (
> id serial,
> employee_id integer REFERENCES employees(id),
> start_time timestamptz,
> end_time timestamptz
> );
>
> I get an error message:
>
> NOTICE: CREATE TABLE will create i
Balázs Keresztury wrote:
> I have a postgres 8.4 on a Ubuntu box at my workplace [...]
> I already created the base backup with pg_start_backup and transferred to my
> computer, which is currently running Windows. The problem is that I can't even
> start up the base cluster, because
> postgres
dhaval257 wrote:
> I worked upon this link you sent. It ends up giving error "This application
> can not start because SSLEAY32.dll is missing." I want to connect in any
> form whether it is ODBC or direct. If you know then please help me. I am
> stuck here. Thank you
So why don't you copy ssleay32