On 3/6/20 6:57 AM, stan wrote:
I have looked at:
https://www.postgresql.org/docs/8.4/plperl-database.html
I am also comfortable querying data from tables in perl. But I do not
quite see how to get the results of a query in plperl. Here is what I
tried, and it is not working:
my $rv2 = spi_ex…
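For reference, PL/Perl exposes query results through spi_exec_query, which returns a hash reference whose "processed" key is the row count and whose "rows" key is an array of hashrefs keyed by column name. A minimal sketch, assuming a hypothetical table named widgets:

```sql
CREATE OR REPLACE FUNCTION list_widgets() RETURNS integer AS $$
    -- Run a query; the optional second argument caps the rows returned.
    my $rv = spi_exec_query('SELECT id, name FROM widgets', 100);
    my $nrows = $rv->{processed};           -- number of rows fetched
    foreach my $rn (0 .. $nrows - 1) {
        my $row = $rv->{rows}[$rn];         -- hashref keyed by column name
        elog(NOTICE, "id=$row->{id} name=$row->{name}");
    }
    return $nrows;
$$ LANGUAGE plperl;
```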
On 8/6/19 6:25 PM, Laura Smith wrote:
Hi,
I've seen various Postgres examples here and elsewhere that deal with the old
common-prefix problem (i.e. "given 1234 show me the longest match").
I'm in need of a bit of guidance on how best to implement an alternative take.
Frankly I don't quite kn…
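One common shape for the longest-match lookup is to filter with LIKE and order by prefix length. A sketch, assuming a hypothetical prefixes(prefix text) table:

```sql
-- Longest stored prefix of a given string, e.g. '12345678':
SELECT prefix
FROM prefixes
WHERE '12345678' LIKE prefix || '%'
ORDER BY length(prefix) DESC
LIMIT 1;
```

As written this scans the whole table; for large prefix sets the usual refinements involve an index with text_pattern_ops and generating the candidate prefixes of the input instead.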
On 7/18/19 8:44 AM, Matthew Pounsett wrote:
I've recently inherited a database that is dangerously close to outgrowing the
available storage on its existing hardware. I'm looking for (pointers to)
advice on scaling the storage in a financially constrained not-for-profit.
The current size of…
On 6/9/19 4:45 PM, Drexl Spivey wrote:
Hello all,
I don't want to start one of those endless internet tug-of-war threads, but
I would like some other people's opinions.
First off, I use all Operating systems without problems, personally defaulting
to linux at home, but mostly mac at…
On 3/23/19 11:51 AM, Rory Campbell-Lange wrote:
On 23/03/19, Andy Colson (a...@squeakycode.net) wrote:
On 3/23/19 7:09 AM, Rory Campbell-Lange wrote:
On 17/03/19, Rory Campbell-Lange (r...@campbell-lange.net) wrote:
...
We're buying some new Postgres servers with
2 x 240GB Inte…
On 3/23/19 7:09 AM, Rory Campbell-Lange wrote:
On 17/03/19, Rory Campbell-Lange (r...@campbell-lange.net) wrote:
We aren't sure whether to use software MDRaid or a MegaRAID card.
We're buying some new Postgres servers with
2 x 240GB Intel SSD S4610 (RAID1 : system)
4 x 960GB Intel SS…
On 12/29/18 12:34 PM, Glenn Schultz wrote:
All,
Following my earlier post on variable instantiation, I rethought how I was
working with dates and realized I can fix the date and use static interval. I
came up with this recursive CTE which is the end goal. However, the problem is
that the con…
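The general shape of such a recursive CTE, sketched with an assumed fixed start date and a static one-month interval:

```sql
WITH RECURSIVE dates AS (
    SELECT DATE '2018-01-01' AS dt              -- fixed start date
    UNION ALL
    SELECT (dt + INTERVAL '1 month')::date      -- static interval
    FROM dates
    WHERE dt < DATE '2018-12-01'
)
SELECT dt FROM dates;
```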
On 10/21/18 2:06 AM, Boris Sagadin wrote:
Hello,
I have a database running on i3.8xlarge (256GB RAM, 32 CPU cores, 4x 1.9TB NVMe
drive) AWS instance with about 5TB of disk space occupied, ext4, Ubuntu 16.04.
Multi-tenant DB with about 4 tables, insert-heavy.
I started a new slave with ide…
On 07/04/2018 12:31 AM, David Rowley wrote:
On 4 July 2018 at 14:43, Andy Colson wrote:
I moved a physical box to a VM, and set its memory to 1Gig. Everything
runs fine except one backup:
/pub/backup# pg_dump -Fc -U postgres -f wildfire.backup wildfire
pg_dump: Dumping the contents of…
On 07/03/2018 10:21 PM, Adrian Klaver wrote:
On 07/03/2018 07:43 PM, Andy Colson wrote:
Hi All,
I moved a physical box to a VM, and set its memory to 1Gig. Everything
runs fine except one backup:
/pub/backup# pg_dump -Fc -U postgres -f wildfire.backup wildfire
pg_dump: Dumping the contents…
Hi All,
I moved a physical box to a VM, and set its memory to 1Gig. Everything
runs fine except one backup:
/pub/backup# pg_dump -Fc -U postgres -f wildfire.backup wildfire
pg_dump: Dumping the contents of table "ofrrds" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR: …
On 01/28/2018 08:46 AM, Ryan Murphy wrote:
I believe the main, and maybe only, concern is the bloating of the system
catalog tables since you are constantly adding and removing records. Yes, they
will be vacuumed but vacuuming and bloat on catalog tables slows every single
query down to s…
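Catalog bloat from heavy temp-table churn can be observed directly. A quick check of the largest catalog relations, a sketch using the standard pg_class catalog:

```sql
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relnamespace = 'pg_catalog'::regnamespace
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 5;
```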
12 matches