I have a database that has started to constantly hang after a brief period of
activity.
Looking at `select * from pg_stat_activity;` I roughly see the following each
time:
process 1 |
process 2 | in transaction
process 3 | in transaction
process 4 |
p
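For context, "idle in transaction" backends mean some client is holding transactions open, and anything they lock will make other queries appear to hang. A query along these lines isolates them (a sketch; it uses the `state` and `query` columns available in 9.2+, older versions expose `current_query` instead):

```sql
SELECT pid, usename, now() - xact_start AS xact_age, query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY xact_age DESC;
```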
I'm running postgres on a virtual server
I was wondering if there were any known issues with moving the data directory
to another mounted partition / filesystem.
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql
Thanks, everyone!
For now this will be provisioning a physical drive for a box -- and "everything"
will be there for now. So OS on one drive, and DB on another.
I've run into programs before (mostly on Mac/Win) that are exceedingly
unhappy if they're run on a drive other than the OS drive.
Sin
I ran into an issue migrating from 9.1 to 9.3 on Ubuntu using pg_upgrade.
The default Ubuntu package, and the one from postgresql.org, both store
`postgresql.conf` in etc as `/etc/postgresql/VERSION/main/postgresql.conf`;
however, the pg_upgrade script expects it in the `datadir`.
the simple solu
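One workaround for the split config layout (a sketch assuming the stock Debian/Ubuntu paths; verify them on your system) is to point each server at its config file explicitly through pg_upgrade's per-server options:

```shell
pg_upgrade \
  -b /usr/lib/postgresql/9.1/bin  -B /usr/lib/postgresql/9.3/bin \
  -d /var/lib/postgresql/9.1/main -D /var/lib/postgresql/9.3/main \
  -o "-c config_file=/etc/postgresql/9.1/main/postgresql.conf" \
  -O "-c config_file=/etc/postgresql/9.3/main/postgresql.conf"
```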
On Nov 17, 2014, at 12:55 PM, Robert DiFalco wrote:
> SELECT * FROM MyTable WHERE upper(FullName) LIKE upper('%John%');
>
> That said, which would be the best extension module to use? A "gist" index on
> the uppercased column? Or something else? Thanks!
Performance wise, I think a function
On Nov 18, 2014, at 7:38 AM, Albe Laurenz wrote:
>
> That index wouldn't help with the query at all.
>
> If you really need a full substring search (i.e., you want to find
> "howardjohnson"), the only thing that could help are trigram indexes.
I stand corrected.
I ran a sample query on my te
On Nov 18, 2014, at 11:49 AM, Robert DiFalco wrote:
> As far as I can tell, the trigram extension would be the easiest way to
> implement this. It looks like I wouldn't need to mess with vectors, etc. It
> would just look like a standard index and query, right? It seems that if I
> need someth
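A minimal pg_trgm setup does look like a standard index plus an ordinary query (a sketch; table and column names taken from the example earlier in the thread, and it needs 9.1+ for the extension syntax and ILIKE support):

```sql
CREATE EXTENSION pg_trgm;

CREATE INDEX mytable_fullname_trgm
    ON MyTable USING gin (FullName gin_trgm_ops);

-- LIKE/ILIKE with a leading wildcard can now use the trigram index:
SELECT * FROM MyTable WHERE FullName ILIKE '%John%';
```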
I have a particular query that returns resultset of 45k rows out of a large
resultset (pg 9.3 and 9.1)
It's a many-to-many query, where I'm trying to search for Bar based on
attributes in a linked Foo.
I tweaked the indexes, optimized the query, and got it down to an acceptable
speed around 1,100m
On Nov 18, 2014, at 6:43 PM, Tom Lane wrote:
> but as for why it gets a much worse plan after
> flattening --- insufficient data.
Thanks. I'll run some test cases in the morning and post the full queries
matched with EXPLAIN ANALYZE.
This is just puzzling to me. I was hoping there might be a
I re-ran the query in multiple forms, and included it below (I regexed it to
become 'foo2bar' so it's more generic to others).
I also uploaded it as a public spreadsheet to google, because I think that is a
bit easier to look at:
https://docs.google.com/spreadsheets/d/1w9HM8w9YUpu
I have a core table with tens-of-millions of rows, and need to delete about a
million records.
There are 21 foreign key checks against this table. Based on the current
performance, it would take a few days to make my deletions.
None of the constraints were defined as `DEFERRABLE INITIALLY IMM
On Nov 20, 2014, at 6:00 PM, Melvin Davidson wrote:
> Try the following queries. It will give you two .sql files (create_fkeys.sql
> & drop_fkeys.sql).
Thanks!
I tried a variation of that to create DEFERRABLE constraints, and that was a
mess. It appears all the checks ran at the end of the t
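For deletes gated by many foreign keys, the usual culprit is unindexed referencing columns: each deleted row triggers a lookup in every referencing table, and without an index that lookup is a sequential scan. A sketch (table and column names hypothetical):

```sql
-- Index every child column that references the core table:
CREATE INDEX ON child_table (core_id);

-- Then delete in modest batches to keep transactions and locks short
-- ("doomed" stands in for whatever condition marks rows for deletion):
DELETE FROM core_table
WHERE id IN (SELECT id FROM core_table WHERE doomed LIMIT 10000);
```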
Can someone confirm a suspicion for me ?
I have a moderately sized table (20+ columns, 3MM rows) that tracks "tags".
I have a lower(column) function index that is used simplify case-insensitive
lookups.
CREATE INDEX idx_tag_name_lower ON tag(lower(name));
I have a few complex queries
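For the record, the planner only considers an expression index when the query repeats the indexed expression verbatim:

```sql
-- Can use idx_tag_name_lower:
SELECT * FROM tag WHERE lower(name) = lower('SomeTag');

-- Cannot use it (plain column reference, no matching expression):
SELECT * FROM tag WHERE name ILIKE 'sometag';
```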
On Dec 8, 2014, at 9:35 PM, Scott Marlowe wrote:
> select a,b,c into newtable from oldtable group by a,b,c;
>
> One pass, done.
This is a bit naive, but couldn't this approach potentially be faster
(depending on the system)?
SELECT a, b, c INTO duplicate_records FROM ( SELECT a, b, c,
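The truncated subquery above was heading toward a window function; one way to finish the thought (a sketch, not necessarily the original intent) that keeps whole duplicate groups rather than collapsing them:

```sql
SELECT a, b, c
INTO duplicate_records
FROM (SELECT a, b, c,
             count(*) OVER (PARTITION BY a, b, c) AS n
      FROM oldtable) s
WHERE n > 1;
```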
On Dec 12, 2014, at 4:58 PM, Tom Lane wrote:
> regression=# create table tt (f1 int, f2 text);
> CREATE TABLE
> regression=# create index on tt (lower(f2));
> CREATE INDEX
> regression=# explain select * from tt order by lower(f2);
> QUERY PLAN
I wouldn't even store it on the filesystem if I could avoid that.
Most people I know will assign the video a unique identifier (which is stored
in the database) and then store the video file with a 3rd party (e.g. Amazon
S3).
1. This is often cheaper. Videos take up a lot of disk space. Havin
On Dec 29, 2014, at 5:36 PM, Mike Cardwell wrote:
> So the system I've settled with is storing both the originally supplied
> representation, *and* the lower cased punycode encoded version in a separate
> column for indexing/search. This seems really hackish to me though.
I actually do the same
A very popular design I see is often this:
- PostgreSQL for account, inventory, transactional; and all writes
- NoSQL (Redis, Riak, Mongo, etc) for read-only index postgres (almost
like a read-through cache) and assembled documents
On Jan 5, 2015, at 5:46 PM, Raymond Cote wrote
This is really a theoretical/anecdotal question, as I'm not at a scale yet
where this would be measurable. I want to investigate while this is fresh in my
mind...
I recall reading that unless a row has columns that are TOASTed, an `UPDATE` is
essentially an `INSERT + DELETE`, with the previous ro
On Jan 19, 2015, at 5:07 PM, Stefan Keller wrote:
> Hi
>
> I'm pretty sure PostgreSQL can handle this.
> But since you asked with a theoretic background,
> it's probably worthwhile to look at column stores (like [1]).
Wow. I didn't know there was a column store extension for PG -- this would c
racked_ip_block, I search/join against the tracked_ip_address to
show known IPs in a block, or a known block for an IP.
I used cidr instead of inet for the ip_address because it saved me a cast on
joins and appears to work the same. Was that the right move? Is there a
better option?
thanks in advanc
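For single host addresses, inet is the conventional choice (cidr is for networks), and the containment operators accept both types without casts, so the cast savings may be illusory. A sketch with hypothetical minimal schemas:

```sql
CREATE TABLE tracked_ip_block   (block cidr PRIMARY KEY);
CREATE TABLE tracked_ip_address (ip    inet PRIMARY KEY);

-- Known IPs in a block (or the block for an IP), no casts either way:
SELECT a.ip
FROM tracked_ip_address a
JOIN tracked_ip_block b ON a.ip << b.block;
```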
On Feb 17, 2017, at 4:05 PM, Jeff Janes wrote:
> It will probably be easier to refactor the code than to quantify just how
> much damage it does.
Thanks for all the info. It looks like this is something worth prioritizing
because of the effects on indexes.
We had discussed a fix and pointed
I ran into an issue while changing a database schema around. Some queries
still worked, even though I didn't expect them to.
Can anyone explain to me why the following is valid (running 9.6) ?
schema
CREATE TEMPORARY TABLE example_a__data (
foo_id INT,
bar_id INT
);
CRE
thanks all!
On Apr 20, 2017, at 6:42 PM, David G. Johnston wrote:
> Subqueries can see all columns of the parent. When the subquery actually
> uses one of them it is called a "correlated subquery".
I thought a correlated subquery had to name that table/alias, not a raw column.
I guess I've
Everything here works fine - but after a handful of product iterations &
production adjustments, a query that handles a "task queue" across a few tables
looks a bit ugly.
I'm wondering if anyone can see obvious improvements.
There are 3 tables:
upstream_provider
task
On May 16, 2017, at 10:20 PM, David G. Johnston wrote:
> Unless you can discard the 5 and 1000 limits you are going to be stuck
> computing rank three times in order to compute and filter them.
Thanks a ton for your insight. I'm stuck using them (5 is required for
throttling, 1000 is required
i'm doing a performance audit and noticed something odd.
we tested a table a while back, by creating lots of indexes that match
different queries (30+).
for simplicity, here's a two column table:
CREATE TABLE foo (id INT PRIMARY KEY
value IN
The following command was run and the content of content_file, signature_file
and id_rsa.pub (or pem) are inserted into a Postgres database.
openssl dgst -sign id_rsa content_file > signature_file
Is there any way to verify that the signature corresponds with the
content/public key within Pos
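Stock pgcrypto does not expose RSA signature verification, so this check is usually done outside the database (or in an untrusted PL). The command-line counterpart of the signing step looks like this (filenames hypothetical; note `openssl dgst -verify` wants the public key in PEM form, not OpenSSH's id_rsa.pub format):

```shell
# Generate a keypair and a PEM public key (stand-ins for the poster's id_rsa)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out id_rsa.pem 2>/dev/null
openssl pkey -in id_rsa.pem -pubout -out id_rsa.pub.pem

printf 'example payload\n' > content_file

# Sign (explicit -sha256 rather than relying on the default digest)
openssl dgst -sha256 -sign id_rsa.pem content_file > signature_file

# Verify: prints "Verified OK" on success, "Verification Failure" otherwise
openssl dgst -sha256 -verify id_rsa.pub.pem -signature signature_file content_file
```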
table_id TYPE bigint;
But it's taking a very long time, and locking the database. We're going to need
to do this in production as well, so a long-term table-lock isn't workable.
Is there anything we can do to speed things up? How long is this likely to take?
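ALTER COLUMN ... TYPE bigint rewrites the whole table under an exclusive lock, so its duration scales with table size. A common low-lock alternative is the add/backfill/swap pattern (a sketch with hypothetical names; live writes also need a trigger, or a brief write pause, to keep the two columns in sync during the backfill):

```sql
ALTER TABLE t ADD COLUMN table_id_new bigint;   -- metadata-only, instant

-- Backfill in batches to avoid one huge transaction:
UPDATE t SET table_id_new = table_id
WHERE id BETWEEN 1 AND 100000;                  -- repeat per id range

-- Finally, one short exclusive-lock window for the swap:
BEGIN;
ALTER TABLE t DROP COLUMN table_id;
ALTER TABLE t RENAME COLUMN table_id_new TO table_id;
COMMIT;
```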
Thanks,
Jonathan
mes WHERE name LIKE 'keyword%'
Or
SELECT * FROM names WHERE name LIKE '%keyword%'
I optimized the first type of queries making partitions with every
letter that a name can begin with:
AFAIK, you only need to add an index on "name" to be able to speed up the first kind of queries.
Have
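With a non-C locale a plain b-tree can't serve LIKE prefixes, but the pattern operator classes can, with no partitioning needed (index name hypothetical):

```sql
CREATE INDEX names_name_pattern ON names (name text_pattern_ops);

-- Left-anchored searches can now use the index:
SELECT * FROM names WHERE name LIKE 'keyword%';

-- '%keyword%' still can't use a b-tree; pg_trgm is the usual answer there.
```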
s different DBMSs that support the ANSI information_schema.
A possible solution would be adding the foreign key table_name to all
the tables in the information_schema that rely on foreign key names
being unique; for the case I am talking about, it would be enough to
have it in the table referential_constraints.
Thanks,
Jonathan
icating constraint names within a schema.
>
> regards, tom lane
>
Yes, I know that following the SQL standards is the way to go, but
sometimes this has to be done in databases I don't design, so I have
to be prepared for every case. I think I'll use the pg_
//www.postgis.org/documentation/manual-1.4/ST_Distance_Spheroid.html
>
>
> HTH
>
> Brent Wood
>
>
> Brent Wood
> DBA/GIS consultant
> NIWA, Wellington
> New Zealand
> >>> Scott Marlowe 09/18/09 11:35 AM >>>
> On Thu, Sep 17, 2009 at 1:16 PM,
I have a table with
name_first
name_middle
name_last
if i try concatenating as such:
SELECT
name_first || ' ' || name_middle || ' ' || name_last
FROM
mytable
;
I end up with NULL as the concatenated string whenever a
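Any NULL operand makes the whole || chain NULL. concat_ws (9.1+) skips NULL arguments; on older versions coalesce does the job (the second form assumes first/last names are non-NULL):

```sql
-- NULL-safe, and also drops the separator when the middle name is absent:
SELECT concat_ws(' ', name_first, name_middle, name_last) FROM mytable;

-- Pre-9.1 equivalent:
SELECT name_first || coalesce(' ' || name_middle, '') || ' ' || name_last
FROM mytable;
```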
A typo in a webapp left ~150 records damaged overnight
I was hoping to automate this, but may just use regex to make update
statements for this
basically , i have this situation:
table a ( main record )
id , id_field , fullname
table b ( extended profiles )
id_field , last_n
Greetings
I'm trying to set up Cybertec's ODBC_LINK program
(http://www.cybertec.at/download/odbc_link/2010_03_16_odbc_link.pdf )
so I can read MS-SQL data into my Postgresql 8.4.8 database. I'm using
Ubuntu 10.04 LTS Server.
Their build instructions say to Compile the module using make USE_PGXS=1
to do this through Postgresql directly?
I saw a post about someone doing a "SELECT * FROM XXX ODBC SOURCE" or
something like that
(http://archives.postgresql.org/pgsql-odbc/2009-07/msg00032.php) and that
would be cool. I don't need to import massive datasets, only 20-30K records
at a
ent.
I don't use MySQL for anything.
Thanks much for your response!
J
From: Brent Wood [mailto:b.w...@niwa.co.nz]
Sent: Monday, July 04, 2011 8:58 PM
To: j...@blackskytech.com
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Read MS-SQL data into Postgres via ODBC link?
.mk exists
Jonathan Brinkman wrote:
> Makefile:12: /usr/lib/postgresql/8.4/lib/pgxs/src/makefiles/pgxs.mk:
No such
> file or directory
Maybe you have to install the software package that contains
PostgreSQL's
development environment.
Yours,
Laurenz Albe
I was really hoping to keep the data-replication (between MSSQL --> PG)
contained within a PG function.
Instead I could write a small shell script or C service to do this using
tsql (freetds). I have access to the MSSQL data via unixodbc and
tdsodbc/freetds in my Ubuntu console.
But I want to r
File(s) 6,800,730 bytes
3 Dir(s) 37,139,398,656 bytes free
C:\Program Files\PostgreSQL\pgJDBC>
How can I get the program running?
--
Jonathan Camilleri
Mobile (MT): 00356 7982 7113
E-mail: camilleri@gmail.com
Please consider your environmental responsibility before printing this
e-mail.
I usually reply to e-mails within 2 business days. If it's urgent, give me
a call.
rg/ftp/binary/
Jboss is 4.2.3-GA, running on the Sun JDK 1.6.0u12, with the
PostgreSQL JDBC JAR postgresql-8.3-603.jdbc4.jar.
I realise that I'm behind on the minor version for the PostgreSQL
server, and I'm going to recommend upgrading - but it'd be nice to
know if any
On 11 July 2011 17:19, Jonathan Barber wrote:
> I'm trying to debug a jboss/hibernate application that uses PostgreSQL
> as a backend, for which PostgreSQL is reporting a lot of queries as
> taking around 4398046 ms (~73 minutes) plus or minus 10 ms to
> complete. I have two que
I'm trying to write a bit of logic as 1 query, but I can't seem to do
it under 2 queries.
i'm hoping someone can help
the basic premise is that i have an inventory management system , and
am trying to update the quantity available in the "shopping
cart" (which is different than the indepen
It would be that, but with greatest.
Thank you, that's the exact query I was failing to write!
On Apr 21, 2010, at 8:51 PM, Glen Parker wrote:
UPDATE
cart_item
SET
qty_requested_available = least(cart_item.qty_requested,
stock.qty_available)
FROM
stock
WHERE
cart_item.stock_id = stock
On Apr 21, 2010, at 9:38 PM, Glen Parker wrote:
Not if qty_requested_available needs to be <= qty_available...
indeed, i'm an idiot this week.
thanks a ton. this really helped me out!
-- running pg 8.4
i have a table defining geographic locations
id
lat
long
country_id not null
state_id
city_id
postal_code_id
i was given a unique index on
(country_id, state_id, city_id, postal_code_id)
the unique index isn't wo
On May 10, 2010, at 6:29 AM, Alban Hertroys wrote:
As the docs state and as others already mentioned, "Null values are
not considered equal".
Ah. I interpreted that wrong. I thought it applied to indexes
differently. I'll have to experiment now...
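Since NULLs compare as distinct in a unique index, two options close the gap: coalescing inside the index expression on any version (the sentinel 0 is assumed to be an unused id), or NULLS NOT DISTINCT on PostgreSQL 15+ (not the poster's 8.4). Table name hypothetical:

```sql
-- Works on 8.4: normalize NULL to a sentinel inside the unique index
CREATE UNIQUE INDEX locations_geo_uniq ON locations
  (country_id, coalesce(state_id, 0), coalesce(city_id, 0),
   coalesce(postal_code_id, 0));

-- PostgreSQL 15+ only:
CREATE UNIQUE INDEX locations_geo_uniq2 ON locations
  (country_id, state_id, city_id, postal_code_id) NULLS NOT DISTINCT;
```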
From: pgsql-general-ow...@postgresql.org on behalf of Leonardo F
Sent: Fri 14/05/2010 14:24
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Authentication method for web app
>I think this point number 2 is pretty important. If at all possible, keep
> the
On 01/09/10 16:22, Bayless Kirtley wrote:
I have a two-user point-of-sale application on Windows XP PRO. The DB
runs
on the cash register. The second user is a manager's computer. They are
connected through a wired router which is also connected to an internet
cable modem. The manager's compute
[CODE]
BEGIN;
DROP TYPE structure.format_list2table_rs CASCADE;
CREATE TYPE structure.format_list2table_rs AS (
"item" VARCHAR(4000)
);
END;
CREATE OR REPLACE FUNCTION structure.format_list2table (
"v_list" varchar,
"v_delim" varchar
)
RETURNS SETOF structure.format_list2table_rs AS
$bod
Thanks, yes the schema was missing from the DECLARE rs statement!
-Original Message-
From: Merlin Moncure [mailto:mmonc...@gmail.com]
Sent: Monday, September 13, 2010 1:35 PM
To: Jonathan Brinkman
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] I keep getting "type does not
When i run the command
SET search_path TO custom,idsystems, clientdata, configs, replication,
structure, archive;
then when I run
SHOW search_path;
it does show those schemas as the search_path. however when i restart the
postgresql service, the search_path has reverted to $User, public.
when i p
alter database "MYDATABASE" SET search_path TO custom, clientdata, configs,
replication, structure, archive;
that seems to fix it.
Thank you!
-Original Message-
From: Merlin Moncure [mailto:mmonc...@gmail.com]
Sent: Wednesday, September 15, 2010 4:40 PM
To: Jonathan Brinkman
Greetings
I have a customized search_path for my database. When I backup (pgdump) the
database and restore it to another server, the search_path must be reset,
since it reverts to $User, public upon restore.
Why is the search_path info not being retained in the backup?
Thank you!
Jonathan
I've been tearing my hair out over this one issue and I'm hoping that
someone on this list will have an insight on the matter that will shed
some light on why the system is doing what it's doing.
I have a database with a number of tables, two of which are projects and
resources. We also have a us
On Wed, 2008-11-05 at 04:40 +0900, Craig Ringer wrote:
> The point is that if your initial create and the setting of the initial
> permissions must succeed or fail together, they MUST be done within a
> single transaction. That is, in fact, the fundamental point of database
> transactions.
I under
On Tue, 2008-11-04 at 07:49 +, Richard Huxton wrote:
> Jonathan Guthrie wrote:
> > When I create a project, entries in the project table and the resource
> > table are created in a single function. Then, separate functions are
> > called to set the owner's access
On Wed, 2008-11-05 at 12:14 +0900, Craig Ringer wrote:
> Jonathan Guthrie wrote:
>
> > The thing is, the C++ code does this
> >
> > BEGIN transaction 1
> > INSERT project
> > COMMIT
> >
> > BEGIN transaction 2
> > SET permissions
> >
I was wondering if there is some indication of how well clustered a table
is.
In other words, when a Cluster command is performed then a table would be
100% clustered.
As updates etc. are made, the table slowly loses its clustering.
Is there any indication as to how "bad" it is at any one point?
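There's no direct "percent clustered" figure, but pg_stats.correlation on the clustering column approximates it: 1.0 right after CLUSTER, drifting toward 0 as updates scatter rows (table and column names hypothetical):

```sql
SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename = 'mytable'
  AND attname = 'clustered_col';
```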
Hi,
I am getting the message 'ERROR: aggregate function calls cannot be nested'
when using a select from an inner select.
The outer select had a group by clause but the inner one is a straight join
between a few tables.
What exactly does this message mean?
Jonathan Blitz
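That error means one aggregate appears inside another's argument list, e.g. max(sum(x)); SQL requires the inner aggregate to come from a subquery. A sketch with hypothetical names:

```sql
-- Rejected: SELECT max(sum(amount)) FROM orders GROUP BY customer_id;
SELECT max(total)
FROM (SELECT customer_id, sum(amount) AS total
      FROM orders
      GROUP BY customer_id) AS per_customer;
```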
one's insight and help.
Thank you
Sincerely,
Jonathan Schindler
Hi Everyone,
Can someone please confirm that the PostgreSQL licence allow commercial
distribution (with a fee charged)?
I am developing a proprietary (i.e. non-free) solution in Java, and wish to use
PostgreSQL as the backend database. We wish to ship the server with our
software, as well as u
I blogged about my experiences with using PG 9.1's trigram indexes, and
thought some here might be interested:
http://bartlettpublishing.com/site/bartpub/blog/3/entry/350
I would appreciate any feedback anyone has.
Jon
I have a database which contains two primary sets of data:
1) A large (~150GB) dataset. This data set is mainly static. It is
updated, but not by the users (it is updated by our company, which provides
the data to users). There are some deletions, but it is safe to consider
this an "add-only" d
>
>
> by 'dataset' do you mean table, aka relation ?'
>
It's a group of tables.
> by 'not using any referential integrity', do you mean, you're NOT using
> foreign keys ('REFERENCES table(field)' in your table declaration ?
Correct.
Also, many queries cross the datasets together.
>>
>>
> by '
olesworth wrote:
> Hi Jonathan,
>
>
> On 29/03/12 19:01, Jonathan Bartlett wrote:
>
>
>
>> Now, my issue is that right now when we do updates to the dataset, we
>>> have to make them to the live database. I would prefer to manage data
>>> releases the
i think i just need a METHOD for localhost only.
thanks.
893 Feb 27 17:06 libdict_snowball.la
And I've symlinked these all from /Volumes/pkgsrc/pkg/lib as well, but
initdb still fails to complete.
Any help will be gratefully appreciated.
Regards,
Jonathan.
On Wed, May 13, 2009 at 09:54:56AM -0400, Tom Lane wrote:
Jonathan Groll writes:
Custom built postgresql 8.3.5 using the pkgsrc build system on OS X
Leopard;
Uh ... what is the "pkgsrc build system", and what changes does it make
to a straight-from-source PG build?
Pkgsrc is
---(end of broadcast)---
T
gin = "postgres"
password = "mynewpassword"
and right underneath it:
tcpip = true
i've also disabled my local firewall and SELINUX just
for kicks. and yes, i did a reboot.
so...anyone know what else i can look at?
many thanks!
jonathan
this kind of conversion before and if you have ran into
any problems.
Any help will be greatly appreciated.
Thanks.
Jonathan Lam
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Richard Huxton
Sent: Monday, July 25, 2005 10:43 AM
To: WA Pennant & Flag Displays - Darren
Cc: [EMAIL PROTECTED]; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Connection error
WA Pennant
I've been googling a little bit and it appears that 7.1 is pretty old. What
steps are advised to upgrade from 7.1 to 7.4?
-Jonathan
'm trying to get upgraded
does not give it its seal of approval.
-Jonathan
> Jonathan Villa wrote:
>> I've been googling a little bit and appears that 7.1 pretty old.
>> What steps are advised to upgrade from 7.1 to 7.4?
>
> 1. Dump the old db using 7.4's p
even possible to export them separately?
-Jonathan
> Thomas F. O'Connell wrote:
>
>>
>>
>> Jonathan,
>>
>> The implicit indexes are no big deal; they're just a sign of indexes
>> getting created by PRIMARY KEYs on your tables.
>>
>> I
m used to MySQL where I can run queries once connected... I'm sure I can do
the same
with PGSQL...
FYI, I'm also using phpPGAdmin and I can run the SQL query there just fine
-Jonathan
or near "select"
select project_name from project_group_list;
...
...
...
that's odd...
-Jonathan
> On 7/29/05, Jonathan Villa <[EMAIL PROTECTED]> wrote:
>> now I want to select
>> #select someColumn from testtable
>>
>> I get nothing.
-Jonathan
> On 7/29/05, Jonathan Villa <[EMAIL PROTECTED]> wrote:
>>
>> Ok, this is odd...
>>
>> I tried ending with a semicolon before, and received this error
>>
>> ERROR: parser: parse error at or near "select"
>>
>&
I'm having some trouble getting one of the contrib modules to load
correctly...
it's for tsearch2.sql
I've tried
./configure --prefix=/usr/local/pgsql
gmake all
gmake install
and I've also tried, from the contrib/tsearch2/ dir,
gmake
but that fails when looking for flex, which by the way, is
> Jonathan Villa schrieb:
>> I'm having some trouble getting one of the contrib modules to load
>> correctly...
>>
>> it's for tsearch2.sql
>>
>> I've tried
>>
>> ./configure --prefix=/usr/local/pgsql
>> gmake all
&g
> Jonathan Villa schrieb:
>
>> thanks, that seemed to work ok... now.. how do I use tsearch2? meaning,
>> how do I run the script? is it against the database I was to use it
>> with?
>> example
>>
>> psql -d mytestdb < tsearch2.sql
>>
>>
> Jonathan Villa schrieb:
>
>> Thanks... at least know I'm doing to correctly... but I still get the
>> errors. I've done everything as it states on the tsearch-V2-intro.html
>> page... and then I run
>>
>> psql ftstest < tsearch2.sql &> ft
> Jonathan Villa schrieb:
>
>> Yes, I'm running on Linux
>>
>> I did not try ldconfig, however I just have... and same result
>>
>> tsearch2.so is in /usr/local/pgsql/lib and my home /usr/lo
I'm getting the following error when attempting to use my application:
ERROR: current transaction is aborted, commands ignored until end of
transaction block
I have no clue... the only idea I have is to somehow release any
transaction locks, but I don't know how to list, or even if it's possible, to list
or directory
gmake: *** No rule to make target `/contrib/contrib-global.mk'. Stop.
Could someone point me the correct way? Many many thanks in advance.
Best regards,
Jonathan
On 10/9/05, Rick Morris <[EMAIL PROTECTED]> wrote:
> Marc G. Fournier wrote:
> > Stupid question, but what does MySQL bring to the equation?
>
> MySQL brings to the table an impressive AI interface that knows what you
> really meant to do and thus does away with those pesky error messages.
>
> Afte
On 10/6/05, suresh ramasamy <[EMAIL PROTECTED]> wrote:
> On 10/6/05, Ly Lam Ngoc Bich <[EMAIL PROTECTED]> wrote:
> > I am using Linux Fedora 3 . I've installed Postgres with
> > postgresql-8.0.3.tar.gz package , so there is no rpm package when I
> > check with
> > rpm -qa|grep postgresql
> >
> >
> > Go the installation directory and try
> > #make uninstall - > if it doesn't work then do the following
> >
> > # make clean
> > # make dist clean
> > and remove the directory manually
> >
>
> I think he wants to know how to uninstall the files that were
> installed with 'make install', not th
Hi,
I'm hoping someone on this list can save me some unnecessary
benchmarking today
I have the following table in my system
BIGSERIAL , INT , INT, VARCHAR(32)
There are currently 1M records , it will grow to be much much
bigger. It's used as a search/dispatch table, and gets t
Someone posted an issue to the mod-perl list a few weeks ago about
their machine losing a ton of memory under a mod-perl2/apache/
postgres system - and only being able to reclaim it from reboots
A few weeks later I ran into some memory related problems, and
noticed a similar issue. Starti
long does it take to compute the checksum
compared with doing a field-by-field check? There are many facets to
that answer - related to caching of old values and comparison with the new.
--
Jonathan Leffler #include
Email: [EMAIL PROTECTED], [EMAIL PROTECTED]
Guardian of DBD::I
On Sep 30, 2006, at 12:28 PM, Tom Lane wrote:
If the shared segment is no longer present according to ipcs,
and there are no postgres processes still running, then it's
simply not possible for it to be postgres' fault if memory has
not been reclaimed. So you're looking at a kernel bug.
thats
On Oct 1, 2006, at 11:56 AM, Tom Lane wrote:
OK, that kills the theory that the leak is triggered by subprocess
exit.
Another thing that would be worth trying is to just stop and start the
postmaster a large number of times, to see if the leak occurs at
postmaster exit.
On FreeBSD I'm not s
On Oct 1, 2006, at 12:24 PM, Fred Tyler wrote:
It is not from the exit. I see the exact same problem and I never
restart postgres and it never crashes. It runs constanty and with no
crashes for 20-30 days until the box is out of memory and I have to
reboot.
my theory, which i hope to prove/di
On Oct 7, 2006, at 3:31 PM, Alexander Staubo wrote:
I don't see PostgreSQL being "bashed sentence after sentence",
however -- the two "known limitations" listed for PostgreSQL are
"slow (even for small datasets)" and "jokes [sic] on 3-table-joins"
-- and among the open-source databases men
On Oct 7, 2006, at 6:41 PM, Chris Browne wrote:
This could also be a situation where adding a few useful indexes might
fix a lot of ills. Better to try to help fix the problems so as to
help show that the comparisons are way off base rather than to simply
cast stones...
i'm too tight for cash
ading in data?
2) Is it possible to screen out lines which begin with a comment character
(common outputs for csv/txt files from various programs)?
3) Is there a way to read in fixed width files?
Thanks!
--j
--
Jonathan A. Greenberg, PhD
NRC Research Associate
NASA Ames Research Center
MS 242-4
Moffe