and I was thinking it was a psycopg2
problem, but it seems there are issues with the internal counters in pg as
well for tracking "large" changes.
thanks,
Mark
On Sun, Feb 2, 2014 at 9:12 AM, Tom Lane wrote:
> Vik Fearing writes:
> > Without re-doing the work, my IRC logs sho
the same function in tcl in which I can work
out how to do this, but what about pgsql? I can't use the system tables
for this, since the data may not come from a table.
2. Is it possible, either in tcl or pgsql, to have optional function
arguments?
Thanks,
Mark
--
Mark Simon
Manngo Net Pt
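For reference, a minimal sketch of optional arguments in pl/pgsql, assuming
PostgreSQL 8.4 or later (parameter DEFAULTs); the function name and signature
here are hypothetical:

CREATE FUNCTION describe(val text, label text DEFAULT NULL)
RETURNS text AS $$
BEGIN
    -- omitted arguments take their DEFAULT
    RETURN coalesce(label || ': ', '') || val;
END;
$$ LANGUAGE plpgsql;

SELECT describe('abc');           -- returns 'abc'
SELECT describe('abc', 'name');   -- returns 'name: abc'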
tabase as
well as how often each index is being used.
Check some of the queries here:
http://www.xzilla.net/blog/2008/Jul/Index-pruning-techniques.html
..:Mark
Search the PG performance mailing list archive. There have been some good
posts about SSD drives there related to PG use.
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Allan Kamau
Sent: Wednesday, November 10, 2010 11:
> -Original Message-
> From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> Sent: Wednesday, October 06, 2010 11:14 PM
> To: mark
> Cc: r...@iol.ie; 'Mathieu De Zutter'; 'Georgi Ivanov'; pgsql-
> gene...@postgresql.org
> Subject: Re: [GENERAL] Idle connec
en time.
Of course if he doesn't have spare resources on the machine he might just be
making his life worse, and far more complex.
Just my thoughts, I don't consider myself an expert on the subject matter.
Mark
Something to
page (someone) when things go bad (nagios). And benchmark/profile as much as
possible to compare against down the road. Things like that are just good sys
admin things to have ... like, say, a tested and working lights-out management.
I am rambling, but yeah ... those are all things that came to
are? They all list the same statement as the cause, and I don't
think we ran it 3 times.
Thank you,
-Mark
check the query with
an explain analyze.
I understand that work_mem is a more accurate description.
In summary, it seems that if I see a temp file logged of say 20MB, I
need about 40MB of work_mem before it doesn't spill to disk. Just
wondering if I am at all accurate with this or if I am way off
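A quick way to check this empirically (a sketch; the table, column, and sizes
are hypothetical): watch the sort method that EXPLAIN ANALYZE reports as
work_mem changes.

SET work_mem = '20MB';
EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_col;
-- Sort Method:  external merge  Disk: ...kB      (spilled to disk)

SET work_mem = '64MB';
EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_col;
-- Sort Method:  quicksort  Memory: ...kB         (fit in memory)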
Hi
I use postgres v 8.3 on a dual quad core, intel xeon [EMAIL PROTECTED], fedora
core 8 x86_64, and 32GB RAM
settings I changed in postgresql.conf:
shared_buffers = 1000MB # min 128kB or max_connections*16kB
effective_cache_size = 4000MB
I have a user table, the structure is attache
On Sat, Mar 15, 2008 at 4:37 PM, Richard Broersma <
[EMAIL PROTECTED]> wrote:
> On Sat, Mar 15, 2008 at 4:21 PM, mark <[EMAIL PROTECTED]> wrote:
>
>
> > select * from users where session_key is not Null order by id offset
> > OFFSET limit 300
> >
> > O
On Sat, Mar 15, 2008 at 5:04 PM, brian <[EMAIL PROTECTED]> wrote:
> Richard Broersma wrote:
> > On Sat, Mar 15, 2008 at 4:41 PM, mark <[EMAIL PROTECTED]> wrote:
> >
> >> On Sat, Mar 15, 2008 at 4:37 PM, Richard Broersma <
> >> [EMAIL PROTECTED]>
is the query I am running, and it takes over 10 seconds to complete this
query...
update users set number_recieved=number_recieved+1 where uid=738889333;
The table has about 1.7 million rows. I have an index on column uid and also on
number_recieved... this is also slowing down the inserts that h
On Mon, Mar 31, 2008 at 12:23 PM, Raymond O'Donnell <[EMAIL PROTECTED]> wrote:
> On 31/03/2008 20:16, mark wrote:
> > is the query I am running , and it takes over 10 seconds to complete
> > this query...
> >
> >
> > update users set number_re
On Mon, Mar 31, 2008 at 12:48 PM, Raymond O'Donnell <[EMAIL PROTECTED]> wrote:
> On 31/03/2008 20:38, mark wrote:
>
> > EXPLAIN ANALYZE update users set number_recieved=number_recieved+1 where
> > uid=738889333;
> >
On Mon, Mar 31, 2008 at 12:59 PM, Raymond O'Donnell <[EMAIL PROTECTED]> wrote:
> On 31/03/2008 20:51, mark wrote:
>
> > can you explain what the numbers mean in the EXPLAIN ANALYZE?
> > (cost=0.00..8.46 rows=1 width=1073) (actual time=0.094..0.161 rows=1
> &
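For reference, those fields read as follows (standard EXPLAIN ANALYZE output;
cost units are arbitrary planner units, actual times are milliseconds):

-- cost=0.00..8.46           estimated startup cost .. total cost
-- rows=1                    estimated number of rows returned
-- width=1073                estimated average row width, in bytes
-- actual time=0.094..0.161  measured startup .. total time, in ms
-- actual rows=1             number of rows actually returned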
On Mon, Mar 31, 2008 at 11:18 PM, Tomasz Ostrowski <[EMAIL PROTECTED]>
wrote:
> On 2008-03-31 21:16, mark wrote:
>
> > is the query I am running , and it takes over 10 seconds to complete
> > this query...
> > update users set number_recieved=number_recieved+1 wher
On Tue, Apr 1, 2008 at 12:44 AM, mark <[EMAIL PROTECTED]> wrote:
> On Mon, Mar 31, 2008 at 11:18 PM, Tomasz Ostrowski <[EMAIL PROTECTED]>
> wrote:
>
> > On 2008-03-31 21:16, mark wrote:
> >
> > > is the query I am running , and it takes over 10 seconds to
On Tue, Apr 1, 2008 at 1:48 AM, Tomasz Ostrowski <[EMAIL PROTECTED]>
wrote:
> On 2008-04-01 09:44, mark wrote:
>
> > I am already running 8.3.1 [I mentioned it in the subject].
>
> But I have no experience on anything with more than 1GB of RAM...
>
Should I reduce shared_b
On Tue, Apr 1, 2008 at 7:27 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> Tomasz Ostrowski <[EMAIL PROTECTED]> writes:
> > I'd also set
> > log_checkpoints=on
> > to get an idea how it behaves.
>
> Yeah, that's really the *first* thing to do. You need to determine
>
I set this on,
log_checkpoin
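For reference, a minimal sketch of enabling it (this setting only needs a
configuration reload, not a restart):

# in postgresql.conf:
log_checkpoints = on

-- then, from a superuser session:
SELECT pg_reload_conf();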
On Tue, Apr 1, 2008 at 5:31 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Tue, 1 Apr 2008, mark wrote:
>
> current settings all default
> > > #checkpoint_segments = 3
> > > #checkpoint_timeout = 5min
> > > #checkpoint_completion_target = 0.5
> > &
On Wed, Apr 2, 2008 at 1:19 AM, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Wed, 2 Apr 2008, mark wrote:
>
> this really clear! Thanks!!
> >
>
> This is the first time someone new to this has ever said that about
> checkpoint tuning, which is quite the victory for
On Thu, Apr 3, 2008 at 10:02 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Wed, 2 Apr 2008, mark wrote:
>
> > with no clients connected to the database when I try to shutdown the
> > database [to apply new settings], it says the database can't be shut down.. for a
> &g
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Yang Zhang
> Sent: Thursday, April 14, 2011 6:51 PM
> To: Adrian Klaver
> Cc: pgsql-general@postgresql.org; Craig Ringer
> Subject: Re: [GENERAL] Compression
>
> On
create database, create an app.-
> user.
>
> 3. Load dataset...:
> a. with owner 'app.-user' in schema PUBLIC;
> b. create indexes;
> c. issue a VACUUM ANALYZE command on user tables.
Might consider setting your indexes to be fillfactor 100 if you have not
already.
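A sketch of what that looks like (index and table names hypothetical);
fillfactor 100 packs index pages completely full, which suits read-mostly data:

-- at creation time:
CREATE INDEX users_uid_idx ON users (uid) WITH (fillfactor = 100);

-- or on an existing index (takes effect at the next REINDEX):
ALTER INDEX users_uid_idx SET (fillfactor = 100);
REINDEX INDEX users_uid_idx;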
ted ext4 at all to speak of - so shame on me for that.
To loosely quote someone else I saw posting to a different thread a while
back "I would walk through fire for a 10% performance gain". IMO through
proper testing and benchmarking you can make sure you are not giving up 10%
(or more
then comes back to "bad plans".
Either the planner is really bad all the time and I never knew it or we are
way overblowing things.
One or two of his points are on my list as well, but as far as a TOP 10
missing features that PG "needs" his probably aren't anywhe
I have a problem with a GIN index. Queries over it take a lot of time. Some
information:
I've got a table with a tsvector column, textvector:
CREATE TABLE mediawiki.pagecontent
(
  old_id integer NOT NULL DEFAULT nextval('mediawiki.text_old_id_seq'::regclass),
  old_text text,
  old_flags text,
  textvector
t did not help.
Could you please point me in the right direction as to where the problem
could be?
Thanks a lot
Mark
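For context, a sketch of the kind of index and query involved (the table and
column names are taken from the quoted DDL; the query itself is hypothetical):

CREATE INDEX pagecontent_textvector_idx
    ON mediawiki.pagecontent USING gin (textvector);

SELECT old_id
FROM mediawiki.pagecontent
WHERE textvector @@ to_tsquery('english', 'example');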
Alban, thanks for your quick reply.
It is true that I use only 2.5GB RAM for this, on an Intel Core i5 CPU 2.67GHz,
and I didn't change the resource settings from the Postgres installation defaults:
max_connections = 100
shared_buffers = 32MB
(other parameters are commented)
but I don't think that would be the reason.
I was
Alban, thanks for your ideas
> It probably is, the default Postgres settings are quite modest and GIN
> indexes are memory hungry.
> I think you need to increase shared_buffers. With 2.5GB of memory (such a
> strange number) the docs suggest about 250MB.
> See
> http://www.postgresql.org/doc
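In postgresql.conf terms that suggestion amounts to something like the
following (note shared_buffers needs a server restart, not just a reload,
to change):

shared_buffers = 256MB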
Is there a way in PostgreSQL to track which function in a C file is called
by Postgres functions? For example, I would like to see which functions are
called by ts_rank.
Thanks for the reply
Mark
Thanks for the quick reply,
but I want to know which of these methods is called in a concrete situation. I
suppose that ts_rank calls only one of these functions (ts_rank_wttf,
ts_rank_wtt, ts_rank_ttf, ts_rank_tt). Is it possible?
Thanks for reply
Mark
use ?
Thanks for the reply.
Mark
> Note 1:
> I have seen an array that was powered on continuously for about six
> years, which killed half the disks when it was finally powered down,
> left to cool for a few hours, then started up again.
>
Recently we rebooted about 6 machines that had uptimes of 950+ days.
Last time fsck had
ere).
http://archives.postgresql.org/pgsql-hackers/2010-11/msg00198.php
&
-> http://archives.postgresql.org/pgsql-hackers/2010-11/msg00252.php
I didn't see this the last time I was looking but:
https://github.com/psoo/pg_standby_status/blob/master/pg_standby_status.pl
(I have never
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Martín Marqués
> Sent: Wednesday, August 24, 2011 2:48 PM
> To: pgsql-general
> Subject: [GENERAL] how is max_fsm_pages configured in 8.4
>
> I see that max_fsm_pag
t kernel they are running and
what storage drivers they might be using.
FWIW (to the list), vm.swappiness at 0 didn't play well for us, with a
postgresql fork, until we had a swap partition the size of memory. We were
recommended to make that setting change for the fork that we are using and
am wondering why I am getting the record inserted in both the
child and the parent partition when executing an insert into the parent.
Is there a step missing from the DOC? Something else I need to do?
Thank you
..: Mark
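For reference, with the trigger-based partitioning from the docs the usual
cause of this is a routing trigger that does not return NULL; a minimal
sketch with hypothetical table names:

CREATE OR REPLACE FUNCTION route_measurement_insert() RETURNS trigger AS $$
BEGIN
    INSERT INTO measurement_2010 VALUES (NEW.*);
    RETURN NULL;  -- suppress the insert into the parent table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurement_insert_trigger
    BEFORE INSERT ON measurement
    FOR EACH ROW EXECUTE PROCEDURE route_measurement_insert();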
Try http://www.pgadmin.org/download/macosx.php
?
..: Mark
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Mike Christensen
Sent: Monday, September 27, 2010 6:42 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL
undreds of open connections all the time, so better
connection management should give us some more head room before we have to
figure out the next scaling hurdle.
..: Mark
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf O
-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Wednesday, October 06, 2010 11:14 PM
To: mark
Cc: r...@iol.ie; 'Mathieu De Zutter'; 'Georgi Ivanov';
pgsql-general@postgresql.org
Subject: Re: [GENERAL] Idle connections
>What you're desc
hi..
i want to store latitude and longitude in a users table.. what is the
best data type to use for this? i want to be able to use this
info to find users within a distance..
how do i go about doing this?
thanks
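One common answer is plain float8 latitude/longitude columns plus the cube and
earthdistance contrib modules; a sketch, assuming those modules are installed
and with hypothetical column names:

-- find users within 5km of a point (earth_box is a bounding-box test;
-- the earth_distance check makes it exact):
SELECT *
FROM users
WHERE earth_box(ll_to_earth(40.7, -74.0), 5000) @> ll_to_earth(latitude, longitude)
  AND earth_distance(ll_to_earth(40.7, -74.0),
                     ll_to_earth(latitude, longitude)) <= 5000;

-- an index to make the bounding-box test fast:
CREATE INDEX users_earth_idx ON users USING gist (ll_to_earth(latitude, longitude));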
hi
if i execute this statement:
select * from users where id in (2341548, 2325251, 2333130, 2015421,
2073536, 2252374, 2273219, 2350850, 2367318, 2032977, 2032849)
the order of rows obtained is random.
is there any way i can get the rows in the same order as the ids in the
subquery? or is there a d
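One way to do this (a sketch; on modern versions unnest ... WITH ORDINALITY
works too): join against a VALUES list that carries the desired position.

SELECT u.*
FROM users u
JOIN (VALUES (2341548, 1), (2325251, 2), (2333130, 3)) AS v(id, ord)
  ON u.id = v.id
ORDER BY v.ord;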
> get involved every so often. I hope I can get the database to make the
> right thing easy and the wrong thing "impossible" for them.
>
> Any suggestions?
HTH.
Cheers,
Mark.
Any ideas on how I can make the normalization consistent.
(I can upload some dummy data and a dummy ddl if needed)
Mark
Hi,
Simple question:
If I change a hostname (Linux FC2) do I need to make any changes to
the PostgreSQL configuration?
Thanks,
Mark
after install?
If there is any better way to have postgresql installed on a usb drive
and have it executable from Knoppix - please let me know
Thanks,
Mark
Since it's going to be a development environment I don't need it
fast.
So, I would still prefer to go ahead with the USB drive.
Mark
--- James Neff <[EMAIL PROTECTED]> wrote:
> Mark wrote:
> > I would like to use postgresql with Knoppix. Sounds like a simple
> > i
7.4.X rpms always get installed into /usr,
which is a ramdrive in my case. Is it possible?
Thanks,
Mark
--- Merlin Moncure <[EMAIL PROTECTED]> wrote:
> On 3/23/07, Mark <[EMAIL PROTECTED]> wrote:
> > I would like to use postgresql with Knoppix. Sounds like a simple
> > idea :-)
lem I had was with creating links and if anybody has
an idea how it can be fixed, can you let me know.
Cheers!
Mark
--- Merlin Moncure <[EMAIL PROTECTED]> wrote:
> On 3/28/07, Mark <[EMAIL PROTECTED]> wrote:
> > Hi Merlin,
> > Can you point where I can find build instr
I can set a prefix when building postgresql from
source.
Any suggestion on how to install 2 different versions from rpm on a
single Linux machine?
Thanks,
Mark.
ently you can do this in some of the other pl languages though
(plperl for example).
-Mark.
Hi,
What are the recommendations about running vacuumdb?
How frequently does it need to be executed, and how will I know I have to
run it?
Can I run vacuumdb on a production system, or do I need to do it on a DB
with no users connected?
Thanks,
Mark.
Hi,
I guess this is simple, but I cannot find out how to run scripts in psql
(Linux).
What I would like to do is the following:
1. Create a table structure from scripts?
2. Preload data to remote Linux box (IP added to conf file)
Thanks,
Mark
Hi,
Is it possible to lock row(s) when updating a table, so that another update
(from a different session) will be rejected or put on hold until the lock
is released?
Thanks,
Mark.
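For reference, a minimal sketch using SELECT ... FOR UPDATE (table and column
names hypothetical):

BEGIN;
-- blocks other sessions trying to lock the same row until COMMIT/ROLLBACK:
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;
-- or, to reject immediately instead of waiting:
--   SELECT * FROM accounts WHERE id = 1 FOR UPDATE NOWAIT;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
COMMIT;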
Hi,
I have a table with 100K rows. One of the columns is a timestamp and
indicates when the row was inserted.
What will be the best way of getting the 10 latest rows from that table,
and of introducing partial data retrieval (rows 50-60, 100-120, etc.)?
Thanks,
Mark
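A sketch of the usual pattern, assuming an index on the timestamp column
(table and column names hypothetical):

CREATE INDEX t_inserted_at_idx ON t (inserted_at);

-- 10 latest rows:
SELECT * FROM t ORDER BY inserted_at DESC LIMIT 10;

-- rows 50-60 in that ordering (note: large OFFSETs get slow,
-- since the skipped rows are still read):
SELECT * FROM t ORDER BY inserted_at DESC LIMIT 11 OFFSET 49;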
crash on the client side, network interruptions,
etc.) Can LOCK be used in JDBC, or is it SQL92 standard?
Thanks a lot.
Mark
--- Michael Fuhr <[EMAIL PROTECTED]> wrote:
> On Thu, Dec 23, 2004 at 11:56:26AM -0800, Mark wrote:
>
> > Is it possible to lock row(s) when updating
How about having only one DB with multiple schemas and assigning a
DB user per schema?
Will this solution use the multiple CPUs? I think it should.
This is my 2 cents.
--- Jeff Davis <[EMAIL PROTECTED]> wrote:
> Benefits of multiple instances:
> (1) Let's say you're using the one-instance
Hi,
I have a small database, ~10 tables. Each table gets a few
inserts/updates/deletes a day. PostgreSQL has been running for a
month.
The load will increase in the near future: insert/update/delete
activity will be at least one every 5 minutes.
What maintenance should I need to do?
Thanks,
Mark
Hi,
I'm getting some errors in the log file saying "invalid character at
position
#20..." I know that this most likely means a query is wrong.
Is it possible to capture all queries that get sent, or at least the
invalid queries?
I'm using postgresql 7.4.3 on Red
is in seconds.
SELECT id
FROM mq
WHERE now() - start_date > time_to_live * interval '1 second';
I have the following table:
CREATE TABLE mq
(
msg_id INTEGER,
retry_date TIMESTAMP NOT NULL DEFAULT ('now'::text)::timestamp(1),
start_date TIMESTAMP NOT NULL DEFAULT ('now'::text)::timestamp(1),
time_to_
Hello,
When I run 'explain analyze' on a query, how do I know which index is
used, and whether it is used at all? What specific words should I look
for?
Does "Seq Scan" indicate that an index has been used?
How do I know that it was a Full Table
...
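By way of illustration (hypothetical plans): an index is in use when the plan
shows an "Index Scan using <index name>" node, while "Seq Scan" means the
whole table is read.

EXPLAIN ANALYZE SELECT * FROM users WHERE id = 42;
--   Index Scan using users_pkey on users ...     <- index used, named here

EXPLAIN ANALYZE SELECT * FROM users WHERE name LIKE '%x%';
--   Seq Scan on users ...                        <- full table scan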
is there any way to force usage of an index?
another question:
Can I define an index for _NOT_EQUAL_?
I have a column that can have 5 values and my where is
WHERE type <> 'A' OR type <> 'B'
_or_ is it better to use:
WHERE type ='C' OR type = 'D' O
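Two sketches relevant here (names hypothetical). You can't index <> directly,
but a partial index can cover a fixed exclusion, and enable_seqscan = off can
be used per session to test whether the planner could use an index at all:

-- test-only: discourage sequential scans in this session
SET enable_seqscan = off;

-- a partial index covering "everything except A and B":
CREATE INDEX t_type_not_ab_idx ON t (type)
    WHERE type NOT IN ('A', 'B');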
-postgresql-7.3.10-2
Any help will be gratefully accepted.
Ta in advance
Mark
abase maintenance. Is it possible that we are doing
something wrong?
What are the plans for future versions of pgsql? Will vacuum be optimized or
otherwise enhanced to execute more quickly and/or not lock tables?
Thanks,
Mark
PS
I posted more details to the hackers
unnecessary post.
On Wednesday 11 July 2001 15:39, Mark wrote:
> Is Postgresql ready for 24/7 uptime? Our tests have shown that vacuumdb
> requires downtime, and if one does this nightly as suggested, well, one has
> downtime, 40+ minutes in our case.
>
> My company wants to replac
interested.
Mark
dbc
'
gmake[1]: *** [all] Error 2
gmake[1]: Leaving directory `/usr/src/pgsql/postgresql-6.4.2/src/interfaces'
gmake: *** [all] Error 2
Has anyone seen this and can help me (or anyone even reading this)?
MkLinux DR3 (RedHat
aks when the
application is multi-threaded and the rules are not applied at the database
level.
Another solution I can think of is to just use a trigger to prevent the
duplicate rows.
Any thoughts are certainly appreciated. I can't do much about the data
model itself right now, I need to protect the integrity of the data.
Thanks!
-mark-
I have two tables that I want to link with a FK where the child table
record is "active".
Some googling shows that I could use a function and a check constraint on
the function, but that only works for inserts, not updates on table b.
create table a (id int, name text);
create table b (int id, bo
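A trigger-based sketch of the insert/update side, with hypothetical column
names (b.a_id references a.id, and a.active marks live parents); a full
solution also needs a trigger on table a to stop deactivating referenced rows:

CREATE OR REPLACE FUNCTION check_parent_active() RETURNS trigger AS $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM a WHERE id = NEW.a_id AND active) THEN
        RAISE EXCEPTION 'parent % is not active', NEW.a_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER b_parent_active
    BEFORE INSERT OR UPDATE ON b
    FOR EACH ROW EXECUTE PROCEDURE check_parent_active();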
I want to update a table to have the value of the occurrence number. For
instance, I have the below table. I want to update the number column to
increment the count of last name occurrences, so that it looks like this:
first last 1
second last 2
third last 3
first other 1
next other 2
Here's my
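A sketch of one way to do it with a window function, assuming PostgreSQL 8.4+
and hypothetical table/column names (people(first_name, last_name, num)):

UPDATE people p
SET num = s.rn
FROM (
    SELECT ctid,
           row_number() OVER (PARTITION BY last_name
                              ORDER BY first_name) AS rn
    FROM people
) s
WHERE p.ctid = s.ctid;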
I am reading through Postgres and PGStrom. Regarding the planning factors, I
need some clarifications. Can you help me with that?
The planner in Postgres checks different scan and join methods, then
finds the cheapest one and creates a query plan tree. While going for the same
thing on a GPU, the check
Thank you so much for your kind reply.
I am just curious about the planning factors on the GPU.
There can be more than one appropriate path in a query plan tree. How is the
decision for a particular path made, considering those planning
factors?
Thank you so much for your references.
How do the planning factors of PGStrom differ from the planning factors of
PostgreSQL?
Yeah, I think Kouhei Kaigai is one of the contributors, so I am expecting his
reply.
And thanks for your kind responses
Thanks for your response.
But planning a query execution on the GPU is different from planning a
query execution on the CPU, right?
Even considering cost calculation, the cost of executing a query on the GPU is
different from the cost of executing it on the CPU. How does this cost
calculation for the GPU occur?
Can you explain this statement: "check whether the scan qualifier can
be executable on GPU device"?
What are the scan qualifiers?
How do we determine whether they are device-executable or not?
The cost estimates are entirely based on the number of rows and the type of
scan. Then it will be the same for both CPU a
"fraction of the cost of executing the same portion of the plan using
the traditional CPU processing"
Can you explain this fraction in detail?
I need clarification on the query plan tree also.
Executing a query on the CPU is different from executing the same on the GPU,
so the plan also differs.
What functions (for example) are available/not available to be
transformed to GPU source code?
What factor value do you consider to be multiplied with the actual cost for
the CPU? For example, the default cpu_tuple_cost is 0.01.
Consider, for example, if the cost=0.00..458.00 for a seq scan, how c
Considering PGStrom, an extension of PostgreSQL-9.5.4, I tried opening that
file in NetBeans 8.1.
I opened PGStrom in NetBeans as File -> New Project -> C/C++ -> C/C++
Project with Existing Sources, then selected the folder that contains the
existing sources (PG_Strom), and then Finish.
It is showi
PostgreSQL has been successfully compiled in NetBeans 8.1. But how do I add its
extension PG_Strom into it?
Nope. I am not asking about installation instructions. I have installed it,
and I know how to run it from the command line.
I just wanted to compile it in NetBeans.
Yes, making the file is the problem. If you read my topic again, you may
see what the exact question is.
sql.org/developer/summerofcode.html> Google SoC page
Anyone know how to get object DDL SQL through a script? Ideas on
alternative approaches would also be appreciated.
Thanks,
Mark
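A couple of sketches (object names hypothetical). Whole-object DDL usually
comes from pg_dump; individual definitions can be pulled with the pg_get_*def
catalog functions:

-- from SQL:
SELECT pg_get_indexdef('users_pkey'::regclass);
SELECT pg_get_viewdef('my_view'::regclass);
SELECT pg_get_constraintdef(oid) FROM pg_constraint WHERE conname = 'my_fk';

-- from the shell, schema only:
--   pg_dump --schema-only -t users mydb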
On Delete) -
perhaps these property values will be easy to "guess" when recreating the
constraint. Example below ...
Thank you again, John. Cheers, Mark
Example:
Case 1: pg_get_constraintdef(oid) output:
"FOREIGN KEY (permission_id) REFERENCES auth_permission(id) DEFERRABLE
INI
Aha ... makes sense. Thank you, Tom.
Mark
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Saturday, June 16, 2007 11:21 AM
To: Mark Soper
Cc: 'John DeSoi'; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Dynamically generating DDL for postgresql obje
Our developers run on MacBook Pros w/ 2G memory and our production
hardware is dual dual-Core Opterons w/ 8G memory running CentOS 5. The
Macs perform common and complex Postgres operations in about half the
time of our unloaded production hardware. We've compared configurations
and the producti
configuration changes drew no measurable
change in performance. And that is when you know you are turning the
wrong knobs!
Scott Marlowe wrote:
> On Nov 9, 2007 10:55 PM, Mark Niedzielski <[EMAIL PROTECTED]> wrote:
>
>> Our developers run on MacBook Pros w/ 2G memory and our product
cannot re-install the version
that the database was last used with (it should have been first initialised
on 8.2, as I went to the beta to experiment with enum having recently
returned from MySQL).
Any help appreciated, including links to a download of beta1 that still
works.
--
Mark Walker
The cvs/svn route worked; I managed to dump out of beta 1 and now have my
database restored in RC1. Many thanks to all.
--
Mark Walker
This post is just to record an example of how to use the new window fn's in 8.4
to perform difference-between-row calculations.
To demonstrate, we create a table with 30 rows of data, two columns, one of
which contains the sequence 1..30, the other contains mod(c1,10).
So the table looks like th
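The post body was truncated in the archive; a minimal reconstruction of the
setup it describes, plus a difference-between-rows query using lag():

CREATE TABLE demo AS
    SELECT g AS c1, g % 10 AS c2
    FROM generate_series(1, 30) AS g;

SELECT c1, c2,
       c2 - lag(c2) OVER (ORDER BY c1) AS diff_from_previous_row
FROM demo;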
WAL block size to see if that had any effect, but
it does not. So is what I have read wrong? Is there a hard limit of 1600 that
you cannot get around?
- Mark
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Friday, November 12, 2010 12:24 AM
To: Mark Mitchell
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] More then 1600 columns?
"Mark Mitchell" writes:
> Is there a hard limit of 1600 that you cannot get around?
Yes.
Thanks for the info!
And yes, we do data analysis that tortures SQL, but SQL allows us to do so many
things quickly and less painfully. Better to torture the machines than to
torture ourselves….
- Mark
From:pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of
Apologies, Tom, I did not see that you had answered yes to my question about
the hard limit.
You have all been very helpful, I will give up on the 1600+ columns and look
into using hstore.
Cheers
- Mark
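For reference, a sketch of the hstore alternative to very wide tables
(CREATE EXTENSION syntax assumes PostgreSQL 9.1+; names hypothetical):

CREATE EXTENSION hstore;

CREATE TABLE samples (
    id    serial PRIMARY KEY,
    attrs hstore   -- thousands of key/value "columns" in one field
);

INSERT INTO samples (attrs) VALUES (hstore('col1500', '42'));
SELECT attrs -> 'col1500' FROM samples;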
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general