On 14.11.2013 02:26, Jeff Janes wrote:
On Wed, Nov 13, 2013 at 3:53 PM, Sergey Burladyan wrote:
Jeff Janes writes:
If I'm not mistaken, it looks like lazy_scan_heap(), called from
lazy_vacuum_rel()
(see [1]), skips pages even when it runs with scan_all == true;
lazy_scan_heap()
does not increment scann
Hi
For decades, this type of problem has been the meat and vegetables of
discussions about SQL programming and design.
One writer on this subject has stood out, thanks to his mental clarity
and ability to set out complicated concepts in a readily comprehensible
manner.
His name is Joe Celko
On 18/11/2013 02:16, Hengky Liwandouw wrote:
> Dear Friends,
>
> Please help with the SELECT command, as I have tried many times and
> still cannot display the result I want.
>
> I have looked for a solution on Google but still cannot find the
> right answer to this problem.
>
>
If the tables aren't huge, you're not concerned about optimization, and you
just want to get your numbers, I think something like this would do the
trick. I haven't actually tried it 'cause I didn't have easy access to
your tables:
SELECT
a.product_id,
a.product_name,
b.initial_stock_sum,
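Ken's query is cut off above; a hedged reconstruction of the kind of join it appears to describe (the table and column names products, stock_in and qty are my assumptions, not from the original post):

```sql
-- Sketch only: products/stock_in/qty are assumed names, not from the thread.
SELECT
    a.product_id,
    a.product_name,
    COALESCE(b.initial_stock_sum, 0) AS initial_stock_sum
FROM products a
LEFT JOIN (
    SELECT product_id, SUM(qty) AS initial_stock_sum
    FROM stock_in
    GROUP BY product_id
) b ON b.product_id = a.product_id;
```

The LEFT JOIN keeps products with no stock rows, which COALESCE then reports as zero.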
Thanks all for your concern and help.
I have tried David's suggestion and it works. As you all said, there are so
many important features in PostgreSQL. I really have to spend time studying it.
Previously I used WinDev to develop the front-end application, with HyperFileSQL
as the database server.
Thanks a lot Ken,
I will try it soon.
But when the table becomes huge (how big is 'huge' in Postgres?), how do I
optimize such a command?
I have indexes on all the important fields like date, productid, supplierid,
customerid and so on.
Optimization is really important, as I plan to keep all tra
In general, when I have to handle ledger-type data (which this problem
is), I tend to hold the data in three tables:
1. Master Ledger ( Product ID, Name, etc)
2. Master Ledger Balances(Product ID, Fiscal_Year, Opening Balance,
Net_Transaction_P1, Net_Transaction_P2, ... etc)
3. Master Ledger Transact
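The three-table layout above could be sketched as DDL like this (a minimal illustration; all names, types and the period columns are assumptions based on the description, not taken from an actual schema in the thread):

```sql
-- 1. Master Ledger: one row per product.
CREATE TABLE master_ledger (
    product_id  integer PRIMARY KEY,
    name        text NOT NULL
);

-- 2. Balances: opening balance plus per-period net movement, per fiscal year.
CREATE TABLE master_ledger_balances (
    product_id          integer REFERENCES master_ledger,
    fiscal_year         integer,
    opening_balance     numeric NOT NULL DEFAULT 0,
    net_transaction_p1  numeric NOT NULL DEFAULT 0,
    net_transaction_p2  numeric NOT NULL DEFAULT 0,
    PRIMARY KEY (product_id, fiscal_year)
);

-- 3. Transactions: the raw movements, summarized into table 2 periodically.
CREATE TABLE master_ledger_transactions (
    product_id  integer REFERENCES master_ledger,
    tx_date     date NOT NULL,
    quantity    numeric NOT NULL
);
```

The point of table 2 is that a current balance is opening_balance plus a few period columns, so reports never have to scan the whole transaction history.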
oka wrote:
> I have a question.
>
> There are the following data.
>
> create table chartbl
> (
> caseno int,
> varchar5 varchar(5)
> );
> insert into chartbl values(1, ' ');
> insert into chartbl values(2, '');
>
> The following two queries return the same result:
> select * from chart
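The question is truncated, but it most likely concerns trailing-space comparison semantics. In PostgreSQL, character(n) ignores trailing blanks in comparisons, while varchar does not; a quick demonstration (my example, not from the thread):

```sql
SELECT ' '::char(5) = ''::char(5);       -- true: char(n) pads, trailing blanks not significant
SELECT ' '::varchar(5) = ''::varchar(5); -- false: varchar keeps trailing blanks significant
```

So with a varchar(5) column, ' ' and '' are distinct values, and two queries comparing against them should only "look the same" if something (e.g. the client display) trims the space.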
Hello
We got this error in our logs yesterday.
2013-11-17 19:01:02.278 CET,,,29356,,52565a6f.72ac,1,,2013-10-10
09:42:39 CEST,,0,LOG,0,"could not truncate directory
""pg_subtrans"": apparent wraparound",""
2013-11-17 19:06:02.027 CET,,,2
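The "apparent wraparound" message while truncating pg_subtrans is usually worth following up with a check of how close each database is to transaction ID wraparound; a standard first diagnostic (not from the original post) is:

```sql
-- How old is the oldest unfrozen XID in each database?
-- Values approaching autovacuum_freeze_max_age mean an anti-wraparound
-- vacuum is due (or overdue).
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```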
On 11/17/2013 11:48 PM, David Johnston wrote:
> I am guessing that it is the need for the index to point to new versions of
> the physical record that churns the index so much and causes this kind
> of bloat?
Bingo.
> I am preparing to REINDEX the unique index and DROP the non-unique one ov
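For a busy table, a lower-impact alternative to a plain REINDEX (which takes an exclusive lock for the duration) is to build a replacement index concurrently and then drop the bloated one. A sketch, with placeholder table/index/column names:

```sql
-- Build a fresh copy of the unique index without blocking writes.
CREATE UNIQUE INDEX CONCURRENTLY mytable_col_uniq_new ON mytable (col);

-- Drop the bloated non-unique duplicate (DROP INDEX CONCURRENTLY needs 9.2+).
DROP INDEX CONCURRENTLY mytable_col_idx;
```

Note that if the old unique index backs a constraint, it cannot simply be dropped; the constraint has to be swapped via ALTER TABLE instead.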
Hi,
I have some issues building the pg_trgm module from source.
At first the regexport.h file was missing in /usr/include, so I got it.
Now I still need the regexport.c file and probably also the other one
You can see the files in this link:
http://git.postgresql.org/gitweb/?p=postgresql.git;a
On 11/18/2013 07:34 AM, Janek Sendrowski wrote:
Hi,
I have some issues building the pg_trgm module from source.
At first the regexport.h file was missing in /usr/include, so I got it.
Now I still need the regexport.c file and probably also the other one
You can see the files in this link:
htt
Adrian Klaver writes:
> On 11/18/2013 07:34 AM, Janek Sendrowski wrote:
>> I have some issues building the pg_trgm module from source.
> FYI I find those files in the source I downloaded from the Postgres site:
Sounds like Janek is trying to build 9.3 pg_trgm against a pre-9.3
server installation.
Hi,
Just wondering what kind of EXECUTE statement (within a function) I should
use to force the planner to use the index for the following:
SELECT pcode searchmatch, geometry FROM postcode
WHERE (replace(lower(pcode), ' '::text, ''::text)) LIKE
(replace((lower($1)::text),' '::text,''::text)||'%
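For the planner to use an index here at all, there has to be an expression index matching the left-hand side of the LIKE, and for left-anchored LIKE patterns in a non-C locale it needs the text_pattern_ops operator class. A sketch (index name is my choice):

```sql
-- Expression index matching replace(lower(pcode), ' ', '');
-- text_pattern_ops lets left-anchored LIKE use it in non-C locales.
CREATE INDEX postcode_norm_idx
    ON postcode ((replace(lower(pcode), ' '::text, ''::text)) text_pattern_ops);
```

Inside a PL/pgSQL function, writing the query with exactly the same expression (or running it via EXECUTE ... USING so the pattern is a runtime parameter) lets the planner match the index.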
Hi,
My current version is 9.2. I could just update it.
I got the pg_trgm from here:
http://git.postgresql.org/gitweb/?p=postgresql.git;a=tree;f=contrib/pg_trgm;hb=refs/heads/master
And the regex files from here:
http://git.postgresql.org/gitweb/?p=postgresql.git;a=tree;f=src/include/reg
On Sun, Nov 17, 2013 at 1:33 PM, Stefan Keller wrote:
> Hi Edson,
>
> On 2013/11/17 Edson Richter you wrote:
> > One question: would you please expand your answer and explain how would
> this adversely affect async replication?
>
> Is this a question or a hint (or both) :-)? Of course almost all
On 2013-11-18 04:37, Ken Tanzer wrote:
If the tables aren't huge, you're not concerned about optimization,
and you just want to get your numbers, I think something like this
would do the trick. I haven't actually tried it 'cause I didn't have
easy access to your tables:
SELECT
a.product_id
On Sun, Nov 17, 2013 at 11:12 PM, David Johnston wrote:
> Having recently had a pg_dump error out due to not having enough disk space,
> it occurs to me that it would be nice for pg_dump to remove the partial dump
> file it was creating (if possible/known) instead of having it sit around
> taking up tha
On Sun, Nov 17, 2013 at 4:02 PM, Stefan Keller wrote:
> 2013/11/18 Andreas Brandl wrote:
> > What is your use-case?
>
> It's geospatial data from OpenStreetMap stored in a schema optimized for
> PostGIS extension (produced by osm2pgsql).
>
> BTW: Having said (to Martijn) that using Postgres is p
On 11/18/2013 08:32 AM, Janek Sendrowski wrote:
Hi,
My current version is 9.2. I could just update it.
I got the pg_trgm from here:
http://git.postgresql.org/gitweb/?p=postgresql.git;a=tree;f=contrib/pg_trgm;hb=refs/heads/master
And the regex files from here:
http://git.postgresql.org/gitweb/?p=p
Janek Sendrowski wrote:
> My current version is 9.2. I could just update it.
> I got the pg_trgm from here:
> http://git.postgresql.org/gitweb/?p=postgresql.git;a=tree;f=contrib/pg_trgm;hb=refs/heads/master
Get a production release version from the snapshot tarball
downloads or use a URL that l
Hi Jeff and Martin
On 18. November 2013 17:44 Jeff Janes wrote:
> I rather doubt that. All the bottlenecks I know about for well cached
read-only workloads are around
> locking for in-memory concurrency protection, and have little or nothing
to do with secondary storage.
Interesting point. But
On Tue, Nov 19, 2013 at 02:39:17AM +0100, Stefan Keller wrote:
> Referring to the application is something you can always say - but it
> shouldn't prevent us from enhancing Postgres.
With respect, that sounds like a sideways version of, "You should
optimise for $usecase". You could be right, but I think
On Sun, Nov 17, 2013 at 10:48 PM, David Johnston wrote:
> I am preparing to REINDEX the unique index and DROP the non-unique one over
> the same field - probably Tuesday evening. Does everything I am saying here
> sound kosher or would someone like me to provide additional information?
[...]
>
We'd like to seek out your expertise on postgresql regarding this error that
we're getting in an analytical database.
Some specs:
proc: Intel Xeon X5650 @ 2.67Ghz dual procs 6-core, hyperthreading on.
memory: 48GB
OS: Oracle Enterprise Linux 6.3
postgresql version: 9.1.9
shared_buffers: 18GB
We'
On Mon, Nov 18, 2013 at 12:40 PM, Brian Wong wrote:
> We'd like to seek out your expertise on postgresql regarding this error
> that we're getting in an analytical database.
>
> Some specs:
> proc: Intel Xeon X5650 @ 2.67Ghz dual procs 6-core, hyperthreading on.
> memory: 48GB
> OS: Oracle Enterp
Jeff Janes wrote
> On Sun, Nov 17, 2013 at 11:12 PM, David Johnston <
> polobo@
> > wrote:
>
>> Having recently had a pg_dump error out due to not having enough disk it
>> occurs to me that it would be nice for pg_dump to remove the partial dump
>> file it was creating (if possible/known) instea
David Johnston wrote
>
> Jeff Janes wrote
>> On Sun, Nov 17, 2013 at 11:12 PM, David Johnston <
>> polobo@
>> > wrote:
>>
>>> Having recently had a pg_dump error out due to not having enough disk it
>>> occurs to me that it would be nice for pg_dump to remove the partial
>>> dump
>>> file it wa
On Mon, Nov 18, 2013 at 8:30 PM, Brian Wong wrote:
> I've tried every work_mem value from 1GB all the way up to 40GB, with no
> effect on the error. I'd like to think of this problem as a server process
> memory (not the server's buffers) or client process memory issue, primarily
> because when w
Hi All,
I have added one column of xml type; after adding it I am getting the
following error:
org.postgresql.util.PSQLException: ERROR: could not identify an equality
operator for type xml
If I remove the column, the following query works fine:
select * from (select * from KM_COURSE_MAST where ID in
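This error appears whenever DISTINCT, GROUP BY, UNION, or a row comparison has to compare an xml value, because the xml type deliberately has no equality operator. The usual workarounds are to list only the columns that need comparing, or to cast the xml column to text. A hedged sketch (column names are assumptions; the original query is truncated):

```sql
-- Assuming the failing outer query used DISTINCT over all columns:
SELECT DISTINCT id, name, course_xml::text AS course_xml
FROM km_course_mast
WHERE id IN (1, 2, 3);
```

Casting to text makes the values comparable; if the xml column is not actually needed in the result, simply leaving it out of the select list avoids the cast.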
gajendra s v wrote
> Hi All,
>
>
> I have added one column of xml type; after adding it I am getting the
> following error.
>
>
> org.postgresql.util.PSQLException: ERROR: could not identify an equality
> operator for type xml
>
> If I remove the column, the following query works fine
>
> select * fr
Hi Andrew
You wrote:
> And indeed, given the specifics of the use
> case you're outlining, it's as much a demonstration of that evaluation
> as a repudiation of it.
Maybe my use case seems to be a special one (to me, and to the over a million
users of OpenStreetMap, it's not).
Anyhow: That's why I'm inve