I have a server with a column of type timestamp without time zone.
Is the time zone information still saved?
If I select the column, it shows the server's timestamp with a time
zone.
But if I move the data from Eastern to Central time, will the timestamp with
time zone be correct?
Or will it just not make the adjustment?
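A minimal sketch of the distinction, with invented table and column names: a plain timestamp stores the literal wall-clock value and never adjusts, while timestamptz normalizes to UTC and renders in the session's time zone.

CREATE TABLE ts_demo (plain timestamp, withtz timestamptz);
INSERT INTO ts_demo VALUES (now(), now());

SET timezone = 'America/New_York';
SELECT * FROM ts_demo;   -- withtz displays in Eastern time

SET timezone = 'America/Chicago';
SELECT * FROM ts_demo;   -- withtz shifts to Central; plain is unchanged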
These Codes of Conduct basically amount to a "code of wrongthink". This is
best illustrated when some of their advocates, for example in the
Node project, make repositories called "mansplain" and "misandry", or when
speakers at OSCON are caught with mugs reading "male tears" and using the
"#ki
> test=> delete from cash_journal where ARRAY[click, cash_journal_id] NOT in
> (select max(ARRAY[click,cash_journal_id]) from cash_journal group by
> fairian_id);
> DELETE 7
For what it's worth, we've run into *severe* performance issues using in() if
there are a large number of values in conjunct
On Saturday, December 05, 2015 11:08:05 AM Berend Tober wrote:
> WITH max_click AS (
>     SELECT
>         cash_journal.fairian_id,
>         max(cash_journal.click) AS click
>     FROM cash_journal
>     GROUP BY cash_journal.fairian_id
> )
> delete from cash_journal j
>     using max_click b
On Monday, December 14, 2015 05:25:16 PM Adrian Klaver wrote:
> > FOLLOWUP QUESTION: is there a way to ask the query planner what
> > tables/fields were output in a database result?
>
> Just dawned on me, are you asking if EXPLAIN can output more detailed
> information?
Ha ha, in another post, I
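If the question is which output columns a query emits, EXPLAIN's VERBOSE option annotates each plan node with exactly that; a minimal sketch reusing the cash_journal table from the thread above:

EXPLAIN (VERBOSE)
SELECT fairian_id, max(click)
FROM cash_journal
GROUP BY fairian_id;
-- each plan node now carries an "Output: ..." line listing its columns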
On Monday, December 14, 2015 05:20:52 PM Adrian Klaver wrote:
> > FOLLOWUP QUESTION: is there a way to ask the query planner what
> > tables/fields were output in a database result?
>
> I am not following, that would be in the query output would it not? A
> more detailed explanation of what you w
Is there a way to set PG field-level read permissions so that a deny doesn't
cause the query to bomb, but instead nullifies the fields for which permission
is denied?
In our web-based app, we have a request to implement granular permissions:
table/field-level permissions. E.g., userX can't read cu
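One hedged sketch of the usual workaround (all names invented): leave the base table locked down and expose a view that nulls out columns the current role may not read, so queries succeed rather than erroring.

CREATE VIEW customers_masked AS
SELECT id,
       name,
       CASE WHEN pg_has_role(current_user, 'finance', 'MEMBER')
            THEN credit_card
       END AS credit_card      -- NULL when permission is denied
FROM customers;
GRANT SELECT ON customers_masked TO userx;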
I have a horribly-performing query similar to the one below, and I'd like to
convert it to use a "WITH mytable AS ( ... )" clause without having to
re-architect my code. For some reason, a WITH prefix generally seems to run
much faster than an IN() subclause, even when returning identical results.
(runs in 1/4t
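A minimal sketch of the conversion described (table names invented). Before PostgreSQL 12 a CTE is an optimization fence that is materialized once, which is often why it beats a repeatedly re-evaluated IN() subclause:

WITH mytable AS (
    SELECT id FROM big_table WHERE some_flag
)
SELECT t.*
FROM other_table t
JOIN mytable m ON m.id = t.ref_id;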
On Wednesday, September 30, 2015 03:49:44 PM Keith Fiske wrote:
> We've run postgres on ZFS for years with great success (first on
> OpenSolaris, now on OmniOS, and I personally run it on FreeBSD). The
> snapshotting feature makes upgrades on large clusters much less scary
> (snapshot and revert if
On Wednesday, September 30, 2015 09:58:08 PM Tomas Vondra wrote:
> On 09/30/2015 07:33 PM, Benjamin Smith wrote:
> > On Wednesday, September 30, 2015 02:22:31 PM Tomas Vondra wrote:
> >> I think this really depends on the workload - if you have a lot of
> >> random w
On Wednesday, September 30, 2015 02:22:31 PM Tomas Vondra wrote:
> I think this really depends on the workload - if you have a lot of
> random writes, CoW filesystems will perform significantly worse than
> e.g. EXT4 or XFS, even on SSD.
I'd be curious about the information you have that leads you
On Tuesday, September 29, 2015 10:35:28 AM John R Pierce wrote:
> On 9/29/2015 10:01 AM, Benjamin Smith wrote:
> > Does anybody here have any recommendations for using PostgreSQL 9.4
> > (latest) with ZFS?
>
> For databases, I've always used mirrored pools, not raidz*.
Does anybody here have any recommendations for using PostgreSQL 9.4 (latest)
with ZFS?
We've been running both on ZFS/CentOS 6 with excellent results, and are
considering putting the two together. In particular, the CoW nature (and
subsequent fragmentation/thrashing) of ZFS becomes largely irr
Hello,
I have a rather big query which should match IDs from various tables
together. To be able to use multiprocessing, I'm currently using
PostgreSQL 9.5-dev because of the SKIP LOCKED feature.
SELECT
start,
destination,
ST_Distance(start_geom, end_geom) AS distance_meter
FROM (
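The full query is cut off above; a minimal sketch of the SKIP LOCKED pattern itself (9.5+, names invented), letting each worker claim a disjoint batch of rows without blocking the others:

BEGIN;
SELECT id
FROM work_queue
ORDER BY id
LIMIT 100
FOR UPDATE SKIP LOCKED;
-- process the claimed rows, then COMMIT to release the locks
COMMIT;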
01:55 GMT LOG: autovacuum launcher started
Thanks!
--
Benjamin Rutt
We're trying to use BDR replication with Django. Django's default primary
keys are SERIAL columns, a pseudo-type backed by sequences (
http://www.postgresql.org/docs/devel/static/datatype-numeric.html#DATATYPE-SERIAL
). BDR requires a global sequence to be defined using an explicit U
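The exact statement is truncated above; as a hedged sketch based on the BDR documentation of that era (the syntax varied across BDR releases), the idea was to declare the sequence with the bdr sequence implementation and point the column default at it, rather than relying on SERIAL:

CREATE SEQUENCE django_model_id_seq USING bdr;
ALTER TABLE django_model
    ALTER COLUMN id SET DEFAULT nextval('django_model_id_seq');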
Hi all,
I'm running PostgreSQL 9.1 on a fairly beefy server with a lot of RAM,
so I generally want work_mem set pretty high. One of my tables has a
GIN index, and, as a consequence of the high work_mem setting, its
fastupdate pending list can grow very large. This leads to the
occasional IN
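A minimal sketch of the usual mitigation on 9.1 (index and table names invented): disable the pending list for that index, so inserts pay the full GIN update cost up front instead of accumulating a huge deferred batch:

ALTER INDEX my_gin_idx SET (fastupdate = off);
-- anything already sitting in the pending list is flushed by vacuum:
VACUUM my_table;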
EXPLAIN does not appear to work on ALTER TABLE statements:
=> EXPLAIN ALTER TABLE "foo" ALTER COLUMN "bar" SET NOT NULL;
ERROR: syntax error at or near "ALTER"
LINE 1: explain ALTER TABLE "foo" ALTER COLUMN "bar" SET NOT NULL;
^
Zev
On 02/25/2014 01:56 PM, Sameer Kumar wrote:
To be clear, this is with PostgreSQL 9.1. Also, if there is some other
way of doing this, I'd be interested in other methodologies as well.
Zev
On 02/24/2014 10:41 PM, Zev Benjamin wrote:
Hi all,
I'm sure this has been answered somewhere, but I was not able to find
anything i
Hi all,
I'm sure this has been answered somewhere, but I was not able to find
anything in the list archives.
I'm conceptually trying to do
ALTER TABLE "foo" ADD COLUMN "bar" boolean NOT NULL DEFAULT False;
without taking any noticeable downtime. I know I can divide the query
up like so:
A
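The split alluded to is cut off; a minimal sketch of the usual sequence, where each step takes only a brief lock instead of one long table rewrite held under an exclusive lock:

ALTER TABLE foo ADD COLUMN bar boolean;               -- instant, no rewrite
ALTER TABLE foo ALTER COLUMN bar SET DEFAULT false;   -- affects new rows only
UPDATE foo SET bar = false WHERE bar IS NULL;         -- backfill, batched if large
ALTER TABLE foo ALTER COLUMN bar SET NOT NULL;        -- one validating scan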
Hi,
Could anyone explain the meaning of the bit fields in struct
HeadlineWordEntry? Specifically, I'm not completely sure about
selected, in, replace, repeated, and skip.
Thanks,
Zev
On 11/15/2013 07:40 PM, Zev Benjamin wrote:
One problem that I've run into here is that I would also like to
highlight matched text in my application. For my existing search
solution, I do this with ts_headline. For partial matches, it's
unfortunately not just a matter of searchi
?
Zev
On 12/02/2013 01:24 PM, Pavel Stehule wrote:
2013/12/2 Zev Benjamin <zev-pg...@strangersgate.com>
Hrm. Conceptually, I think you actually want something like:
CREATE OR REPLACE FUNCTION public.reduce_dim(anyarray)
RETURNS SETOF anyarray
LANGUAGE plpgsql
reduce_dim(array[array[1, 2], array[2, 3]]);
 reduce_dim
------------
 {1,2}
 {2,3}
(2 rows)
Regards
Pavel Stehule
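The function body is cut off above; a sketch consistent with the output shown (FOREACH ... SLICE requires 9.1 or later):

CREATE OR REPLACE FUNCTION public.reduce_dim(anyarray)
RETURNS SETOF anyarray
LANGUAGE plpgsql AS $function$
DECLARE
    s $1%TYPE;
BEGIN
    -- SLICE 1 walks the outer dimension one sub-array at a time
    FOREACH s SLICE 1 IN ARRAY $1 LOOP
        RETURN NEXT s;
    END LOOP;
    RETURN;
END;
$function$;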
2013/11/28 Zev Benjamin <zev-pg...@strangersgate.com>
It appears that unnest, when called on a multi-dimensional array,
effectively flattens the array first. For e
Thanks for the explanation and examples!
Zev
On 11/28/2013 10:03 AM, Tom Lane wrote:
David Johnston writes:
Zev Benjamin wrote
It appears that unnest, when called on a multi-dimensional array,
effectively flattens the array first. For example: ...
Multidimensional arrays do have
It appears that unnest, when called on a multi-dimensional array,
effectively flattens the array first. For example:
=> select * from unnest(array[array[1, 2], array[2, 3]]);
 unnest
--------
      1
      2
      2
      3
(4 rows)
while I would have expected something like the following:
=> s
On 11/14/2013 10:09 AM, Alexander Farber wrote:
pgtune has produced the following for my server (the specs:
http://www.hetzner.de/en/hosting/produkte_rootserver/px60ssd ):
default_statistics_target = 50
maintenance_work_mem = 1GB
constraint_exclusion = on
checkpoint_completion_target = 0.9
effec
On 11/06/2013 02:04 PM, bricklen wrote:
On Wed, Nov 6, 2013 at 10:53 AM, Zev Benjamin
<zev-pg...@strangersgate.com> wrote:
On 11/06/2013 01:47 PM, bricklen wrote:
On Wed, Nov 6, 2013 at 10:45 AM, Zev Benjamin
<zev-pg...@strangersga
On 11/06/2013 01:47 PM, bricklen wrote:
On Wed, Nov 6, 2013 at 10:45 AM, Zev Benjamin
<zev-pg...@strangersgate.com> wrote:
Hi,
I have Postgres full text search set up for my application and it's
been working great! However, my users would like their searches to
Hi,
I have Postgres full text search set up for my application and it's been
working great! However, my users would like their searches to turn up
parts of URLs. For example, they would like a search for "foobar" to
turn up a document that contains the string
"http://example.com/foobar/blah
e to
be careful to have swap activated, and SHMPAGES tuned, as well as
SHMMAX.
Otherwise it runs like a charm.
--
Benjamin Henrion
FFII Brussels - +32-484-566109 - +32-2-3500762
"In July 2005, after several failed attempts to legalise software
patents in Europe, the patent establishment change
I'm looking to expand my old home server running on Ubuntu Server.
I only have a ~100 GB system drive (4 HDs, RAID 5, SAS). Backups are to
a USB 1.1 drive (2 TB).
I do have lots of spare old SATA HDs and a spare 1 Gb connection to the server box.
I have been looking into getting an iSCSI system to run my /data
Hello,
I'm looking to do something like:
select month, mb_size(description) from data where year = 2012 group by month
Also, I am looking for the raw size, not the compressed size.
Thanks
Ben
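On the raw-versus-compressed point: octet_length() detoasts and reports uncompressed bytes, while pg_column_size() reports the stored (possibly compressed) size. A minimal sketch against the table as described:

SELECT month,
       pg_size_pretty(sum(octet_length(description))::bigint)   AS raw_size,
       pg_size_pretty(sum(pg_column_size(description))::bigint) AS stored_size
FROM data
WHERE year = 2012
GROUP BY month;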
Hi,
I am trying pgpool2 to split read and write queries, where reads
should go to the slave server (streaming replication) and writes to
the master server.
Does anybody have a config file that works for pgpool2?
Best,
--
Benjamin Henrion
FFII Brussels - +32-484-566109 - +32-2-3500762
"In July
SCSI
> too, in RAID 10, but the dump takes over 2-3 hours (60 GB database)
> and the CPU consumption during this time is huge.
I wrote a bunch of shell scripts tools to backup postgres 9.1 with
rsync/ccollect (another hardlink tool), I might find the time to
publish it on github once I fin
Hello,
I have a data set where each object is represented in a metric
space with 32 dimensions (i.e., each object is represented by 32 numbers).
Is there a way to represent such an object in PostgreSQL so that I can
perform KNN?
Thanks,
Benjamin
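A hedged sketch using the contrib cube extension (names invented; index-assisted ordering by the <-> Euclidean distance operator needs PostgreSQL 9.6 or later, and cube's default cap is 100 dimensions, so 32 fits):

CREATE EXTENSION cube;
CREATE TABLE objects (id serial PRIMARY KEY, vec cube);
CREATE INDEX objects_vec_idx ON objects USING gist (vec);

-- the 10 nearest neighbours of a 32-dimensional query point:
SELECT id
FROM objects
ORDER BY vec <-> cube(array[0.1, 0.2 /* ...all 32 coordinates... */])
LIMIT 10;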
On Thursday, November 03, 2011 10:59:37 AM you wrote:
> There's a pretty varied mix of speed, durability, and price with any
> SSD based architecture, but the two that have proven best in our
> testing and production use (for ourselves and our clients) seem to be
> Intel (mostly 320 series iirc), a
On Wednesday, November 02, 2011 01:01:47 PM Yeb Havinga wrote:
> Could you tell a bit more about the sudden death? Does the drive still
> respond to queries for smart attributes?
Just that. It's almost like somebody physically yanked them out of the
machine, after months of 24x7 perfect performa
On Wednesday, November 02, 2011 11:39:25 AM Thomas Strunz wrote:
> I have no idea what you do but just the fact that you bought ssds to
> improve performance means it's rather high load and hence important.
Important enough that we back everything up hourly. Because of this, we
decided to give
Well,
After reading several glowing reviews of the new OCZ Vertex3 SSD last spring,
we did some performance testing in dev on RHEL6. (CentOS)
The results were nothing short of staggering. Complex query results returned
in 1/10th the time as a pessimistic measurement. System loads dropped from
Further to Scott's comment, we are running our application platform on
nginx/php (using php-fpm).
It scales very well and it is extremely fast.
When running under Apache, we had to constantly restart the apache
service because it could not handle the load (at 150 concurrent users
hitting the app it
On Sunday, April 17, 2011 01:55:02 AM Henry C. wrote:
> On Thu, April 14, 2011 18:56, Benjamin Smith wrote:
> > After a glowing review at AnandTech (including DB benchmarks!) I decided
> > to spring for an OCZ Vertex 3 Pro 120 for evaluation purposes. It cost
> > about $300
On Friday, April 15, 2011 09:50:57 AM Tom Lane wrote:
> If you simply unpacked the tar archive and started a postmaster on that,
> you'd be pretty much guaranteed to get a corrupt database. The tar
> archive is not a valid snapshot by itself --- you have to replay
> whatever WAL was generated duri
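A minimal sketch of the safe file-level route being described (9.2 or later; paths are placeholders): pg_basebackup brackets the copy and includes the WAL needed to reach consistency:

pg_basebackup -D /backups/base -Ft -z -X fetch
# the resulting tar restores to a state that replays cleanly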
After a glowing review at AnandTech (including DB benchmarks!) I decided to
spring for an OCZ Vertex 3 Pro 120 for evaluation purposes. It cost about $300
with shipping, etc., and at this point, won't be putting any
Considering that I sprang for 96 GB of ECC RAM last spring for around $5000,
eve
The speed benefits of SSDs as benchmarked would seem incredible. Can anybody
comment on SSD benefits and problems in real life use?
I maintain some 100 databases on 3 servers, with 32 GB of RAM each and an
extremely rich, complex schema. (300+ normalized tables)
I was wondering if anybody her
Try this:
http://lmgtfy.com/?q=web+hosting+postgresql
On Sunday, March 06, 2011 11:33:01 am Eduardo wrote:
> At 17:24 06/03/2011, you wrote:
> >On 3/5/2011 4:08 PM, matty jones wrote:
> >>I already have a domain name but I am looking for a hosting company
> >>that I can use PG with. The few I
I'd also add: run pg_tune on your server. Made a *dramatic* difference for us.
On Friday, February 25, 2011 05:26:56 am Vick Khera wrote:
> On Thu, Feb 24, 2011 at 6:38 PM, Aleksey Tsalolikhin
>
> wrote:
> > In practice, if I pg_dump our 100 GB database, our application, which
> > is half Web f
On Wednesday, August 18, 2010 04:58:08 pm Joshua D. Drake wrote:
> Well if you are just using it for updates to the schema etc... you
> should only need to launch a single connection to each database to make
> those changes.
And that's exactly the problem. On each server, we have at least dozens o
On Wednesday, August 18, 2010 08:40:21 pm Adrian von Bidder wrote:
> Heyho!
>
> On Thursday 19 August 2010 01.32:06 Benjamin Smith wrote:
> > This way we can be sure that either all the databases are in synch, or
> > that we need to rollback the program patch/update.
>
On Thursday, May 13, 2010 11:51:08 pm Brian Modra wrote:
> Maybe the best way to solve this is not to do automatic distribution
> of the data, but rather to provide tools for implementing distributed
> references and joins.
Here's my vote! I'd *LOVE* it if I could do a simple cross-database join
Is there a way to update a number of databases hosted on a single server
without opening a separate psql connection to each database?
We have a cluster of servers hosting an application on Postgres. Right now, we
have dozens of databases per server, enough that we're starting to have
problems
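Absent server-side support for cross-database statements, a minimal shell sketch (the patch file name is invented) that applies one script to every non-template database on the server:

for db in $(psql -At -c "SELECT datname FROM pg_database WHERE NOT datistemplate"); do
    psql -d "$db" -f patch.sql
done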
"A deep unwavering belief is a sure sign that you're missing something."
-- Unknown
I had no intention of sparking an ideological discussion.
I read Joe's article referenced previously, a simple case for using a
normalized database. I
I've used this same concept in subqueries for a very long time. Doing this
allows me to "dive in" and get other values from the joined table, rather than
just the thing that we're getting the most of.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
"I kept looking fo
I have some questions about the best way to best use foreign keys in complex
schemas. It's becoming cumbersome to manage a large set of foreign keys - is
there a better way?
// FOUNDATIONAL //
Let's say that you want to keep addresses, and one of the values that you need
to keep is the state
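A minimal sketch of the foundational pattern being set up (names invented): a single lookup table of states, referenced from wherever an address is stored:

CREATE TABLE states (
    code char(2) PRIMARY KEY,
    name text NOT NULL
);

CREATE TABLE addresses (
    address_id serial PRIMARY KEY,
    street     text NOT NULL,
    state_code char(2) NOT NULL REFERENCES states (code)
);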
pick which one you're more comfortable with (BSD
ports style, or Linux/Debian style).
--
Benjamin Reed a.k.a. Ranger Rick
Fink, KDE, and Mac OS X development
Blog: http://www.raccoonfink.com/
Music: http://music.raccoonfink.com/
--
Sent via pgsql-general mailing list (pgsql-genera
should not really be touched by anyone
but Apple (or the system vendor, for that matter, on any unix).
/Library is searched automatically by the linker when doing
framework-style builds, without needing -F to find a specific location
for it.
--
Benjamin Reed a.k.a. Ranger Rick
Fink, KDE, and Mac
ries rather than a permanent
>> library solution.
>>
>> Today, I'm going to follow instructions here:
>> http://developer.apple.com/internet/opensource/postgres.html
>
> That's very old. I'd stick with the installer or MacPorts.
Or Fink. ;)
--
Benja
pq you'd have /Library/Frameworks/pq.framework (and all the
files/directories as required) and then people could compile with
"-framework pq" and the OSX linker would do the write thing without
having to dirty a system library path (/usr/lib) with symlinks.
--
Benjamin Reed a.k.a
ere you need to know your way around a Makefile..."
In message <[EMAIL PROTECTED]> Tom Lane <[EMAIL PROTECTED]> writes:
> Greg Smith <[EMAIL PROTECTED]> writes:
> > On Thu, 26 Jun 2008, Benjamin Weaver wrote:
> >> I have heard of problems arisi
L5335 2.0 GHz.
2. Are there performance problems running PostgreSQL 8.3 on a 64-bit
processor?
Thanks in advance.
Yours Ben Weaver
--
Benjamin Weaver
Faculty Research Associate, Imaging Papyri, Greek Fragments Projects, Oxford
email: [EMAIL PROTECTED]
phone: (0)1865 610236
Hello,
Does anybody know of the performance impact of using uuid over int4?
Specifically, I am assuming that it will be about 4 times slower since
it is 128 bits. Is this correct?
Benjamin
Has anybody had a good experience going from Npgsql to
Corelab:PostgreSQLDirect? I am considering migrating because it offers
support for COPY but I am wondering about other functionality and
performance as well. Any feedback would be greatly appreciated.
Benjamin
to load data?
Benjamin
at doing a "shutdown" (centos: /sbin/service postgresql
stop) will be considered a crash?
Benjamin
down read-write-create-delete software!
Thanks again for your help,
Ben
In message <[EMAIL PROTECTED]> Tom Lane <[EMAIL PROTECTED]>
writes:
> Benjamin Weaver <[EMAIL PROTECTED]> writes:
> > I AM in fact running the db on Linux. Redhat 9. Are the encoding
paramete
PROTECTED]>
writes:
> Benjamin Weaver <[EMAIL PROTECTED]> writes:
> > I AM in fact running the db on Linux. Redhat 9. Are the encoding
parameters
> > wrong for Linux?
>
> Hmm ... RH 9 is awfully old. It's at least conceivable that you're
> getting bit by so
server box.
Ben
In message <[EMAIL PROTECTED]> Gregory Stark
<[EMAIL PROTECTED]> writes:
> "Benjamin Weaver" <[EMAIL PROTECTED]> writes:
>
> > Tom,
> >
> > Thanks. I am running:
> >
> > Postgres 8.1.4
> > server_encoding
Tom,
To be more precise, the mixed queries "fail" in that they return hits of 0
rows, when they should return more than 0 rows.
Ben
In message <[EMAIL PROTECTED]> Tom Lane <[EMAIL PROTECTED]>
writes:
> Benjamin Weaver <[EMAIL PROTECTED]> writes:
> > I ha
Tom,
Thanks. I am running:
Postgres 8.1.4
server_encoding UTF8
lc_collate en_GB.UTF-8
lc_ctype en_GB.UTF-8
Ben
In message <[EMAIL PROTECTED]> Tom Lane <[EMAIL PROTECTED]>
writes:
> Benjamin Weaver <[EMAIL PROTECTED]> writes:
> > I have the following
xt LIKE '%Smith%'
What must I do to ensure that mixed-term searches of the first kind succeed?
Thanks in advance,
Ben Weaver
--
Benjamin Weaver
Faculty Research Associate, Imaging Papyri Projects, Herculaneum Society, Oxford
email: [EMAIL PROTECTED]
phone: (
In what order should I :
- COPY data
- Create indexes
- Create Trigger
- Vacuum
?
Currently I am:
1. Create table
2. Create trigger for updates
3. Create indexes including gin
4. Vacuum
Benjamin
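A minimal sketch of the commonly recommended order (names invented): COPY first, then index and trigger, so rows are not indexed one at a time and the trigger never fires during the bulk load:

CREATE TABLE documents (id serial PRIMARY KEY, body text, idxfti tsvector);

COPY documents (body) FROM '/tmp/documents.txt';                -- bulk load
UPDATE documents SET idxfti = to_tsvector('english', body);     -- populate once
CREATE INDEX documents_fti_idx ON documents USING gin (idxfti); -- one index build
CREATE TRIGGER documents_fti_update                             -- future rows only
    BEFORE INSERT OR UPDATE ON documents FOR EACH ROW
    EXECUTE PROCEDURE tsvector_update_trigger(idxfti, 'pg_catalog.english', body);
VACUUM ANALYZE documents;                                       -- fresh statistics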
On Aug 27, 2007, at 7:59 AM, Tom Lane wrote:
Benjamin Arai <[EMAIL PROTECTED]> writes
Oh, I see. I didn't look carefully at the EXPLAIN ANALYZE I posted.
So, is there a solution to the rank problem?
Benjamin
On Oct 11, 2007, at 8:53 AM, Tom Lane wrote:
Benjamin Arai <[EMAIL PROTECTED]> writes:
It appears that the ORDER BY rank operation is the slowing factor.
It appears that the ORDER BY rank operation is the slowing factor.
If I remove it then the query is pretty fast. Is there another way
to perform ORDER BY such that it does not do a sort?
Benjamin
On Oct 5, 2007, at 3:57 PM, Benjamin Arai wrote:
On Oct 5, 2007, at 8:32 AM, Oleg Bartunov
On Oct 5, 2007, at 8:32 AM, Oleg Bartunov wrote:
On Fri, 5 Oct 2007, Tom Lane wrote:
Benjamin Arai <[EMAIL PROTECTED]> writes:
# explain analyze select * FROM fulltext_article, to_tsquery
('simple','dog') AS q WHERE idxfti @@ q ORDER BY rank(idxfti, q
le;
 count
--------
 933001
(1 row)
# select COUNT(*) FROM fulltext_article, to_tsquery('simple','blue &
green') AS q WHERE idxfti @@ q;
 count
-------
   6308
(1 row)
Benjamin
On Wednesday 19 September 2007, Bjørn T Johansen wrote:
> It's a Dell server with the following spec:
>
> PE2950 Quad-Core Xeon E5335 2.0GHz, dual
> 4GB 667MHz memory
> 3 x 73GB SAS 15000 rpm disk
> PERC 5/I Integrated controller card (8 ports, 256MB cache, battery backup) x
6 backplane
Asking
On Wednesday 19 September 2007, Chester wrote:
> Hi
>
> I have a question regarding foreign keys; I just cannot get it to create
> them for me. I must be doing something wrong but I have no idea what
> that might be :)
>
> I have a table "clients"
>
> clientID (primary)
> ticode
> Firstname
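For reference, a minimal sketch of attaching such a key after the fact (the child table is invented); note the referenced column must be a primary key or carry a unique constraint:

ALTER TABLE orders
    ADD CONSTRAINT orders_clientid_fkey
    FOREIGN KEY (clientID) REFERENCES clients (clientID);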
I'm using 8.1 RPMs for CentOS and so far, it's been great.
Now, I'm going to enable SSL. I had no trouble with the instructions in the
documentation for server-only certificates, and verified that psql (Linux)
acknowledges the SSL connection.
But I am stumped as to how to create a client cert
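A hedged sketch of issuing a client certificate from the same CA the server trusts (file names are placeholders; psql reads ~/.postgresql/postgresql.crt and postgresql.key by default):

openssl req -new -nodes -keyout postgresql.key -out postgresql.csr \
    -subj "/CN=dbuser"             # CN conventionally matches the database user
openssl x509 -req -in postgresql.csr -CA root.crt -CAkey root.key \
    -CAcreateserial -out postgresql.crt -days 365
chmod 0600 postgresql.key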
Ok, this appears to have worked but I have to check for exception
code "OTHERS" because I could not figure out what the actual code
being thrown was. Is there a specific exception code for:
ERROR: no operand in tsearch query: "("
Thanks for the help!
Benjamin
On Sep
That is helpful, but these functions do not help me detect errors in
queries such as "(moose & frog" where the closing parenthesis is
missing. I may just have to write a lexical analyzer.
Benjamin
On Sep 8, 2007, at 10:45 PM, Oleg Bartunov wrote:
There are two useful functions -
Is there a way to pass a query to PostgreSQL to check if the
Tsearch2 search text is valid? For example,
SELECT to_tsquery('default', '!');
returns an error. I want to know if there is a way get true/false
for the '!' portion of the query?
Benjamin
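A hedged sketch of wrapping the check in a function (name invented): on 8.3+ core tsearch the condition raised is syntax_error, while older tsearch2 may indeed require WHEN OTHERS, as the follow-up above found:

CREATE OR REPLACE FUNCTION is_valid_tsquery(q text)
RETURNS boolean LANGUAGE plpgsql AS $$
BEGIN
    PERFORM to_tsquery(q);
    RETURN true;
EXCEPTION WHEN syntax_error THEN
    RETURN false;
END;
$$;

SELECT is_valid_tsquery('!');             -- false
SELECT is_valid_tsquery('moose & frog');  -- true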
Why is a trigger faster than doing an ALTER after the table is created? I
thought a trigger would be slower because it would be invoked on every
iteration (each new row inserted) during the COPY process.
Benjamin
On Aug 26, 2007, at 8:43 PM, Tom Lane wrote:
Gregory Stark <[EMAIL PROTEC
copy data into
the idxFTI field during the copy process? Is there anything else I
can do to get my loading process to perform similar to backup/restore?
Does pg_dump also dump the indexes? That would explain why it is so
much faster...
Benjamin
---(end of
this database using tsearch2, so I
am assuming all tables are going to be queried each time since the
text is not partition by any specific constraint - e.g., >R goes to
table 1 and <=R goes to table 2.
Benjamin
On Aug 25, 2007, at 11:18 AM, Joshua D. Drake wrote:
As stated in the previous email if I use partitioning then queries
will be executed sequentially - i.e., instead of log(n) it would be
(# partitions) * log(n). Right?
Benjamin
On Aug 25, 2007, at 9:18 AM, Joshua D. Drake wrote:
This is kind of disappointing; I was hoping there was more that could be
done.
There has to be another way to do incremental indexing without
losing that much performance.
Benjamin
On Aug 24, 2007, at 6:29 PM, Joshua D. Drake wrote
indexes and etc) and accessing them all
through a view. This way I only have to index the new data each month.
Does this work? Does a view with N tables make it N times slower for
tsearch2 queries? Is there a better solution?
Benjamin
6 | Gettysburg address quotation
(1 row)
Which would mean I would have to create index on strMessage. Right?
Benjamin
On Aug 5, 2007, at 7:17 PM, Tom Lane wrote:
Benjamin Arai <[EMAIL PROTECTED]> writes:
I am actually creating a GIN index on another field but I need to
index th
original "text" field to perform exact phrase matches.
Does anybody know how to fix this or alternatively perform exact
phrase matches on a GIN index?
Thanks in advance!
Benjamin
working great from an indexing standpoint. But I want to
parallelize searches if possible to reduce the performance loss of having
multiple tables.
Benjamin
ows
GIN_FUZZY_SEARCH_LIMIT=0, disabled by default
When I do a search with, say, LIMIT 100, isn't this essentially the same thing?
Benjamin
> Benjamin,
>
> what version of postgres and what type of index you used ?
> The best setup is to use partitioning with rather small table for
>
> Benjamin Arai wrote:
>> Hi,
>>
>> I have a really big Tsearch2 table (100s of GB) that takes a while to
>> perform queries and takes days to index. Is there any way to fix these
>> issues using UNIONs or partitioning? I was thinking that I could parti
drives
-Quad core XEON
16 GB of memory (Any suggestion on the postgresql.conf setup would also be
great! Currently I am just setting shared mem to 8192MB)
-x86_64 but Redhat 5 Ent
Benjamin
Benjamin
On Apr 24, 2007, at 1:12 AM, Alban Hertroys wrote:
Benjamin Arai wrote:
Hi,
I upgraded to 8.2.4 but there was no significant change in
performance.
I did notice that the query appears to be executed incorrectly.
I have pasted the EXPLAIN ANALYZE below to illustrate:
=# explain
time=96.903..96.903 rows=411341 loops=1)
Index Cond: (tb_id = 'P2_TB1'::text)
Total runtime: 7419.128 ms
(8 rows)
Is there a way to force the "Bitmap Index Scan on idx_search_tb_id"
to perform first and then let the "Bitmap Index Scan on idx_search_path_id"
2_TB1'::text)
(3 rows)
Benjamin
On Apr 23, 2007, at 3:38 PM, Benjamin Arai wrote:
Can anybody explain to me why this query is executing so slowly?
=# explain select s_content,textdir from text_search where
path_id='1' AND tb_id='P2_TB1';
on idx_search_path_id
(cost=0.00..16546.09 rows=1016571 width=0)
Index Cond: (path_id = 1)
-> Bitmap Index Scan on idx_search_tb_id
(cost=0.00..23315.60 rows=1016571 width=0)
Index Cond: (tb_id = 'P2_TB1'::text)
(7 rows)
Thanks in advan