On Mon, Feb 27, 2006 at 09:14:40AM -0800, CG <[EMAIL PROTECTED]> wrote:

> I could probably get even better performance out of the table, at the cost of a
> significant increase in table and index size, by chopping up the columns into
> smaller chunks.
> 
> "Hello World" would yield
> 
> 'h.e.l.l.o.w.o.r.l.d'
> 'e.l.l.o.w.o.r.l.d'
> 'l.l.o.w.o.r.l.d'
> 'l.o.w.o.r.l.d'
> 'o.w.o.r.l.d'
> 'w.o.r.l.d'
> 'o.r.l.d'
> 'r.l.d'
> 
> and using a wildcard search "search_vector ~ 'o.r.l.*'" would jump right to the
> vectors which start with "o.r.l" ...

But with this approach you'd be fine with a normal varchar_ops btree index
for text fields and searching using "like 'world%'", wouldn't you?
Or is the ltree approach more efficient?
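What I had in mind was something like this (again, made-up names; and note
that LIKE only uses a plain btree in the C locale, otherwise you'd want
text_pattern_ops / varchar_pattern_ops):

    -- Same suffix idea, but with plain text and a btree index.
    CREATE TABLE doc_suffixes (
        doc_id integer NOT NULL,
        suffix text    NOT NULL
    );

    -- text_pattern_ops lets LIKE 'prefix%' use the index regardless of
    -- locale; in the C locale the default opclass would work too.
    CREATE INDEX doc_suffixes_idx ON doc_suffixes (suffix text_pattern_ops);

    INSERT INTO doc_suffixes VALUES
        (1, 'helloworld'),
        (1, 'elloworld'),
        (1, 'lloworld');  -- ...and so on down to 'd'

    -- Substring search for "world" becomes an anchored prefix match:
    SELECT DISTINCT doc_id
    FROM doc_suffixes
    WHERE suffix LIKE 'world%';

The planner should turn that into an index range scan on the suffix column,
which is why I was wondering whether ltree buys anything extra here.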

I'm not trying to be smart-assed; it's a naive question, since I'm
looking for an efficient substring-search solution in PostgreSQL myself.

regards,
bkw

