On Sun, Dec 9, 2012 at 7:16 PM, Jeff Janes wrote:
> The obvious difference is that this one finds all 5 buffers it needs
> in buffers already, while the first one had to read them in. So this
> supports the idea that your data has simply grown too large for your
> RAM.
>
> Cheers,
>
> Jeff
>
'[2012-07-03,2012-07-11)'::daterange)"
"  Buffers: shared hit=5"
"Total runtime: 0.046 ms"
Thank you.
On Sun, Dec 2, 2012 at 12:44 AM, Jeff Janes wrote:
> Could you do it for the recursive
> SQL (the one inside the function) like you had previously done for the
> regular explain?
>
> Cheers,
>
> Jeff
>
Here they are:
for the 65 million row table:
"Index Scan using ctn_source on massive (cost=0
On Fri, Nov 30, 2012 at 1:23 PM, Kevin Grittner wrote:
> Henry Drexler wrote:
>
> > why would the query time go from 4 minutes to over 50, for an
> > increase in table rows from 30 million to 65 million?
>
> Did the active (frequently referenced) portion of the database g
On Fri, Nov 30, 2012 at 1:42 PM, Jeff Janes wrote:
> Can you report the EXPLAIN (ANALYZE, BUFFERS) instead?
Thanks, here they are:
for the approx. 65 million row, approx. 50 minute version:
EXPLAIN (ANALYZE, BUFFERS)
select
massive_expansion(ctn,the_range)
from
critical_visitors;
"Seq Scan on crit
On Fri, Nov 30, 2012 at 8:22 AM, Henry Drexler wrote:
> Hello, and thank you in advance.
>
>
> Beyond the date vs timestamp troubleshooting I did,
>
I realize this could be confusing - since I ruled out that difference, the
real question is - given this setup, why would the query
The combination of pandas, IPython, and psycopg2 works wonders for pulling
data from the db and manipulating/plotting it,
although I don't know in more detail what the client's use cases are.
On Wed, Jul 25, 2012 at 1:41 PM, Mark Phillips
wrote:
> I am seeking suggestions for business intelligence and
On Fri, Dec 9, 2011 at 5:48 PM, Jack Christensen wrote:
> CREATE TABLE people(
> id serial PRIMARY KEY,
> name varchar NOT NULL
> );
>
> INSERT INTO people(name) VALUES('Adam'), ('Adam'), ('Adam'), ('Bill'),
> ('Sam'), ('Joe'), ('Joe');
>
> SELECT name, count(*), random()
> FROM people
> GROUP BY name;
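Jack's grouped count can be sketched with Python's built-in sqlite3 standing in for PostgreSQL (the table and data are copied from the post; the volatile random() column is omitted, since it isn't what the grouping illustrates):

```python
# Reproduce the people/GROUP BY example from the post in sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people(id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.executemany("INSERT INTO people(name) VALUES (?)",
                 [("Adam",), ("Adam",), ("Adam",), ("Bill",),
                  ("Sam",), ("Joe",), ("Joe",)])
rows = conn.execute(
    "SELECT name, count(*) FROM people GROUP BY name ORDER BY name"
).fetchall()
print(rows)  # → [('Adam', 3), ('Bill', 1), ('Joe', 2), ('Sam', 1)]
```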
google 'weeks of supply'
On Mon, Nov 21, 2011 at 1:18 PM, Jason Long
wrote:
> I have a custom inventory system that runs on PG 9.1. I realize this is
> not a postgres specify question, but I respect the skills of the members of
> this list and was hoping for some general advice.
>
> The system i
On Thu, Nov 10, 2011 at 8:34 AM, Thomas Kellerer wrote:
>
>>
> SELECT type,
> string_agg(color, ',') as organized_by_type
> FROM clothes
> GROUP BY type;
>
>
>
wow, yes that is cleaner.
Thank you for taking the time - obviously I need to read through the string
functions again.
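Thomas's string_agg query quoted above can be sketched with Python's sqlite3, where group_concat plays the role of PostgreSQL's string_agg (the clothes data is reconstructed from the expected result further down the thread):

```python
# string_agg-style aggregation via SQLite's group_concat.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clothes(type TEXT, color TEXT)")
conn.executemany("INSERT INTO clothes VALUES (?, ?)",
                 [("pants", "red"), ("pants", "blue"), ("pants", "orange"),
                  ("shirt", "black"), ("shirt", "gray")])
rows = conn.execute(
    "SELECT type, group_concat(color, ',') AS organized_by_type "
    "FROM clothes GROUP BY type ORDER BY type"
).fetchall()
print(rows)
```

Note that SQLite's group_concat makes no ordering promise for the concatenated values; PostgreSQL's string_agg can take an ORDER BY inside the aggregate if the order matters.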
I am thinking there is a better/simpler way, though this is what I have
working:
(postgres 9.1)
I would like to have the list of colors for each type of clothing to be
comma-separated in the end result.
like this:
type  | organized_by_type
pants | red, blue, orange
shirt | black, gra
On Fri, Oct 21, 2011 at 2:57 PM, Henry Drexler wrote:
> I realize I have sent a lot of messages on this thread so this will be the
> last one unless I come up with a solution, then I will post that.
>
>
Resolved.
Ray - thanks again for your help.
The pattern was it was only matchin
I realize I have sent a lot of messages on this thread so this will be the
last one unless I come up with a solution, then I will post that.
The idea behind this is to take a string and remove one character from it
successively and try to match that against any of the nodes in the query.
So for
On Fri, Oct 21, 2011 at 1:02 PM, Henry Drexler wrote:
>
> On Fri, Oct 21, 2011 at 6:10 AM, Raymond O'Donnell wrote:
>
>>
>> Glad you got sorted. What was the problem in the end?
>>
>> Ray.
>>
On Fri, Oct 21, 2011 at 6:10 AM, Raymond O'Donnell wrote:
>
> Glad you got sorted. What was the problem in the end?
>
> Ray.
>
Apart from the solution I sent earlier, I have now noticed an aberration -
in testing I have not isolated it, but I have a simple example.
for instance, using the functio
On Fri, Oct 21, 2011 at 6:10 AM, Raymond O'Donnell wrote:
> On 20/10/2011 23:16, Henry Drexler wrote:
> >
> >
> > On Thu, Oct 20, 2011 at 5:41 PM, Raymond O'Donnell wrote:
> >
> >
> > Are you sure about this? Try using
On Thu, Oct 20, 2011 at 5:41 PM, Raymond O'Donnell wrote:
>
> Are you sure about this? Try using RAISE NOTICE statements in the
> function to output the value of nnlength each time it's executed.
>
> Ray.
>
>
Thank you for showing me 'RAISE NOTICE' - I had not seen that before and
it helped me
On Thu, Oct 20, 2011 at 5:42 PM, Raymond O'Donnell wrote:
>
> I was just trying to figure your function out... :-) I think you're
> mistaken about step 3 - This statement -
>
> node = substring(newnode, 1, i-1) || substring (newnode, i+1, nnlength)
>
> - is concatenating two substrings - the firs
On Thu, Oct 20, 2011 at 4:57 PM, Raymond O'Donnell wrote:
>
>
> Not sure what you mean by the above...
>
> Ray.
>
>
This is what I thought it was doing.
1. it gets the node from the first row
2. measures its length
3. then loops through removing one character at a time and comparing that
to th
On Thu, Oct 20, 2011 at 4:57 PM, Raymond O'Donnell wrote:
>
> Not sure what you mean by the above... that statement only gets executed
> once, so the value of nnlength doesn't change.
>
> Ray.
doesn't the function get executed once for each row in the query?
so in the below example
thr wil
I found the problem, it looks like nnlength := length(newnode); is not
getting reset
create or replace function nnodetestt(text) returns text language plpgsql as
$$
DECLARE
newnode alias for $1;
nnlength integer;
t text;
nmarker text;
BEGIN
nnlength := length(newnode);
for i in 1..(nnlength-1) loop
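The core of the loop above - drop one character at a time and compare each variant - can be sketched in plain Python (the function name is mine; note the quoted plpgsql loop runs 1..nnlength-1, while this sketch deletes every position including the last):

```python
def one_char_variants(newnode: str) -> list[str]:
    """Return every string formed by deleting exactly one character,
    mirroring substring(newnode, 1, i-1) || substring(newnode, i+1, nnlength)."""
    return [newnode[:i] + newnode[i + 1:] for i in range(len(newnode))]

print(one_char_variants("abcd"))  # → ['bcd', 'acd', 'abd', 'abc']
```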
I am struggling to understand at what point the query knowledge comes into
play here.
Ideally it should look in nmarker and if there is an 'N' then execute the
query (but how would it know that without running the query first?) and
return the results in the nnodetest, but (in its current form it s
On Mon, Oct 17, 2011 at 3:11 PM, Henry Drexler wrote:
> couldn't you just wrap it in a case statement to change the t to true
> etc...?
>
>
example:
select
case when (1=1) = true then 'true' else 'false' end
couldn't you just wrap it in a case statement to change the t to true
etc...?
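The CASE-wrapping idea can be run through sqlite3 as a quick check (SQLite has no boolean type, so the `= true` comparison from the Postgres example is dropped):

```python
# Turn a boolean test into the text 'true'/'false' with a CASE expression.
import sqlite3

conn = sqlite3.connect(":memory:")
value, = conn.execute(
    "SELECT CASE WHEN 1 = 1 THEN 'true' ELSE 'false' END"
).fetchone()
print(value)  # → true
```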
On Mon, Oct 17, 2011 at 2:29 PM, Viktor Rosenfeld wrote:
> Hi,
>
> I need to move data from PostgreSQL to MonetDB and also bulk-import data
> into MonetDB that was bulk-exported from PostgreSQL by other people. My
> pr
down the
line.
You need to eliminate the date column in the query, or whatever fits your
requirements.
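Eliminating the date column from the GROUP BY can be sketched with sqlite3 (the table and column names here are my invention, not from Rich's post):

```python
# One highest chemical value per location: group by location only,
# leaving sample_date out of the GROUP BY.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples(location TEXT, sample_date TEXT, value REAL)")
conn.executemany("INSERT INTO samples VALUES (?, ?, ?)",
                 [("well-1", "2011-10-01", 4.2), ("well-1", "2011-10-08", 9.7),
                  ("well-2", "2011-10-01", 1.3)])
rows = conn.execute(
    "SELECT location, max(value) FROM samples GROUP BY location ORDER BY location"
).fetchall()
print(rows)  # → [('well-1', 9.7), ('well-2', 1.3)]
```

In PostgreSQL, `SELECT DISTINCT ON (location) ... ORDER BY location, value DESC` is another common way to keep the whole winning row.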
On Mon, Oct 10, 2011 at 6:24 PM, Henry Drexler wrote:
> you are also grouping by sample date, those are the largest values for the
> criteria you have set out in the group by.
>
>
>
you are also grouping by sample date, those are the largest values for the
criteria you have set out in the group by.
On Mon, Oct 10, 2011 at 6:17 PM, Rich Shepard wrote:
> I'm trying to query the table to extract the single highest value of a
> chemical by location and date. This statement gi
On Thu, Oct 6, 2011 at 4:37 PM, Gavin Flower
wrote:
> On 07/10/11 01:40, Henry Drexler wrote:
>
>> I have a workaround to the error/result, but am wondering what the result
>> of ts_rank of '1e-020' represents?
>>
>> Here is the original:
>
it sent before I finished, here is the rest:
I have fixed this by doing the following:
select
ts_rank(to_tsvector(replace('a_a_do_ug_read_retreqmon_ptam','_',' ')),plainto_tsquery(replace('a_a_do_ug_read_retrmso.com_ptam','_',' ')))
so I have found a solution, just wondering what the earlier err
I have a workaround to the error/result, but am wondering what the result of
ts_rank of '1e-020' represents?
Here is the original:
select
ts_rank(to_tsvector('a_a_do_ug_read_retreqmon_ptam'),to_tsquery('a_a_do_ug_read_retrmso.com_ptam'))
that was spot on Richard. Thank you for your time and the solution.
On Wed, Oct 5, 2011 at 3:22 PM, Richard Huxton wrote:
> On 05/10/11 19:29, Henry Drexler wrote:
>
>>
>> and would like to have a column indicate like this:
>>
>> 'evaluation' '
I can do this in excel with vba, though due to the volume of data that is
now impracticable and I am trying to move most of my logic into the query
and db for analysis.
Looking at the analytic functions I see no way to carry values over the way
they need to be.
Example column:
I have a column th
Are you looking for stuff like this?
http://www.postgresql.org/docs/9.0/static/functions-window.html
http://www.postgresql.org/docs/9.0/static/functions-string.html
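The carry-over described in the question is what a lag-style window function provides; a tiny plain-Python sketch of the idea (the data is illustrative):

```python
# Carry the most recent non-null value forward into each row,
# like a window function scanning in row order.
values = ["a", None, None, "b", None]

carried, last = [], None
for v in values:
    if v is not None:
        last = v          # remember the latest non-null value seen
    carried.append(last)  # carry it into the current row

print(carried)  # → ['a', 'a', 'a', 'b', 'b']
```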
On Fri, Sep 30, 2011 at 10:12 AM, Dario Beraldi wrote:
> Hello,
>
> I'm looking for some information (guidelines, docs, tutorials,
*From*: Rohan Malhotra
select * from items order by random() limit 5;
my basic requirement is to get random rows from a table, my where clause
will make sure I won't get same rows in repeated execution of above queries.
--
Regards
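The `order by random() limit 5` approach can be sketched with sqlite3 (SQLite spells it RANDOM(); the items table here is a stand-in with integer ids only):

```python
# Sample five random rows by sorting on a random key.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items(id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO items(id) VALUES (?)", [(i,) for i in range(100)])
rows = conn.execute("SELECT * FROM items ORDER BY RANDOM() LIMIT 5").fetchall()
print(rows)  # five rows, in random order
```

Note that ORDER BY random() sorts the whole table, so it gets expensive on large tables; it is fine for small ones.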
To clarify - you are not looking for random then, yes? as you
tested pairs.
>
> David J.
>
>
> On Sep 19, 2011, at 10:37, Henry Drexler wrote:
>
> Thanks you that is the kind of suggestion I was looking for - I will look
> into plpgsql.
>
> Yes, there are several optimizations in it - though due to the actual data
> the first few char
What you are doing (given your specification below) in VBA is
> also doable in PostgreSQL.
>
> David J.
>
> From: pgsql-general-ow...@postgresql.org On Behalf Of Henry Drexler
> Sent:
I have no problem doing this in excel vba, though as the list grows larger
obviously excel has row limits.
What is being done:
There is a column of data imported into the db - they are just text strings,
there are about 80,000 rows of them. The goal is to do a single character
elimination to fin
select 1/3::float as answer;
>
> answer
>
> ---
>
> 0.333
>
> (1 row)
>
>
> From: pgsql-general-ow...@postgresql.org On Behalf Of Henry Drexler
thanks Tom and Guillaume,
That sequencing of casting makes sense - I appreciate the clear
explanation.
On Fri, Sep 9, 2011 at 11:12 AM, Tom Lane wrote:
> Henry Drexler writes:
> > [ "1/3" yields zero ]
>
> Yeah, it's an integer division.
>
> > I th
take any table and run

Query
-
select 1/3 from storage limit 1

Result
-
?column? (integer)
0

Expected Result
-
?column? (double precision)
0.3...

Question
-
Since there is no column type to begin with as this is
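The integer-division behavior from this thread can be reproduced in sqlite3, which divides the same way PostgreSQL does: two integer operands give an integer result, and making either operand floating-point gives the fractional answer:

```python
# Integer vs. float division in SQL, mirroring the 1/3-yields-zero thread.
import sqlite3

conn = sqlite3.connect(":memory:")
int_div, = conn.execute("SELECT 1/3").fetchone()      # both operands integer
float_div, = conn.execute("SELECT 1/3.0").fetchone()  # one float operand
print(int_div, float_div)  # → 0 0.3333333333333333
```

In PostgreSQL, `1/3::float` also works because the cast binds to `3`, turning the expression into float division, as Tom's reply explains.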