with t as (select id, md5(concat(fields -> 'b', fields -> 'c', fields -> 'd',
fields -> 'e', foo_id::text)) from things) select md5, count(id),
array_agg(id) from t group by 1 having count(id) > 1;
-Robert
On Tue, Aug 20, 2013 at 1:53 PM, Pavel Stehule wrote:
>
At the moment, all guids are distinct, however before I zapped the
duplicates, there were 280 duplicates.
Currently, there are over 2 million distinct guids.
-Robert
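For reference, once the duplicates have been cleaned out, a unique index is the usual way to keep new ones from appearing. A sketch only, assuming the column is named guid on the things table (the index name is made up):

-- cannot be run inside a transaction block when CONCURRENTLY is used
create unique index concurrently things_guid_key on things (guid);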
On Mon, Aug 19, 2013 at 11:12 AM, Pavel Stehule wrote:
>
>
>
> 2013/8/19 Robert Sosinski
>
>> Hi Pavel,
INSERT OR UPDATE ON things FOR EACH ROW EXECUTE
PROCEDURE timestamps_tfun()
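For context, a minimal sketch of what a timestamps trigger function like timestamps_tfun() typically looks like; the created_at and updated_at column names are assumptions, not taken from the thread:

create or replace function timestamps_tfun() returns trigger as $$
begin
  if TG_OP = 'INSERT' then
    NEW.created_at := now();  -- stamp new rows once
  end if;
  NEW.updated_at := now();    -- refresh on every insert or update
  return NEW;
end;
$$ language plpgsql;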
Let me know if you need anything else.
Thanks,
On Mon, Aug 19, 2013 at 3:29 AM, Pavel Stehule wrote:
> Hello
>
> please, can you send some example or test?
>
> Regards
>
> Pavel Stehule
>
When using array_agg on a large table, memory usage seems to spike up until
Postgres crashes with the following error:
2013-08-17 18:41:02 UTC [2716]: [2] WARNING: terminating connection because
of crash of another server process
2013-08-17 18:41:02 UTC [2716]: [3] DETAIL: The postmaster has commanded this
server process to roll back the current transaction and exit, because another
server process exited abnormally and possibly corrupted shared memory.
Any help would be appreciated.
--
Robert Sosinski
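For reference, if the memory spike comes from array_agg building an id array for every group (as in the md5/array_agg duplicate check shown earlier in this thread), one workaround is to find the duplicated signatures first and only aggregate ids for those. A sketch, reusing the field names from that query, which may not match the real schema:

with dup as (
  select md5(concat(fields -> 'b', fields -> 'c', fields -> 'd',
                    fields -> 'e', foo_id::text)) as sig
  from things
  group by 1
  having count(*) > 1            -- plain count, no arrays kept per group
)
select t.sig, array_agg(t.id)    -- arrays built only for the duplicated groups
from (select id,
             md5(concat(fields -> 'b', fields -> 'c', fields -> 'd',
                        fields -> 'e', foo_id::text)) as sig
      from things) t
join dup on dup.sig = t.sig
group by t.sig;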
On Wednesday, October 3, 2012 at 10:44 AM, Merlin Moncure wrote:
> On Wed, Oct 3, 2012 at 9:33 AM, Robert Sosinski
> <rsosin...@ticketevolution.com> wrote:
> > We are running Postgres 9.1.3, and after stopping it by physically shutting
> >
a problem with an index, because it is saying that there is a GIN metapage
missing. Any idea how to get Postgres to boot up after it gets into this
condition without having to recover from a backup? Would upgrading to 9.2
prevent this issue from happening again?
Thanks,
--
Robert Sosinski
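For reference, once the server is running again, a GIN index with a damaged or missing metapage can usually be rebuilt in place rather than restored from a backup. A sketch only; the index and table names below are made up:

-- rebuild just the damaged index
reindex index things_fields_gin_idx;
-- or rebuild every index on the table
reindex table things;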
The first query shows a cost of 190,169.55 and runs in 199,806.951 ms. When I
disable nested loops, I get a cost of 2,535,992.34, but the query runs in only
133,447.790 ms. We have run queries with a cost of around 200K on our database
before and they finished in less than a few seconds, which makes me wonder
whether the first query plan is inaccurate. The other issue is understanding
why a query plan with a much higher cost takes less time to run.
I do not think these queries are cached differently, as we have gotten the same
results when running them a couple of times across a few days. We also analyzed
the tables we are querying before trying the explain analyze again, and were
met with the same statistics. Any insight into how Postgres comes up with a
query plan like this, and why there is such a difference, would be very helpful.
Thanks!
--
Robert Sosinski
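For reference, the comparison described above is typically done by toggling the planner setting for the session and re-running EXPLAIN ANALYZE. A sketch; the join below is a placeholder, not the actual query from this thread:

explain analyze
select *
from orders o
join line_items li on li.order_id = o.id
where o.created_at > now() - interval '1 day';

set enable_nestloop = off;   -- discourage nested loop joins for this session

explain analyze
select *
from orders o
join line_items li on li.order_id = o.id
where o.created_at > now() - interval '1 day';

reset enable_nestloop;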
Robert Sosinski
On Tuesday, September 18, 2012 at 4:04 PM, Steve Crawford wrote:
> On 09/18/2012 08:59 AM, Robert Sosinski wrote:
> > We have a table, which has items that can be put on hold for 5 minutes
> > (this is for an online store) once they are placed into a cart. What
> > w
future, and select
items where hold_until is less than now().
Would it be possible to change this to use a boolean that is set to true when
an item is put on hold, and have something like a time-based trigger
automatically update the held boolean to false after 5 minutes pass?
Thanks,
--
Robert
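For reference, a sketch of the hold_until approach described above; the items table and its columns are assumptions, not taken from the thread:

-- place a 5-minute hold when an item goes into a cart
update items
set hold_until = now() + interval '5 minutes'
where id = 123;

-- items currently available: never held, or the hold has expired
select *
from items
where hold_until is null
   or hold_until < now();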
different schemas.
Is there a way to see all tables across all schemas?
Thanks,
--
Robert Sosinski
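For reference, one way to list tables in every schema (excluding the system schemas) is to query information_schema; in psql, \dt *.* does much the same:

select table_schema, table_name
from information_schema.tables
where table_type = 'BASE TABLE'
  and table_schema not in ('pg_catalog', 'information_schema')
order by table_schema, table_name;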