Steven Schlansker writes:
> It's been covered a few times in the past,
> http://www.postgresql.org/message-id/banlktimft4ohqkb6y7m4wqrffpwnutp...@mail.gmail.com
> http://postgresql.1045698.n5.nabble.com/partial-indexes-not-used-on-parameterized-queries-td2121027.html
> but in a nutshell, partial i
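A hypothetical illustration of the partial-index point being referenced, using an invented table t and index (nothing from the original thread): the planner can use a partial index only when it can prove the index's WHERE clause from the query, which it cannot do for a parameter whose value is unknown at plan time.

CREATE TABLE t (id int, flag boolean NOT NULL, val int);
CREATE INDEX t_flag_idx ON t (val) WHERE flag;

-- With a literal, "flag" is trivially provable and the partial index is usable:
EXPLAIN SELECT * FROM t WHERE val = 10 AND flag;

-- With the predicate column parameterized, a generic plan cannot prove that
-- "flag = $2" implies "flag", so the partial index is not considered.  (The
-- first few EXECUTEs may still show a custom plan with the value substituted.)
PREPARE q(int, boolean) AS SELECT * FROM t WHERE val = $1 AND flag = $2;
EXPLAIN EXECUTE q(10, true);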
Thanks for your reply.
On Mon, Feb 4, 2013 at 3:48 PM, Tom Lane wrote:
> You're worrying about the wrong thing entirely. The right thing to be
> worrying about is why are some of those row estimates off by four orders
> of magnitude, and what you can do to fix that. The planner will never
> d
Hello,
I have a question regarding a selection.
I'd like to group and merge certain records having the same values in
some columns, but only if they are contiguous with regard to some sort
order.
So for a table
create table foo (
id int,
user_id int,
key varchar,
s
Edson Richter wrote:
> Hi!
>
> Assumptions: PostgreSQL 9.2.2, Win64, already run vacuum full freeze
> analyze. No problems in the database. I know there are 1247 records to
> be found.
> Why do these queries return different results:
>
>
> select count(*) from parcela
> where id not in (selec
zeljko, 04.02.2013 10:35:
Edson Richter wrote:
Hi!
Assumptions: PostgreSQL 9.2.2, Win64, already run vacuum full freeze
analyze. No problems in the database. I know there are 1247 records to
be found.
Why do these queries return different results:
select count(*) from parcela
where id not
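One common cause of this kind of discrepancy is a NULL in the NOT IN subquery: "id NOT IN (...)" returns no rows at all as soon as the subquery produces a NULL, because "id <> NULL" is unknown. A hedged sketch; the table item and column parcela_id are invented stand-ins for whatever the truncated subquery selects from:

-- NOT IN filters out every row if the subquery can yield NULL, so guard it:
SELECT count(*) FROM parcela
WHERE id NOT IN (SELECT parcela_id FROM item WHERE parcela_id IS NOT NULL);

-- NOT EXISTS has no NULL trap and typically optimizes at least as well:
SELECT count(*) FROM parcela p
WHERE NOT EXISTS (SELECT 1 FROM item i WHERE i.parcela_id = p.id);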
2013/2/4 Morus Walter :
> I'd like to merge all consecutive records (ordered by sort, user_id)
> having the same value in user_id and key and keep the first/last
> value of sort of the merged records (and probably some more values
> from the first or last merged record).
>
> So the result should be
Hello Виктор,
thanks a lot for your explanation :-)
You rock!
>
> This example corresponds to the ORDER BY user_id, sort
> while you claim you need to ORDER BY sort, user_id.
>
right, I confused the order.
> I will explain this for the ordering that matches your sample.
>
> You need to group y
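A minimal gaps-and-islands sketch of the approach discussed above, assuming foo(user_id int, key varchar, sort int) as in the original question and the ORDER BY sort, user_id ordering: flag each row that starts a new run, turn the flags into group numbers with a running sum, then aggregate per group.

WITH marked AS (
    SELECT user_id, key, sort,
           CASE WHEN lag(user_id) OVER w IS NOT DISTINCT FROM user_id
                 AND lag(key)     OVER w IS NOT DISTINCT FROM key
                THEN 0 ELSE 1 END AS new_group
    FROM foo
    WINDOW w AS (ORDER BY sort, user_id)
), grouped AS (
    SELECT user_id, key, sort,
           sum(new_group) OVER (ORDER BY sort, user_id) AS grp
    FROM marked
)
SELECT user_id, key, min(sort) AS first_sort, max(sort) AS last_sort
FROM grouped
GROUP BY grp, user_id, key
ORDER BY first_sort;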
I have had some issues with a database on EC2 and I have restored it to a
new instance. When vacuuming the database I am getting the following in the
logs:
WARNING: pgstat wait timeout
Is this normal/acceptable?
Thanks
On Friday, February 01, 2013 7:53 PM Adrian Klaver wrote:
>On 02/01/2013 06:06 AM, Hari Babu wrote:
>>
>> We tried the approach as suggested by you but still it is not working as
>> shown in the below log (I had enabled logLevel as 1)
>> keystore password is qwerty
>> 19:26:22.666 (1) PostgreSQL 9
Hi,
As of now, I found the following cases where we can expect this kind of
WARNING message in pg_log.
Case 1 { Huge I/O }
==
When the PostgreSQL autovacuum process is not able to get the required I/O
to write the statistics to "stats_temp_location" then we can get this kind
of WARNING Mes
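A small diagnostic along those lines (a plain pg_settings lookup, nothing specific to the poster's system): check where the statistics temp file is written and whether the collector is enabled. Pointing stats_temp_location at a RAM-backed filesystem in postgresql.conf is a commonly suggested way to relieve that I/O pressure.

SELECT name, setting
FROM pg_settings
WHERE name IN ('stats_temp_location', 'track_counts', 'track_activities');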
Hello,
when trying to add a forgotten primary key pair
to a PostgreSQL 8.4.13 table I get the error:
# \d pref_rep
Table "public.pref_rep"
 Column | Type | Modifiers
--------+------+-----------
On 02/04/2013 06:17 AM, Alexander Farber wrote:
Hello,
when trying to add a forgotten primary key pair
to a PostgreSQL 8.4.13 table I get the error:
# \d pref_rep
Table "public.pref_rep"
 Column | Type |
On Sat, Feb 2, 2013 at 11:36 AM, Carlo Stonebanks
wrote:
> Here is an advantage Plpgsql has:
> http://www.postgresql.org/docs/9.1/static/plpgsql-expressions.html
>
> I guess you can offset this by creating your own prepared statements in C.
> Otherwise, I can’t think of how C could be slower. I wo
Thank you -
On Mon, Feb 4, 2013 at 3:26 PM, Andrew Jaimes wrote:
> SELECT id, author, count(1)
> FROM pref_rep
> GROUP BY id, author
> HAVING count(1) >1
>> From: alexander.far...@gmail.com
>> http://stackoverflow.com/questions/14688523/adding-primary-key-table-contains-duplicated-values
this
On 02/04/2013 04:46 AM, Hari Babu wrote:
On Friday, February 01, 2013 7:53 PM Adrian Klaver wrote:
On 02/01/2013 06:06 AM, Hari Babu wrote:
We tried the approach as suggested by you but still it is not working as shown
in the below log (I had enabled logLevel as 1)
keystore password is qwerty
I am trying to write a query that grabs one particular day from a timestamp
column. The data are ordered in 15 minute chunks like this:
2010-07-07 12:45:00
2010-07-07 13:00:00
2010-07-07 13:15:00
2010-07-07 13:30:00
etc…
If I wanted all records from july 7th 2010, I would expect 4 x 24 = 96 reco
Trying to delete the older of the duplicated pairs:
# SELECT id, author, count(1), max(stamp) as maxx
FROM pref_rep
GROUP BY id, author
HAVING count(1) >1 and stamp < maxx;
ERROR: column "maxx" does not exist
LINE 4: HAVING count(1) >1 and stamp < maxx;
^
WHERE derived_tsoil_fifteen_min_stacked.time2::date = '2010-07-07'::date
On Monday, February 4, 2013, Kirk Wythers wrote:
> I am trying to write a query that grabs one particular day from a
> timestamp column. The data are ordered in 15 minute chunks like this:
>
> 2010-07-07 12:45:00
> 2010-07-0
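Two hedged variants of the filter quoted above, using the table and column names from this thread; the cast form is the simplest, while the half-open range can use an ordinary index on time2:

-- Cast the timestamp to a date (expecting 4 x 24 = 96 fifteen-minute rows per day):
SELECT *
FROM derived_tsoil_fifteen_min_stacked
WHERE time2::date = DATE '2010-07-07';

-- Equivalent half-open range, usable with a plain index on time2:
SELECT *
FROM derived_tsoil_fifteen_min_stacked
WHERE time2 >= TIMESTAMP '2010-07-07'
  AND time2 <  TIMESTAMP '2010-07-08';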
On 02/04/2013 06:45 AM, Alexander Farber wrote:
Trying to delete the older of the duplicated pairs:
# SELECT id, author, count(1), max(stamp) as maxx
FROM pref_rep
GROUP BY id, author
HAVING count(1) >1 and stamp < maxx;
ERROR: column "maxx" does not exist
LINE 4: HAVING count(1) >1 and stam
Unfortunately that fails -
On Mon, Feb 4, 2013 at 3:55 PM, Adrian Klaver wrote:
> On 02/04/2013 06:45 AM, Alexander Farber wrote:
>>
>> Trying to delete the older of the duplicated pairs:
>>
>
> How about:
>
> SELECT id, author, count(1), max(stamp) as maxx
> FROM pref_rep
> GROUP BY id, autho
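One possible way to remove the older rows before adding the primary key, sketched under the assumption that pref_rep has columns id, author and stamp and that duplicates differ in stamp (rows sharing an identical stamp would need a further tiebreaker such as ctid):

-- Keep, for each (id, author), only the row with the newest stamp:
DELETE FROM pref_rep p
USING pref_rep newer
WHERE newer.id = p.id
  AND newer.author = p.author
  AND newer.stamp > p.stamp;

-- Afterwards the composite key should go on cleanly:
ALTER TABLE pref_rep ADD PRIMARY KEY (id, author);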
Thomas Kellerer wrote:
> zeljko, 04.02.2013 10:35:
>> Edson Richter wrote:
>>
>>> Hi!
>>>
>>> Assumptions: PostgreSQL 9.2.2, Win64, already run vacuum full freeze
>>> analyze. No problems in the database. I know there are 1247 records to
>>> be found.
>>> Why do these queries return different re
On 04/02/2013 07:35, zeljko wrote:
Edson Richter wrote:
Hi!
Assumptions: PostgreSQL 9.2.2, Win64, already run vacuum full freeze
analyze. No problems in the database. I know there are 1247 records to
be found.
Why do these queries return different results:
select count(*) from parcela
w
Alexander Farber wrote:
> # alter table pref_rep add primary key(id, author);
> NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index
> "pref_rep_pkey" for table "pref_rep"
> ERROR: could not create unique index "pref_rep_pkey"
> DETAIL: Table contains duplicated values.
>
> How co
Andreas Kretschmer wrote:
> Alexander Farber wrote:
>
> > # alter table pref_rep add primary key(id, author);
> > NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index
> > "pref_rep_pkey" for table "pref_rep"
> > ERROR: could not create unique index "pref_rep_pkey"
> > DETAIL: Tab
Thanks. That worked great! Now I am trying to aggregate these same fifteen-minute
records to hourly values. I have tried using date_trunc:
date_trunc('hour', derived_tsoil_fifteen_min_stacked.time2),
but date_trunc only seems to aggregate the timestamp. I thought I could use
AVG(derived_tsoil_fifteen_min_stac
I am looking for suggestions on aggregation techniques using a timestamp
column. In my case I have tried:
date_trunc('hour', derived_tsoil_fifteen_min_stacked.time2),
but date_trunc only seems to aggregate the timestamp. I thought I could use
AVG(derived_tsoil_fifteen_min_stacked.value)
in co
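A minimal sketch of the hourly roll-up being attempted, assuming the measurement column is named value as in the AVG() call above: date_trunc only buckets the timestamps; it is the GROUP BY on that expression, combined with avg(), that collapses the four fifteen-minute rows into one hourly row.

SELECT date_trunc('hour', time2) AS hour,
       avg(value)                AS avg_value
FROM derived_tsoil_fifteen_min_stacked
GROUP BY date_trunc('hour', time2)
ORDER BY hour;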
Thank you -
On Mon, Feb 4, 2013 at 5:06 PM, Andreas Kretschmer
wrote:
>> # alter table pref_rep add primary key(id, author);
>> NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index
>> "pref_rep_pkey" for table "pref_rep"
>> ERROR: could not create unique index "pref_rep_pkey"
>> DET
It's currently (9.2) not possible to define DEFERRABLE NOT NULL constraints, meaning the following is not valid:
CREATE TABLE my_table(
id varchar PRIMARY KEY,
stuff_id BIGINT NOT NULL DEFERRABLE INITIALLY DEFERRED
);
While it's possible to define a trigger to enforce this, like this:
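One possible shape of such a deferred check, shown as a hypothetical sketch rather than the original poster's trigger, reusing the my_table definition above (without the invalid DEFERRABLE clause):

CREATE TABLE my_table(
    id       varchar PRIMARY KEY,
    stuff_id bigint              -- NOT NULL enforced at commit time by the trigger below
);

CREATE FUNCTION my_table_stuff_id_not_null() RETURNS trigger AS $$
BEGIN
    IF NEW.stuff_id IS NULL THEN
        RAISE EXCEPTION 'stuff_id must not be NULL (id = %)', NEW.id;
    END IF;
    RETURN NULL;   -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER my_table_stuff_id_check
    AFTER INSERT OR UPDATE ON my_table
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW
    EXECUTE PROCEDURE my_table_stuff_id_not_null();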
Hello.
I'm modelling a system where I'd like to log inserts and deletes
to two or more tables (with foreign key references between them).
As a (contrived) example:
CREATE TABLE projects (
project_id SERIAL PRIMARY KEY,
project_name TEXT UNIQUE NOT NULL
);
CREATE TABLE project_repositories
2013/2/4 :
> Hello.
>
> I'm modelling a system where I'd like to log inserts and deletes
> to two or more tables (with foreign key references between them).
>
> As a (contrived) example:
>
> CREATE TABLE projects (
> project_id SERIAL PRIMARY KEY,
> project_name TEXT UNIQUE NOT NULL
> );
>
>
Alexander Farber wrote:
> Thank you -
>
> On Mon, Feb 4, 2013 at 5:06 PM, Andreas Kretschmer
> wrote:
> >> # alter table pref_rep add primary key(id, author);
> >> NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index
> >> "pref_rep_pkey" for table "pref_rep"
> >> ERROR: could not
Is it possible to migrate a database from PostgreSQL 8.2 to 9.2 using
pg_upgrade?
Thanks.
On 02/04/2013 12:06 PM, AI Rumman wrote:
Is it possible to migrate a database from PostgreSQL 8.2 to 9.2 using
pg_upgrade?
Thanks.
Per http://www.postgresql.org/docs/9.2/static/pgupgrade.html
"...pg_upgrade supports upgrades from 8.3.X and later to the current
major release of PostgreSQL..."
Hi Brent,
Nice to hear from you. I hope your world is good.
On Feb 4, 2013, at 2:14 PM, Brent Wood wrote:
> Hi Kirk,
>
> We have a (near) real time data database for instrument observations from our
> research vessels. All observations (summarised to one minute intervals - the
> actual raw
Hi all,
I know I'm probably missing something obvious here, but I have been unable
to figure this out or find any docs on it.
I have a function that takes in a postal address and normalizes it through
text manipulation etc.:
fn_normalize_address(in_line_one character varying, in_line_two c
How would I aggregate a lot of inet addresses/subnets to unique super-
networks? Simply doing a 'GROUP BY network(address)' will not do any
aggregation, and thus includes lots of /32s that are part of larger
networks. While I could add 'WHERE masklen(address) <> 32 and family
(address) = 4' (or di
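A hedged sketch of one way to keep only the widest networks already present, assuming the addresses live in a table addrs(address inet): drop every row that is strictly contained in another row. Note this does not merge adjacent ranges into new supernets; it only removes covered subnets.

SELECT a.address
FROM addrs a
WHERE NOT EXISTS (
    SELECT 1
    FROM addrs b
    WHERE b.address >> a.address    -- b strictly contains a
);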
On Mon, Feb 4, 2013 at 2:01 PM, wrote:
> Hello.
>
> I'm modelling a system where I'd like to log inserts and deletes
> to two or more tables (with foreign key references between them).
>
> As a (contrived) example:
>
> CREATE TABLE projects (
> project_id SERIAL PRIMARY KEY,
> project_name
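A minimal trigger-based sketch for the logging question above, with an invented audit_log table alongside the projects table from the example; the changed row is stored as its text representation purely for brevity:

CREATE TABLE audit_log (
    audit_id   bigserial   PRIMARY KEY,
    table_name text        NOT NULL,
    operation  text        NOT NULL,
    row_data   text        NOT NULL,
    logged_at  timestamptz NOT NULL DEFAULT now()
);

CREATE FUNCTION log_change() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO audit_log (table_name, operation, row_data)
        VALUES (TG_TABLE_NAME, TG_OP, OLD::text);
    ELSE
        INSERT INTO audit_log (table_name, operation, row_data)
        VALUES (TG_TABLE_NAME, TG_OP, NEW::text);
    END IF;
    RETURN NULL;   -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER projects_audit
    AFTER INSERT OR DELETE ON projects
    FOR EACH ROW EXECUTE PROCEDURE log_change();

-- The same function can be attached to project_repositories and any other table.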
Hi,
My question: are there any real docs on how to ‘force’ registry values so that
all users get some preconfigured servers?
Alternatively, how do I do it via an .ini file? The example ini doesn’t really say
how to do what I’m looking to do.
I’m trying to use AD/GPO to configure predefined serve
On 2/4/2013 12:03 AM, David Wooffindin wrote:
My question: are there any real docs on how to ‘force’ registry values
so that all users get some preconfigured servers?
Alternatively, how do I do it via an .ini file? The example ini doesn’t
really say how to do what I’m looking to do.
I’m t
2013/2/4 Kirk Wythers :
> I am looking for suggestions on aggregation techniques using a timestamp
> column. In my case I have tried:
>
> date_trunc('hour', derived_tsoil_fifteen_min_stacked.time2),
>
> but date_trunc only seems to aggregate the timestamp. I thought I could use
>
> AVG(derived_tso
On Feb 4, 2013, at 3:26 PM, Jason Dusek wrote:
> 2013/2/4 Kirk Wythers :
>> I am looking for suggestions on aggregation techniques using a timestamp
>> column. In my case I have tried:
>>
>> date_trunc('hour', derived_tsoil_fifteen_min_stacked.time2),
>>
>> but date_trunc only seems to aggrega
I would imagine he means, ‘Can PgAdmin be distributed in such a way that the
user does not have to set up DB connections themselves, but has them preconfigured?’
If so, then this is a PgAdmin question, not a PostgreSQL question.
Thanks in advance for thinking about my problem.
As I suspect you know, CA Erwin doesn't support Postgres or Greenplum.
But they do support ODBC for reverse engineering.
When I reverse, Erwin executes the standard ODBC metadata queries for the
system catalog.
The process works fine, but I'm fin
Hi Kirk,
We have a (near) real time data database for instrument observations from our
research vessels. All observations (summarised to one minute intervals - the
actual raw data is in netCDF, this database makes for easier access & meets
most users' needs) go into a single table, with other ta
Hi All,
We are having a thorny problem I'm hoping someone will be able to help with.
We have a pair of machines set up as an active / hot standby pair. The database they
contain is quite large - approx. 9TB. They were working fine on 9.1, and we
recently upgraded the active DB to 9.2.1.
After upgra
I don't know about Erwin. If you are open to alternatives that would
involve a tool change, then continue reading.
I use DBWrench, and it is working fairly well.
Pros:
- Multiple diagrams for the same database/schema (can reduce the
number of tables you ar
Select time2::date, extract('hour' from time2), AVG(avg) from tablename
group by time2::date, extract('hour' from time2)
On Monday, February 4, 2013, Kirk Wythers wrote:
> Hi Brent,
>
> Nice to hear from you. I hope your world is good.
>
> On Feb 4, 2013, at 2:14 PM, Brent Wood >
> wrote:
>
> >
On Mon, Feb 4, 2013 at 9:45 AM, Kirk Wythers wrote:
> I am trying to write a query that grabs one particular day from a
> timestamp column. The data are ordered in 15 minute chunks like this:
>
> 2010-07-07 12:45:00
> 2010-07-07 13:00:00
> 2010-07-07 13:15:00
> 2010-07-07 13:30:00
> etc…
>
> WHER
On Feb 4, 2013, at 7:03 PM, Misa Simic wrote:
> Select time2::date, extract('hour' from time2), AVG(avg) from tablename group
> by time2::date, extract('hour' from time2)
Thanks Misa,
But this gives the same result as the way I was using date_trunc (not GROUPING
BY the hour portion of the t
On Mon, Feb 4, 2013 at 08:33:02AM -0600, Merlin Moncure wrote:
> On Sat, Feb 2, 2013 at 11:36 AM, Carlo Stonebanks
> wrote:
> > Here is an advantage Plpgsql has:
> > http://www.postgresql.org/docs/9.1/static/plpgsql-expressions.html
> >
> > I guess you can offset this by creating your own prepare
If a C function was a call to multiple (unprepared) SQL statements, could
PL/PGSQL's prepare-once plan caching have an advantage?
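A hypothetical side-by-side of what that caching buys, using an invented table big_table(some_col int): a static statement inside PL/pgSQL is prepared once per session and its plan reused, whereas EXECUTE (and, likewise, a C function that just calls SPI_execute without preparing and saving plans) parses and plans the query on every call.

-- Static SQL: prepared once per session, plan reused on later calls.
CREATE FUNCTION count_static(p int) RETURNS bigint AS $$
BEGIN
    RETURN (SELECT count(*) FROM big_table WHERE some_col = p);
END;
$$ LANGUAGE plpgsql;

-- Dynamic SQL: the string is parsed and planned on every call.
CREATE FUNCTION count_dynamic(p int) RETURNS bigint AS $$
DECLARE
    n bigint;
BEGIN
    EXECUTE 'SELECT count(*) FROM big_table WHERE some_col = $1' INTO n USING p;
    RETURN n;
END;
$$ LANGUAGE plpgsql;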
2013/2/5 Bruce Momjian :
> On Mon, Feb 4, 2013 at 08:33:02AM -0600, Merlin Moncure wrote:
>> On Sat, Feb 2, 2013 at 11:36 AM, Carlo Stonebanks
>> wrote:
>> > Here is an advantage Plpgsql has:
>> > http://www.postgresql.org/docs/9.1/static/plpgsql-expressions.html
>> >
>> > I guess you can offset