Dave Gauthier wrote:
> V9.1.5 on linux
>
> User "select" created (yup, that's right, they want the user name to be
> "select". Guess what ptivs it
> is to have! Don't kill the messanger :-) )
>
> postgres=# grant select on all tables in schema sde to "select";
> ERROR: schema "sde" does not ex
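(A hedged aside, not from the original post: "GRANT ... ON ALL TABLES IN
SCHEMA" only sees schemas in the database you are currently connected to;
given the "psql sde" later in this digest, connecting there first is the
likely fix:)
postgres=# \c sde
sde=# grant select on all tables in schema sde to "select";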
Andreas Kretschmer wrote:
> Albe Laurenz wrote:
>> Dhiraj Gupta wrote:
>>> I have created a database named "ofbiz". Then the default schema named
>>> "public" was
>>> created automatically. I want a schema named ofbiz to be created in the
>>> database "ofbiz"
>>> when I create the database "ofbiz", then
Hi Tomas
Works like a charm now with a different file path.
2012/12/5 Tomas Vondra :
> The file is opened from a
> PostgreSQL backend process, not using your regular user.
Thanks for the hint. Didn't think of that, and I agree with this
implementation decision.
You are referring to RTFM below: Ma
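(A hedged illustration of the backend-process point above, with hypothetical
table and file names; this is the usual server-side vs. client-side split:)
COPY mytable FROM '/path/on/server/data.csv' CSV;  -- opened by the backend, as the postgres OS user
\copy mytable FROM 'data.csv' CSV                  -- opened by psql, as your own user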
Hi Edson,
since you are using 'like' in your select, you may want to try the
following (example):
CREATE INDEX "MY_LONG_INDEX_NAME_IDX"
ON "MY_TABLE_NAME"
USING btree
("MY_VARCHAR_FIELD_NAME" COLLATE pg_catalog."default" *varchar_pattern_ops
*);
(for TEXT fields, use *text_pattern_ops* in t
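(A hedged usage sketch reusing the example names above: a left-anchored LIKE
is the kind of query such a pattern_ops index can serve.)
EXPLAIN SELECT * FROM "MY_TABLE_NAME"
WHERE "MY_VARCHAR_FIELD_NAME" LIKE 'abc%';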
This worked. Thank You Chris!
One problem remains. The "select" user can also create tables, and then insert
into them. Need to prevent "select" user from being able to create tables.
When "select" user was created, no privs given to it...
postgres=# \du
List of
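(A hedged sketch of the usual cause, which the thread does not spell out: by
default every role may create objects in schema public, so revoke that:)
postgres=# revoke create on schema public from public;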
Hi,
can somebody give me some insights why the creation of this rule fails
with 'ERROR: SELECT rule's target list has too many entries'?
CREATE RULE "_RETURN" AS
ON SELECT TO history_relation
DO INSTEAD
SELECT
history_relation.id,
history_relation.relname,
pg_attribu
On 05/12/2012 09:59, ERR ORR wrote:
Hi Edson,
since you are using 'like' in your select, you may want to try the
following (example):
CREATE INDEX "MY_LONG_INDEX_NAME_IDX"
ON "MY_TABLE_NAME"
USING btree
("MY_VARCHAR_FIELD_NAME" COLLATE pg_catalog."default"
*varchar_pattern_ops*);
(fo
I'm using pg_bulkload to load large amounts of CSV data into a postgres
database hourly.
This database is replicated to a second node.
Whenever a bulk load happens, the indexes on the updated tables on the
secondary node become corrupt and are unusable until a reindex is run on the
primary node. I
Actually, maybe it didn't work. What's wrong with this picture...
fcadsql7> psql sde
psql (9.1.5)
Type "help" for help.
sde=# \du
List of roles
Role name | Attributes | Member of
---+
Henrik Kuhn writes:
> can somebody give me some insights why the creation of this rule fails
> with 'ERROR: SELECT rule's target list has too many entries'?
Probably that history_relation has fewer than three columns? But if
you want something more than guessing, you'd have to provide more
con
On 12/05/2012 04:07 PM, Tom Lane wrote:
> Henrik Kuhn writes:
>> can somebody give me some insights why the creation of this rule fails
>> with 'ERROR: SELECT rule's target list has too many entries'?
> Probably that history_relation has fewer than three columns? But if
> you want something more than gu
Henrik Kuhn writes:
> On 12/05/2012 04:07 PM, Tom Lane wrote:
>> Henrik Kuhn writes:
>>> can somebody give me some insights why the creation of this rule fails
>>> with 'ERROR: SELECT rule's target list has too many entries'?
>> Probably that history_relation has fewer than three columns? But
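(A hedged sketch of Tom's point, with hypothetical columns: an ON SELECT
"_RETURN" rule must produce exactly the columns of its relation, in number,
order, name, and type.)
CREATE TABLE history_relation (id oid, relname name);
CREATE RULE "_RETURN" AS
ON SELECT TO history_relation
DO INSTEAD
SELECT c.oid AS id, c.relname
FROM pg_class c;  -- two output columns, matching the table's two columns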
Hi there Postgressors,
I have a 15-element/dimension vector (floats) data type. I also have about
10 million of these and given a query vector I would like to search these
to find a number of nearest neighbours.
For this I intend to extend the current implementation of kd tree
in postgresql-9.2.
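(A hedged aside with hypothetical names: contrib/cube can already store such
vectors; the nearest-neighbour index search is the part that would need the
custom kd-tree work described above.)
CREATE EXTENSION cube;
CREATE TABLE vectors (id serial PRIMARY KEY, v cube);
INSERT INTO vectors (v) VALUES
  (cube(ARRAY[0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0,1.1,1.2,1.3,1.4,1.5]));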
JP writes:
> I have a 15-element/dimension vector (floats) data type. I also have about
> 10 million of these and given a query vector I would like to search these
> to find a number of nearest neighbours.
> For this I intend to extend the current implementation of kd tree
> in postgresql-9.2.1/s
On Wed, Dec 5, 2012 at 5:17 AM, James Cowell wrote:
> I'm using pg_bulkload to load large amounts of CSV data into a postgres
> database hourly.
>
> This database is replicated to a second node.
pg_bulkload is fundamentally incompatible with PITR, streaming, and
forms of replication that depend o
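(A hedged alternative sketch, table and path hypothetical: plain COPY is
fully WAL-logged and therefore replicates correctly, at the cost of
pg_bulkload's speed.)
COPY my_table FROM '/path/to/hourly.csv' WITH (FORMAT csv, HEADER true);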
libpq will automatically reconnect if the connection is dropped.
auto con = PQconnectdb("info");while (true) {PQclear(PQexec(con,
"SELECT * FROM foo LIMIT 1"));
std::this_thread::sleep_for(std::chrono::seconds(1));std::cout << "here
" << i++ << std::endl; }
$ sudo ifconfig eth0 do
On Wed, Dec 5, 2012 at 7:02 AM, Gauthier, Dave wrote:
> Actually, maybe it didn't work. What's wrong with this picture...
>
>
> sde=# alter default privileges for user "select" grant select on tables to
> "select";
Remove the 'for user "select"'
What that does is make the default permissions ap
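(With that clause removed, the statement would read:)
sde=# alter default privileges grant select on tables to "select";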
Hi Jeff,
Thanks for your reply.
I can understand that it would be incompatible if you bypassed the WAL logs,
but I've got it configured to use the buffered loader so that WAL logs are
written. The data replicates fine, the indexes don't, yet a reindex on the
primary fixes the index on the second
icholy writes:
> libpq will automatically reconnect if the connection is dropped.
No it won't. You'd have to do a PQreset() to make that happen.
> auto con = PQconnectdb("info");while (true) {PQclear(PQexec(con,
> "SELECT * FROM foo LIMIT 1"));
> std::this_thread::sleep_for(std::ch
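(A hedged fragment of what Tom describes, in the style of the snippet above:)
if (PQstatus(con) == CONNECTION_BAD)
    PQreset(con);  // closes and re-establishes the connection
                   // using the original connection parameters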
Hi all -
We have had Postgres 9.0.4 for a year. Replication is also set up and
works fine. On the master, the WAL files are growing and have almost reached
90% of the disk. How can I handle the situation? Can I delete the old
files? Please advise
Regards
Thanks for the help
akp geek wrote:
> We have had Postgres 9.0.4 for a year. Replication is also set up and
> works fine. On the master, the WAL files are growing and they have
> almost reached 90% of the disk. How can I handle the situation?
> Can I delete the old files? Please advise
Replicating how? Are these WAL files in
akp geek wrote:
> We have had Postgres 9.0.4 for a year. Replication is also set up and works fine.
> On the master, the WAL files are growing and have almost reached 90% of the
> disk.
> How can I handle the situation? Can I delete the old files? Please advise
Are the WAL files (the ones in pg_xlo
The WAL archives are growing, not the files in pg_xlog.
Regards
On Wed, Dec 5, 2012 at 3:38 PM, Albe Laurenz wrote:
> akp geek wrote:
> > We have had Postgres 9.0.4 for a year. Replication is also set up and works
> > fine.
> > On the master, the WAL files are growing and have almost reached 90% of
>
akp geek writes:
> The WAL archives are growing, not the files in pg_xlog.
You need to prune the archives of files that are too old to be
interesting. This doesn't happen automatically since PG has no
idea what your archiving requirements are.
Be sure to keep WAL files at least back to the start o
Thank you all. There is a mistake I made, then. I created a base backup when
I started the DB; that was exactly a year ago. I need to do that again
and then delete the old files.
Regards
On Wed, Dec 5, 2012 at 4:25 PM, Tom Lane wrote:
> akp geek writes:
> > WAL archives is growing, not the
akp geek writes:
> Thank you all. There is a mistake I made, then. I created a base backup when
> I started the DB; that was exactly a year ago. I need to do that again
> and then delete the old files.
Yeah, you should be making new base backups a lot more frequently than
that, probably. Conside
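(A hedged sketch of the 9.0-era routine Tom points at, label hypothetical:
after a fresh base backup, archived WAL older than its start can go.)
SELECT pg_start_backup('fresh_base');
-- file-level copy of the data directory goes here
SELECT pg_stop_backup();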
Hi, I wonder if someone can help me. I am getting a bit confused about an
error I am getting.
I have a partitioned table called sensor_values which is partitioned on a
timestamp entry.
The parent and some of the child tables are defined like so
(The child tables are yearly quarters and in my actual
On 12/05/2012 02:24 PM, Glenn Pierce wrote:
The error I get is
new row for relation "sensor_values_2011q3" violates check constraint
"sensor_values_2011q3_timestamp_check"
CONTEXT: SQL statement "INSERT INTO sensor_values_2011q3 VALUES (NEW.*)"
PL/pgSQL function "sensor_values_timestamp_sensor_