need to return query with alias
*example:*
create table emp (id integer, ename text);
insert into emp values(1, 'aaa');
create or replace function f_sample1() returns table (id integer, ename text) as $$
begin
  -- qualify columns with a table alias; unqualified id/ename would be
  -- ambiguous with the OUT parameters of the same name
  return query select e.id, e.ename from emp e;
end$$ language plpgsql;
select f_
mo_pkey index is being used.
>
Yes, you can drop the unused indexes, provided they are not the ones created
by primary key or unique key constraints. From your explanation, it seems
that the index "ix_mo_pk" is a user-defined index (created by you), so it
can be dropped if not used.
Regards,
Venkata B N
Fujitsu Australia
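A minimal sketch of checking for and dropping an unused, user-defined index (the index name below is taken from the thread; the query is an illustration, not from the original mail):

```sql
-- find user indexes that have never been scanned
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0;

-- if ix_mo_pk shows up and does not back a constraint, it can be dropped
DROP INDEX ix_mo_pk;
```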
est starting from LSN 4, which the logical decoder does not know.
Hope that helps !
Regards,
Venkata B N
Fujitsu Australia
automate the standby promotion using tools like pgpool-II or pacemaker.
I am trying to differentiate standby-promotion and failover mechanism here.
Are there any scheduled improvements here either for 9.6 or 9.7?
>
I am not 100% sure if there is anything developed from a failover
mechanism perspective.
Regards,
Venkata B N
Fujitsu Australia
pooling, load balancing and automatic
failover. It may not make sense if you do not wish to pool your
connections.
Regards,
Venkata B N
Fujitsu Australia
>
> I use a bash script to ship them. The script hasn't been changed, so
> it isn't the problem.
>
>
> *postgresql.conf:*
>
>> archive_command = 'exec nice -n 19 ionice -c 2 -n 7 archive_command.bash
>> "%p" slave01 slave02'
>
sible that,
thousands of requests can be served, but those being concurrent will be far
from realistic with the kind of hardware capacity you have. So, the solution
would be to have an appropriate tuning and benchmarking process in place.
Regards,
Venkata B N
Fujitsu, Australia
> On August 1, 2016 at 10:30:48
nly 1 would be the way to go to build a multi-tenant
application, but it strongly depends on your application-specific
requirements, how you are distributing the data across databases, and how
the users are accessing data across the databases.
Regards,
Venkata B N
Fujitsu Australia
(SELECT DocumentTypeID FROM
relDocumentTypeMetaDataName WHERE DocumentTypeID IS NOT NULL);
Thanks
Sridhar
OpenText
On Wed, Jun 29, 2016 at 6:04 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Wed, Jun 29, 2016 at 2:07 AM, Sridhar N Bamandlapally <
> sridhar@g
M, David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Mon, Jun 13, 2016 at 7:21 AM, Sridhar N Bamandlapally <
> sridhar@gmail.com> wrote:
>
>> Hi
>>
>> how to create table name with size, the limitation we are facing is 63
>> length
>&
believe. This can be used for production environment.
http://www.rubyrep.org/
Regards,
Venkata B N
Fujitsu Australia
Hi
The actual statement is MERGE INTO ... WHEN NOT MATCHED, which in PG
we migrated to a WITH ... INSERT.
However, yes, the SQL statement in my previous mail does not work in other
databases either; I was wrong.
Thanks, thanks again
Sridhar
OpenText
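For reference, a rough sketch of the MERGE ... WHEN NOT MATCHED to WITH ... INSERT rewrite described above (the table and column names here are made up for illustration, not taken from the thread):

```sql
-- Oracle: MERGE INTO tgt USING src ON (tgt.id = src.id)
--         WHEN NOT MATCHED THEN INSERT (id, val) VALUES (src.id, src.val)
-- one PostgreSQL equivalent:
WITH missing AS (
    SELECT s.id, s.val
    FROM src s
    LEFT JOIN tgt t ON t.id = s.id
    WHERE t.id IS NULL            -- anti-join: rows not yet in tgt
)
INSERT INTO tgt (id, val)
SELECT id, val FROM missing;
```

From 9.5 onward, INSERT ... ON CONFLICT DO NOTHING covers the same case when a unique constraint exists on the join key.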
On Wed, Jun 29, 2016 at 11:58 AM, Tom Lane wrote:
> Sridha
Hi
Please go through below case
postgres=# CREATE TABLE emp (id INTEGER unique, ename VARCHAR);
CREATE TABLE
postgres=# INSERT INTO emp VALUES (null, 'aaa');
INSERT 0 1
postgres=# INSERT INTO emp VALUES (null, 'bbb');
INSERT 0 1
postgres=# INSERT INTO emp VALUES (3, 'ccc');
INSERT 0 1
postgres=#
y1_funcs.2.2.2"
> ERROR: could not access file "$libdir/slony1_funcs.2.2.2": No such file
> or directory
>
How did you upgrade Slony? Did you uninstall Slony, drop the Slony-related
schemas and re-install and re-configure Slony from scratch? Or did you
ensure SLONIK_UPDATE_FUNCTIONS was executed successfully?
Regards,
Venkata B N
Fujitsu Australia
Hi
How can we create a table name longer than the 63-character limit we are
facing?
These are dynamic tables created from the application.
The issue is:
we cannot suggest/tell the client to change the NAMEDATALEN constant in
src/include/pg_config_manual.h.
Do we have any other option,
please?
Thanks
Sridhar
OpenTe
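For illustration, identifiers longer than NAMEDATALEN - 1 (63 bytes by default) are silently truncated rather than rejected (the long table name here is made up):

```sql
-- PostgreSQL raises a NOTICE and truncates the identifier to its
-- first 63 bytes; the table is still created under the shortened name
CREATE TABLE this_is_a_table_name_that_is_considerably_longer_than_sixty_three_characters (
    id integer
);
```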
, every online archive db is use case for this.
Thanks
Sridhar
Opentext
On 10 Jun 2016 22:36, "David G. Johnston"
wrote:
> On Fri, Jun 10, 2016 at 4:11 AM, Sridhar N Bamandlapally <
> sridhar@gmail.com> wrote:
>
>> Hi
>>
>> Is there any feature in P
One thing: we can restrict the "begin noarchive" transaction block to DELETE
and SELECT only.
Sridhar
Opentext
On 10 Jun 2016 21:57, "Sridhar N Bamandlapally"
wrote:
> This is what I feel will give me solution to maintain production
> (current+7days) and archive(curren
John R Pierce wrote:
>
>> On 6/10/2016 2:18 AM, Sridhar N Bamandlapally wrote:
>>
>>> This/These will be performed in Production to clean-up archive which
>>> will not be sync with Archive/DW DB only
>>>
>>> one heads-up is Archive/DW DB may need to
One thing looks possible (the feature is not available), just an idea.
Example syntax:
BEGIN NOARCHIVE;
--- transaction-1
--- transaction-2
.
.
--- transaction-N
END;
This/these will be performed in Production to clean up data; the clean-up
will not be synced to the Archive/DW DB.
one heads-up
Hi
Is there any feature in PostgreSQL where an online DW (data warehouse) is
possible?
I am looking for a scenario like:
1. Production DB will have CURRENT + LAST 7 DAYS data only
2. Archive/DW DB will have CURRENT + COMPLETE HISTORY
expecting something like streaming, but not ETL
Thanks
Sridhar
On Thu, Jun 9, 2016 at 8:01 PM, Sunil N Shinde
wrote:
> Thanks Venkata.
>
>
>
> I am considering latest version now i.e. 9.4 or 9.5 on Linux 6.
>
> Is there any difference in setup from 9.1 to 9.5?
>
There is no difference in the setup. Streaming Replication in the ver
Thanks Venkata.
I am considering latest version now i.e. 9.4 or 9.5 on Linux 6.
Is there any difference in setup from 9.1 to 9.5?
Thanks & Regards,
Sunil N Shinde
From: Venkata Balaji N [mailto:nag1...@gmail.com]
Sent: 08 June 2016 12:46
To: Sunil N Shinde
Cc: pgsql-general@postgresql
6
>
Below is the link which explains the basic steps to set up "streaming
replication":
https://www.postgresql.org/docs/9.1/static/warm-standby.html
By the way, version 9.1 is very old and will reach end-of-life soon. You
are 4 major versions behind; did you consider using the latest version?
Regards,
Venkata B N
Fujitsu Australia
OS -- Linux 6
Thanks & Regards,
Sunil N Shinde
g
Thanks
Sridhar
OpenText
On Mon, Jun 6, 2016 at 5:57 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Mon, Jun 6, 2016 at 7:17 AM, Sridhar N Bamandlapally <
> sridhar@gmail.com> wrote:
>
>> Hi
>>
>> Is there any option in PGPLS
Hi
Is there any option in PL/pgSQL which can RETURN a table or SETOF rows along
with an OUT parameter?
please
Thanks
Sridhar
OpenText
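One hedged sketch in the direction of the question above: RETURNS TABLE is itself shorthand for OUT parameters plus RETURNS SETOF, so OUT parameters can produce a set (the function, table and column names here are illustrative):

```sql
-- equivalent to RETURNS TABLE (id integer, ename text)
CREATE OR REPLACE FUNCTION f_emp(OUT id integer, OUT ename text)
RETURNS SETOF record AS $$
BEGIN
    RETURN QUERY SELECT e.id, e.ename FROM emp e;
END
$$ LANGUAGE plpgsql;

SELECT * FROM f_emp();
```

Note that with a set-returning function, any additional OUT parameter simply becomes another column of each returned row rather than a separate scalar result.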
details about your replication architecture ?
Slave is unable to read the required WAL segment for some reason ? Do you
have this situation happening regularly ?
Regards,
Venkata B N
Fujitsu Australia
ailover ?
>
> I still have the backups and WAL logs and could probably try use them to
> build a bad slave. But that's a few hundred GB of WAL and is not worth
> it unless I know what to look for.
>
Well, that is the workaround if you are encountering data loss due to this
bug.
The developers should be able to tell you if this is a bug in streaming
replication. I haven't encountered such a bug so far.
Regards,
Venkata B N
Fujitsu Australia
_start_backup('labelname');" before you initiate rsync and "select
pg_stop_backup()" after you complete rsync. That way, PostgreSQL knows that
you are rsyncing and also identifies the required WALs to be copied over.
Or, if you can shut down M1 for some time, then simply shut down M1, copy
over (or rsync) the data directory to M2, and then start the M2 instance.
That should work.
Regards,
Venkata B N
Fujitsu Australia
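The first approach above, sketched (the backup label is a placeholder; the rsync itself runs outside the database):

```sql
-- tell the server an online base backup is starting
SELECT pg_start_backup('labelname');

-- ... rsync the data directory from M1 to M2 here ...

-- close the backup; the server reports which WALs the copy needs
SELECT pg_stop_backup();
```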
ted successfully ? When pg_restore was running, did
you see anything suspicious in the postgresql logfiles ?
I have data_checksums switched on so am suspecting a streaming
> replication bug. Anyone know of a recent bug which could have caused
> this?
>
I cannot conclude at this point. I have encountered these kinds of errors
with indexes, and re-indexing fixed them.
Regards,
Venkata B N
Fujitsu Australia
This I got; I need some implicit way. Can we maybe write this in a RULE ON
SELECT?
Thanks
Sridhar
OpenText
On Mon, May 30, 2016 at 1:05 PM, Michael Paquier
wrote:
> On Mon, May 30, 2016 at 4:25 PM, Sridhar N Bamandlapally
> wrote:
> > Hi
> >
> > Is there a way to
Hi
Is there a way to do an implicit SELECT on a UUID datatype in uppercase?
Please
Thanks
Sridhar
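There is no implicit option that I know of; an explicit cast is one workaround (the column and table names below are hypothetical):

```sql
-- uuid has no case of its own; cast to text and upper-case it
SELECT upper(uuid_col::text) FROM some_table;

-- or wrap it in a view so callers see the uppercase form implicitly
CREATE VIEW some_table_v AS
SELECT upper(uuid_col::text) AS uuid_col FROM some_table;
```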
On Sun, May 22, 2016 at 6:38 PM, Venkata Balaji N wrote:
>
> On Sun, May 22, 2016 at 1:59 AM, Tom Lane wrote:
>
>> Venkata Balaji N writes:
>> > On Sat, May 21, 2016 at 1:04 AM, Tom Lane wrote:
>> >> http://www.unix.com/man-page/opensolaris/3c/atomic_
ng them
> because of any recommendation or any issue faced ?.
>
I just gave LDFLAGS in case the compiler cannot pick up the libraries, and
the locations I gave in LDFLAGS are for 64-bit libraries. "-m64" in CFLAGS
generates an error, while CC="gcc -m64" runs successfully. So, no issues
there.
Regards,
Venkata B N
Fujitsu Australia
On Sun, May 22, 2016 at 1:59 AM, Tom Lane wrote:
> Venkata Balaji N writes:
> > On Sat, May 21, 2016 at 1:04 AM, Tom Lane wrote:
> >> http://www.unix.com/man-page/opensolaris/3c/atomic_cas/
> >> http://docs.oracle.com/cd/E23824_01/html/821-1465/atomic-cas-3c.html
>
On Sat, May 21, 2016 at 1:04 AM, Tom Lane wrote:
> Venkata Balaji N writes:
> > "make" command is generating the following error while compiling
> > postgresql-9.5.3 on Solaris SPARC.
>
> > Undefined first referenced
> > sy
ude/dynloader.h
config.status: linking src/include/port/solaris.h to
src/include/pg_config_os.h
config.status: linking src/makefiles/Makefile.solaris to src/Makefile.port
Regards,
Venkata B N
Fujitsu Australia
Hi
I control it this way:
if "state_change" is long ago and "state" is idle, then I use the
function pg_terminate_backend(integer), which returns TRUE if the backend
was killed successfully, else FALSE.
example:
# select pg_terminate_backend ( pid ) from pg_stat_activity where
state='idle' and state_change < (cur
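A complete form of the truncated query above (the 30-minute threshold is an assumption; pid and state are the column names from 9.2 onward):

```sql
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
  AND state_change < current_timestamp - interval '30 minutes';
```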
rminate IDLE connections after a certain time.
It would be much better if your application could terminate its connections
to the database.
Regards,
Venkata B N
Fujitsu Australia
On Sat, May 14, 2016 at 5:38 PM, Venkata Balaji N wrote:
>
> On Wed, May 11, 2016 at 9:04 PM, wrote:
>
>> I apologise for the missing data.
>>
>> we are running 9.1.15 on debian servers.
>>
>
> There is a possibility of making the old master standby if
pg_dump: The command was: COPY public.TABLE (COLUMNS) TO stdout;
>
I suspect this is due to the cleanup by VACUUM on the primary.
Regards,
Venkata B N
Fujitsu Australia
When you start the old master, it will ask for timeline history files,
which you need to manually transfer from the new master.
- The old master must become a standby.
Hope this helps and works in 9.1.x.
Regards,
Venkata B N
Fujitsu Australia
o Sridhar,
>
> Have you tried the 'coalesce' function to handle the nulls?
>
>
> Kind Regards,
>
> Adam Pearson
> --
> *From:* pgsql-general-ow...@postgresql.org <
> pgsql-general-ow...@postgresql.org> on behalf of Sridhar
Thanks Pavel
Great !!
I was thinking both || and CONCAT do the same.
Thanks again
-
Sridhar
OpenText
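For reference, the difference behind the thread above: the || operator propagates NULL, while concat() treats NULL arguments as empty strings (a minimal illustration, not from the original mails):

```sql
SELECT 'ABCD' || NULL;                      -- NULL: || propagates NULL
SELECT concat('ABCD', NULL, 'EF');          -- 'ABCDEF': concat skips NULLs
SELECT 'ABCD' || coalesce(NULL::text, '');  -- 'ABCD': coalesce as a guard
```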
On Thu, May 12, 2016 at 2:22 PM, Pavel Stehule
wrote:
> Hi
>
> 2016-05-12 10:47 GMT+02:00 Sridhar N Bamandlapally
> :
>
>> Hi
>>
>> In migration, am facing
Hi
In migration, I am facing an issue with NULL concatenation in PL/pgSQL:
concatenating NULL at any position into a text/varchar string sets the
entire string result to NULL.
*In Oracle:*
declare
txt1 VARCHAR2(100) := 'ABCD';
txt2 VARCHAR2(100) := NULL;
txt3 VARCHAR2(100)
basebackup.
>
Which version of PostgreSQL are you using?
You need to shut down the master first, then promote the slave, and then
the other way round; but this can be clarified only if you let us know the
PostgreSQL version. This is quite tricky in 9.2.x and changes from 9.3.x.
Regards,
Venkata B N
Fujitsu Australia
e to build the schema), then you will need to restore the
database from the backup and manually clean up all the data, which is quite
a bit of work.
Regards,
Venkata B N
Fujitsu Australia
hen
queries are being executed on slave.
Regards,
Venkata B N
Fujitsu Australia
abled. Segment size cannot
be larger than 1GB" )
Thanks
Sridhar
On Tue, Mar 29, 2016 at 9:01 PM, Daniel Verite
wrote:
> Sridhar N Bamandlapally wrote:
>
> > due to size limitation BYTEA was not considered
>
> You could adopt for a custom table the same kind of structu
954183248
> Website: www.ocs.pe
>
> Sent from my Sony Xperia™ smartphone
>
>
> Sridhar N Bamandlapally wrote
>
>
> all media files are stored in database with size varies from 1MB - 5GB
>
> based on media file types and user-group we storing in different tab
was not considered
Thanks
Sridhar
On Tue, Mar 29, 2016 at 3:05 PM, John R Pierce wrote:
> On 3/29/2016 2:13 AM, Sridhar N Bamandlapally wrote:
>
>> Hi
>>
>> pg_largeobject is creating performance issues as it grow due to single
>> point storage(for all tables)
>
Hi
pg_largeobject is creating performance issues as it grows, because it is a
single point of storage (for all tables).
Is there any alternative apart from bytea?
For example, configuring a large-object table at the table-column level,
with the oid PK (primary key) stored in pg_largeobject.
Thanks
Sridhar
Hi
Is there any way to create a nested function?
An Oracle-to-Postgres migration requires referencing an outer function's
variable inside a nested function, without passing it as a nested-function
parameter.
Oracle sample:
---
create or replace function f1(n number) return number
is
vs number:=1
Hi
Is there a way to avoid creating a rule during creation of a view?
please let me know
Thanks
Sridhar
Do you see the lag all the time ? Did you test if the replication
is working fine ?
You can check that via pg_controldata as well. What does sync_state in
pg_stat_replication say ?
Regards,
Venkata B N
Fujitsu Australia
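A minimal query for the check mentioned above:

```sql
-- run on the primary: one row per connected standby
SELECT client_addr, state, sync_state
FROM pg_stat_replication;
```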
t tables in the database ? How are you calculating
the database size ?
Each field size could vary depending upon the number of characters you
entered. If you wish to understand the table size stats and its tuple size
"pgstattuple" contrib module is the way to go.
There are other components in PostgreSQL's data directory which occupy
disk space: PostgreSQL logs (if enabled), pg_xlog, etc.
What is the size of each directory within the data directory on disk?
Regards,
Venkata B N
Fujitsu Australia
t's found.
> #
> trigger_file = '/data/main/primary.trigger'
>
Can you consider setting recovery_target_timeline='latest' as well? And
can you let us know if you see anything weird in the PostgreSQL logfiles
at the DR?
Is DR in complete sync with the slave ?
Regards,
Venkata B N
Fujitsu Australia
y using the backup of Master database + WAL archives (if available) and
setup primary_conninfo to point to slave database in recovery.conf on DR.
Can you please let us know which version of PostgreSQL you are using?
That's important for us to make any suggestions.
Regards,
Venkata B N
Fujitsu Australia
OK, let me put it this way:
in JDBC we have *setAutoCommit( false )*, and all DMLs are independent
transactions;
when any transaction fails, the session does not allow the next
transactions.
In Java, when we do setAutoCommit( false ), it behaves as if all
transactions were in a BEGIN-END block, which is not what we want.
If we want transactions in a "begin-end" block then that is fine,
but in this case all these transactions are independent with autocommit off,
and it is the user's choice to continue with commit or rollback.
Thanks
Sridhar
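One common workaround (not from this thread) is to set a savepoint before each statement, so a single failure does not abort the whole transaction. In SQL terms (the table and values are hypothetical):

```sql
BEGIN;
INSERT INTO t VALUES (1);
SAVEPOINT sp;
INSERT INTO t VALUES (1);   -- suppose this fails (e.g. duplicate key)
ROLLBACK TO SAVEPOINT sp;   -- clears the error; transaction stays usable
INSERT INTO t VALUES (2);   -- subsequent statements are accepted
COMMIT;
```

From JDBC the same pattern is available via Connection.setSavepoint() and Connection.rollback(Savepoint).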
On Thu, Feb 18, 2016 at 1:43 PM, Vladimir Sitnikov <
sitnikov.vladi...@gmail.com> wrote:
> >Is it
Hi
We are facing an issue with PostgreSQL JDBC behaviour.
In Java, under autocommit-false mode:
1. If any transaction fails in between, then the next transaction throws an
exception saying "current transaction is aborted, commands ignored until
end of transaction block".
2. Even if exception is suppressed wi
s for the whole PostgreSQL cluster shared by all
the databases in the cluster.
Regards,
Venkata B N
Fujitsu Australia
urenz
wrote:
> Sridhar N Bamandlapally wrote:
> > is there any feature available in postgres to drop stats on table?
>
> What about
>
> DELETE FROM pg_catalog.pg_statistic WHERE starelid =
>
> Yours,
> Laurenz Albe
>
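A complete form of the suggestion quoted above, with a placeholder table name (direct catalog manipulation, so use with caution):

```sql
-- removes the planner statistics rows for one table;
-- a later ANALYZE will rebuild them
DELETE FROM pg_catalog.pg_statistic
WHERE starelid = 'my_table'::regclass;
```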
Hi
Is there any feature available in Postgres to drop stats on a table?
thanks
Sridhar
art up to accept drop schema via pg_restore?
>
"pg_ctl -D status" command will let you know if the
PostgreSQL cluster is up and running.
Are you sure you do not have any other processes running while the
pg_restore process is running? What's the background of process 2720?
Regards,
Venkata B N
Fujitsu Australia
is the problem. The fact that its
> happening more often makes me think the db is somehow involved in the
> problem.
>
> I did a reindex and vacuum recently hoping it would alleviate the problem.
> Not so. Any advice would be appreciated.
>
The first thing to do is to tune the checkpoint parameters.
Regards,
Venkata B N
Fujitsu Australia
be upgraded to 9.4.x.
After a detailed analysis on memory usage by OS and other processes, is it
safe to advise on configuring shared_buffers to 2 GB ?
Any advise will be appreciated.
Regards,
Venkata Balaji N
Fujitsu Australia
.?
>
Do you mean that you want to import the data into a different schema in
Postgres? Where do you want to change the name?
If I understand your question correctly, there is a schema option in
ora2pg.conf which might help you.
Regards,
Venkata B N
Fujitsu Australia
tables
2) We also need an index on table auths_with_trans, column user_id.
3) Replacing users with temp_users in the BEGIN block.
With this, we may reduce the impact/dependency on the regular crontab.
Thanks
Sridhar BN
On Fri, Sep 11, 2015 at 10:52 AM, Venkata Balaji N
wrote:
>
> On Thu, Sep 10, 2015 at 3
d to be careful regarding
privileges/grants and dependencies on the table.
Or the second approach would be --
Create a table called users_orig from the "users" table and execute SELECT
on user_orig table and let the TRUNCATE/data-repopulation operation run on
"users" table. This will be a problem if the data is huge. It might take up
your hardware resources.
The third and simplest approach would be to execute SELECT and TRUNCATE at
different times.
All of the above approaches are without considering data-size and other
critical aspects of environment, which you need to worry about.
Regards,
Venkata Balaji N
Fujitsu Australia
.sql format, but even then I am unable to upload.
So, I kindly request you to help with this, as the database contains more
than 2000 students' data.
Regards
Murali N
System Admin,
CIT, Gubbi, +91 9448661055
Hi
I need a top-10 customer list in the financial or healthcare domain
(irrespective of open-source or proprietary tools).
We need to showcase it to our customer for building an analytical database.
Please do share; it will be really helpful.
Thanks
Sridhar BN
he database end. If you want the process not to continue
anymore at the database end, then you need to manually kill it at the
database end too, using pg_cancel_backend() or pg_terminate_backend().
Regards,
Venkata Balaji N
Fujitsu Australia
/pg_archivecleanup
> /var/lib/postgresql/9.1/wal_archive/ %r'
>
>
> How can I reduce the number of WAL files on the hot_stanby slave ?
>
If the number of WAL files in pg_xlog is growing, then you need to look at
why the files are not getting deleted.
Do you see the master and standby in sync? You can check that by getting
the current pg_xlog position on the standby.
Regards,
Venkata Balaji N
Fujitsu Australia
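On the standby, the receive and replay positions can be compared (9.x function names; these were renamed to pg_last_wal_* in later releases):

```sql
-- run on the standby: equal values mean all received WAL has been replayed
SELECT pg_last_xlog_receive_location(),
       pg_last_xlog_replay_location();
```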
e
slave to a consistent state. Nothing can be advised straight without
knowing your replication configuration/architecture details.
Regards,
Venkata Balaji N
Fujitsu Australia
t.relname
> WHERE idstat.idx_scan < 200
> AND indexdef !~* 'unique'
> ORDER BY idstat.schemaname,
> idstat.relname,
> indexrelname;
>
Not sure why you have "<200".
Regards,
Venkata Balaji N
Fujitsu Australia
On 2015-05-22, Albe Laurenz wrote:
Nicklas Avén wrote:
>> I was a little surprised by this behavior.
>> Is this what is supposed to happen?
>>
>> This query returns what I want:
>>
>> with
>> a as (select generate_series(1,3) a_val)
>> ,b as (select generate_series(1,2) b_val)
>> ,c as (select ge
Hello
I was a little surprised by this behavior.
Is this what is supposed to happen?
This query returns what I want:
with
a as (select generate_series(1,3) a_val)
,b as (select generate_series(1,2) b_val)
,c as (select generate_series(1,1) c_val)
select * from a
inner join c on a.a_val=c.c
574 LOG: database system is ready to accept read only
> connections
> May 21 21:28:58 d8 postgres[53575]: [16-1] 2015-05-21 21:28:58.349
> CEST @ 53575 LOG: restored log file "00044C4D0091" from
> archive
> May 21 21:28:59 d8 postgres[53575]: [17-1] 2015-05-21 21:28:59.962
> CEST @ 53575 LOG: restored log file "00044C4D0092" from
> archive
> May 21 21:29:00 d8 postgres[53575]: [18-1] 2015-05-21 21:29:00.037
> CEST @ 53575 LOG: unexpected pageaddr 4C45/DC00 in log segment
> 00044C4D0093, offset 0
> May 21 21:29:00 d8 postgres[53640]: [4-1] 2015-05-21 21:29:00.047 CEST
> @ 53640 LOG: started streaming WAL from primary at 4C4D/9200 on
> timeline 4
>
> What was wrong?
>
This error message can be ignored. Did you try shipping
00044C4D0093 to the slave from the master's archive location, to
see if the slave picked it up and applied it?
Regards,
Venkata B N
Fujitsu Australia
r archiving is not working as expected.
It is the checkpoint process which cleans up the WAL files from pg_xlog, if
that is not happening
Regards,
Venkata Balaji N
Fujitsu Australia
?
> How do I handle this case (pg_xlog folder size) when production people are
> entering data in bulk? Kindly suggest. Am I missing something in my
> postgresql.conf or somewhere else?
>
What is the *archive_timeout* value you have ?
Regards,
Venkata Balaji N
Fujitsu Australia
On Tue, May 5, 2015 at 7:57 AM, Edson F. Lidorio
wrote:
>
>
> On 04-05-2015 00:46, Venkata Balaji N wrote:
>
>> You do not see the above WAL file in the archive directory ?
>> "/mnt/server/archivedir" is shared between master and slave databases ? The
>
re rare and are not deleting much, then frequent
VACUUM FULL is not ideal.
Regards,
Venkata Balaji N
Fujitsu Australia
e at the OS level
(this is not related to pg_xlog directory).
As VACUUMING is a data change operation, "pg_xlog" will also have only the
WAL data (modifications) written at the time of VACUUMING.
http://www.postgresql.org/docs/9.4/static/sql-vacuum.html
Regards,
Venkata Balaji N
Fujitsu Australia
been a simple
update almost matched the activity of 200+ processes (which are not yet
running at full power).
Any suggestions for where to look?
On Sat, 15 Mar 2014 11:49:36 +0100, Yngve N. Pettersen
wrote:
Hi again,
A further update, and it looks like I have finally been able to &quo
eign-key constraints:
"queue_queueitems_group_id_fkey" FOREIGN KEY (group_id) REFERENCES
queue_queuerun(id) DEFERRABLE INITIALLY DEFERRED
"queue_queueitems_target_id_fkey" FOREIGN KEY (target_id) REFERENCES
queue_queuetarget(id) DEFERRABLE INITIALLY DEFERRED
On Mon, 10 Mar
the UPDATE command
UPDATE probedata2_probequeue SET "state"=E'S'
WHERE "state" = E'I' AND id IN ( .)
RETURNING id
and I have confirmed that with the built-in Django debug query information
in my test script.
On Sat, 08 Feb 2014 15:5
On Sat, 04 Jan 2014 19:40:31 +0100, Andrew Sullivan
wrote:
On Sat, Jan 04, 2014 at 07:07:08PM +0100, Yngve N. Pettersen wrote:
I tried that before, but ran into some issues, IIRC a similar looping
problem as this where queries never ended. I split it up in an attempt
to
solve that
Hi,
On Sat, 04 Jan 2014 16:23:42 +0100, Andrew Sullivan
wrote:
On Sat, Jan 04, 2014 at 12:14:42PM +0100, Yngve N. Pettersen wrote:
The update query looks like this:
UPDATE queue SET state = E'S' WHERE state = E'I' AND id IN () RETURNING id;
There is a BEGIN/CO
Hello all,
I am running a Postgresql 9.2 system (IIRC v9.2.4; I am about to upgrade
to 9.2.6) on a system with 32-cores, 256GB RAM, 64GB shared RAM for
postgresql. The applications I am running are written using Django
(currently v1.5)
For a while I have been observing what may be special
Thanks, but still not the same. For example, is there subquery flattening?
Are there any other query rewrites? Is there a website or book with such
detail?
On Thu, Nov 28, 2013 at 5:54 PM, Adrian Klaver wrote:
> On 11/28/2013 09:46 AM, N wrote:
>>
>> Are there documents speci
Are there documents specifying the query optimisations in PostgreSQL,
like SQLite's (http://www.sqlite.org/optoverview.html)?
From the web, I can see there are index and join optimisations, but is
there anything else, like subquery flattening?
thanks.
Bentley.
Thanks a lot, it worked for me.
Thanks & Regards,
Saritha
On Thu, Oct 3, 2013 at 6:40 PM, David Johnston wrote:
> saritha N wrote
> > ERROR: invalid value "{0" for "dd"
> > DETAIL: Value must be an integer.
>
> The leading "{"
Hi,
I am getting an error while retrieving values from XML and converting
them into date format.
When I execute this query:
select history from KM_REL_FF_USR_DATA where USERID in (53008) and
SHARE_WORKING_GROUP = 15683
output is
" Enter what you
want to achieve from this programme:
"
from th
Hi,
I am new to PostgreSQL. We are migrating our application from Oracle to
PostgreSQL, version 9.2. Almost everything is migrated, but I am unable to
write a function for UPDATEXML which works the same as in Oracle. Please
help me to resolve this.
Thanks & Regards,
Saritha
Dear All,
Hope you are doing well.
I wanted to find out the suspicious queries run by users. Can you please
suggest the best system table in PG/Greenplum to find these out?
Appreciate your quick response.
Regards,
Girish
Hello Friends,
Hope you are all well...
I have a specific issue where my query fails with the below error while
trying to export data from the pgAdmin SQL tool.
There are no such issues when the result set is small, but it returns an
error when the result set is a bit large.
Any inputs please? Where and
> user=,db= FATAL: index "316879235" contains unexpected zero page at block
> 264
> user=,db= HINT: Please REINDEX it.
>
> Please tell me what can I do to recover?
>
Did you try rebuilding the index? Re-indexing or re-creating a new index
should resolve this.
Regards,
VBN
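Following the hint in the quoted error, the rebuild would look like the statement below (the index name is taken from the error message; if "316879235" is an OID rather than the relation name, it would first need to be resolved via pg_class):

```sql
REINDEX INDEX "316879235";
```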
10244000F: No such file
or directory
LOG: streaming replication successfully connected to primary
--
With b/r Viacheslav N Tararin.
Abonent Logic Land: http://abonent.logicland.com.ua/
Logic Land ltd.: http://logicland.com.ua/
Uralska st., 8, Kamenets-Podilskiy,
Khmelnitski