I have read a variety of explanations on the internet for idle PostgreSQL
processes but still do not understand the following typical scenario.
This is on Debian (postgresql 9.4.4-1.pgdg80+1).
Running the following (as user crest) on an empty table using psql:
select * from w
Thanks Adrian and Tom.
Tom wrote:
>
> It's not entirely clear what your question is, but here are some possible
> answers:
>
> 1. For quite some time now, the "query" column in pg_stat_activity has
> been defined as "the query currently or most recently run by the session";
> it's intentional tha
I have a long-running query (process 17434) that has now been running for
more than 6 days. It involves three tables, one of which contains XML data.
On another, unrelated table with 30718567 records, I ran a query to create
an index on a field. This morning I cancelled this process because it did
no
On 25 August 2015 at 15:52, Merlin Moncure wrote:
> h...
>
> creating an index requires exclusive access. Did you try the
> concurrent variant?
>
Yes. The one which I stopped after 5 days was running concurrently. There
was a similar lock involved.
Regards
Johann
--
Because experiencing y
I have a table (A) with 750+ million records and another one (B) which is a
summary of information in A, containing 30+ million records.
Now I wonder whether it wouldn't be better to have B as an indexed
materialized view. Though not often, there will be situations where B has
to be updated.
In th
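If B becomes a materialized view, it can be refreshed without blocking readers once it has a unique index; a sketch under assumed names (b_mv, some_key), since the real schema is not shown in the thread:

```sql
-- Hypothetical names: the real tables and columns are not shown here.
CREATE MATERIALIZED VIEW b_mv AS
  SELECT some_key, count(*) AS n
  FROM a
  GROUP BY some_key;

-- A unique index both speeds lookups and permits non-blocking refreshes.
CREATE UNIQUE INDEX b_mv_key_idx ON b_mv (some_key);

-- Later, whenever A has changed (CONCURRENTLY needs the unique index):
REFRESH MATERIALIZED VIEW CONCURRENTLY b_mv;
```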
When I unnest (regexp_split_to_array) or use regexp_split_to_table
I get one value, but the length of the array without the unnest is 193.
Why would that be?
wos=# select array_length(regexp_split_to_array(tsv::text, E'\\\s+'),1),
unnest(regexp_split_to_array(tsv::text, E'\\\s+'))
wos-# from wos_201
> Unnest is evaluated before LIMIT
> You need to use derived table
>
> postgres=# select unnest(a) from (select a from x limit 1) si
>
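Written out in full, the quoted workaround looks like this (a sketch with a hypothetical table x holding an array column a):

```sql
-- LIMIT is applied after the set-returning unnest(), so to limit the
-- source rows first, push the LIMIT into a derived table:
SELECT unnest(a)
FROM (SELECT a FROM x LIMIT 1) AS s;
```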
Thank you!
Regards
Johann
We would like to have a master (read/write) version of a database (or a
schema or two) on one server and a read-only version of the same
database. The only changes on the second one would be to duplicate changes
to views, materialized views and indexes that also happened on the first
one.
We work
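If the servers can run PostgreSQL 10 or later (which may postdate this thread), built-in logical replication covers the data side of this; a minimal sketch with hypothetical names (my_pub, my_sub, the connection string):

```sql
-- On the master (publisher):
CREATE PUBLICATION my_pub FOR ALL TABLES;

-- On the read-only copy (subscriber), after creating the same schema there:
CREATE SUBSCRIPTION my_sub
  CONNECTION 'host=master dbname=mydb user=repl'
  PUBLICATION my_pub;
```

Note that DDL is not replicated, so changes to views, materialized views and indexes would still have to be duplicated by hand, as the question anticipates.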
Web2py is another (and maybe simpler) alternative to Django.
Regards
Johann
--
Because experiencing your loyal love is better than life itself,
my lips will praise you. (Psalm 63:3)
On 28 November 2016 at 21:11, Devin Smith wrote:
> Hi,
>
>
> I recently wrote a query that I thought was easy to reason about, and I
> assumed the query planner would execute it efficiently.
>
> SELECT * FROM xtag_stack_feed
> JOIN (
> SELECT DISTINCT ON (do_post_xtag.xtag_ci) *
> FROM do_pos
We did a pg_upgrade from 9.5 to 9.6
Now, when running the 9.6 server the following error message shows up
regularly in the log:
postgres@template1 ERROR: column "waiting" does not exist at character 217
As far as I understand the following remark in the release notes, the
column 'waiting' sho
On 9 December 2016 at 10:13, Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:
>
> On 09/12/2016 10:04, John R Pierce wrote:
>
> are you running some sort of monitoring software that's periodically
> looking at pg_stat_activity ? whatever query it's doing needs to be
> rewritten to take in
On 25 January 2017 at 08:32, Gavin Flower
wrote:
>
>
>>
>> What is 'shapefile'?
>
> I don't recall ever coming across that term!
> https://en.wikipedia.org/wiki/Shapefile
>
See https://en.wikipedia.org/wiki/Shapefile
Regards
Johann
How do I formulate the ON CONFLICT DO UPDATE section of this query? When I
try set title = q.title, q is unknown. When I try to change 'title' in the
select part to something else and use title = ti, I get the message that ti
cannot be used in this part of the query.
INSERT INTO wos_2017_1.article (
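What usually resolves this is the EXCLUDED pseudo-table, which names the row proposed for insertion; a sketch with hypothetical column names and source table (id, title, staging), since the statement above is cut off:

```sql
-- EXCLUDED refers to the row that was proposed for insertion.
INSERT INTO wos_2017_1.article (id, title)
SELECT id, title
FROM staging
ON CONFLICT (id)
DO UPDATE SET title = EXCLUDED.title;
```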
I have never seen this problem before. It also occurred while trying to
import a dump (made by a 9.5 client of a 9.4 database).
Table definition:
-
CREATE TABLE source.annual
(
filename text,
gzipfile text,
id serial NOT NULL,
tsv tsvector,
ut character varying(19),
xml xml,
processe
On 26 April 2016 at 15:35, Charles Clavadetscher wrote:
> Hello Johann
>
> There are two to_tsvector functions:
>
> charles@charles.[local]=# \df to_tsvector
> List of functions
>Schema |Name | Result data type | Argument data types | Type
>
> --
I have several large (7GB+) xml files to get into an SQL database.
The xml-files can contain up to 500 000 subrecords which I want to be able
to query in the database.
They are too large to do something like this:
insert into rawxml (xml) select XMLPARSE (DOCUMENT
CONVERT_FROM(PG_READ_BINARY_FI
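Once a document is in an xml column, the subrecords can be split out server-side; a sketch assuming the repeating element is called <rec> and a hypothetical target table subrecord (the real names are not shown):

```sql
-- xpath() returns xml[]; unnest() turns that into one row per subrecord.
INSERT INTO subrecord (xml)
SELECT unnest(xpath('//rec', xml))
FROM rawxml;
```

For 7 GB+ files the initial parse itself may still be the bottleneck, so streaming the file outside the database first can remain necessary.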
Thanks Arjen,
> def q(v):
> return b'"' + v.replace(b'"', b'""') + b'"'
>
> return b','.join(q(f) for f in fields) + b'\n'
>
> In the end I also had some other problems with the XML (namespaces), so I
> used:
>
> etree.tostring(element, method='c14n', exclusive=True)
>
This helped. My code
On 5 January 2013 20:20, ERR ORR wrote:
>
>
>
> Queries which use "WHERE "TST_PAYLOAD" LIKE 'SEAT%'" go to the btree
> index as it should.
> Queries which use "WHERE "TST_PAYLOAD" LIKE '%EAT%'" *should* use the
> GIST index but do a full table scan instead.
> (I am looking for names like 'SEATTLE
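For what it's worth, infix patterns like '%EAT%' are normally served by a trigram index rather than a plain GiST text index; a sketch using the pg_trgm extension (index and table names hypothetical):

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- A trigram GIN index supports LIKE '%EAT%'-style searches.
CREATE INDEX tst_payload_trgm_idx
  ON t USING gin ("TST_PAYLOAD" gin_trgm_ops);
```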
I would appreciate some advice from the experts on this list about the best
backup strategy for my database.
The setup:
Size: might be about 200 GB
The server uses a Tivoli backup client with daily backups.
At the moment there are pg_dumps for each database on the server on a
daily, weekly and month
Thanks everybody for your valuable comments.
I can specify how many versions of the files should be kept on Tivoli.
The database will regularly get new data and there is a continuous process
of data cleaning. It is a database mainly for research purposes and a few
researchers are using it.
I wil
I am running Python scripts to read tag-formatted files and put the data
into tables.
Sometimes a script (I am running several of them in parallel on a server)
just hangs. That happened again now, and when I checked I saw this:
SELECT pg_stat_get_backend_pid(s.backendid) AS procpid,
pg_stat_get_ba
Apologies for the long email.
I would appreciate some advice designing a database which
can become very complicated. The purpose of the database is to be a
source for bibliometric research.
Some background: I work with various sources:
* The largest source consists of tagged delimited text fil
I have used this method successfully on another table but this one is not
working.
I have a table of nearly 800 million records with some duplicates in.
Here is an example:
select rart_id, r9, ra, ry, rw, rv, rp, rs, ri
from isi.rcited_ref
where rart_id = 'A1986D733500019';
renders a result of
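A common way to surface the duplicates is to group on all of the listed columns; a sketch using the columns from the query above:

```sql
-- Rows whose full field list occurs more than once are duplicates.
SELECT rart_id, r9, ra, ry, rw, rv, rp, rs, ri, count(*) AS copies
FROM isi.rcited_ref
GROUP BY rart_id, r9, ra, ry, rw, rv, rp, rs, ri
HAVING count(*) > 1;
```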
When I query table a I get 18 rows.
The same query on table b results in 28 rows.
Both tables have the same structure.
When I export the results to csv-files and do a diff it confirms that all
18 rows from a are also in b. Table b has 10 new rows.
When I combine these queries and use "is (not) d
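Set operations express that comparison inside the database; a sketch assuming both queries select the same column list in the same order:

```sql
-- Rows in b that are not in a (here, presumably the 10 new rows).
SELECT * FROM b
EXCEPT
SELECT * FROM a;
```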
On 28 February 2017 at 17:06, Adrian Klaver
wrote:
>
> I have not worked through all this but at first glance I suspect:
>
> select distinct b.* from b ...
>
> is distinct from ...
>
> constitutes a double negative.
>
> What happens if you eliminate the first distinct?
>
>
>
> Thanks Adrian,
The
I have no idea what is causing this message in the logs (PostgreSQL
9.6+177.pgdg80+1 on Debian):
2017-03-02 10:20:25 SAST [5196-1] [unknown] postgres postgres@template1
ERROR: column "waiting" does not exist at character 217
2017-03-02 10:20:25 SAST [5196-2] [unknown] postgres postgres@template1
Thanks Charles and Neslişah.
Charles Clavadetscher wrote:
The name of the column in pg_stat_activity has changed. I assume it comes
from there?
On 2 March 2017 at 10:54, Neslisah Demirci
wrote:
> Your monitoring program is trying to use pg_stat_activity's old version.
> In 9.6 pg_stat_act
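For reference, the rewrite the quoted advice points at: in 9.6 the boolean waiting column of pg_stat_activity was replaced by wait_event_type and wait_event, so a monitoring query can treat a non-null wait_event as the old waiting = true:

```sql
-- Pre-9.6:  SELECT pid, query FROM pg_stat_activity WHERE waiting;
-- 9.6+: a non-null wait_event means the backend is waiting.
SELECT pid, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE wait_event IS NOT NULL;
```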
Thanks (again Adrian) and Tom.
The situation was that I had a table with 731 million records which I wanted
to copy into a partitioned one where there was a unique constraint on the
fields used in my query.
The "backup" table was the single one.
While inserting into the partitioned table from the
While restoring a dump from our development server (768 GB RAM) to the
production server (PG 9.6.3 on Debian Stretch with 128 GB RAM), the
refreshing of a Materialized View fails like this:
local] js@wos=# REFRESH MATERIALIZED VIEW wos_2017_1.citation_window_mv ;
server closed the connection unexpecte
On 25 August 2017 at 13:48, Tom Lane wrote:
> How complex is "complex"? I can think of two likely scenarios:
> 1. You've stumbled across some kind of memory-leak bug in Postgres.
> 2. The query's just using too much memory. In this connection, it's
> not good that you've got
>> work_mem = 2GB
>
The table definition:
CREATE TABLE isi.isi_l1_publication
(
article_id character varying(15),
journal_id character varying(7),
volume character varying(10),
issue character varying(10),
publication_year character varying(4),
publication_date character varying(20),
id bigint NOT NULL
Sorry for wasting your time. I only now saw the missing period between A
and publication_year, which is causing the problem and explains the
'strange' result.
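For anyone hitting the same 'strange' result: a bare identifier after an expression in the select list is read as a column alias, which is why the typo parses at all; a sketch using the table from the definition above:

```sql
-- Intended: qualify the column with the table alias.
SELECT A.publication_year FROM isi.isi_l1_publication A;

-- Typo: "A publication_year" selects the whole row value A under
-- the alias publication_year, which still parses without error.
SELECT A publication_year FROM isi.isi_l1_publication A;
```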
Regards
Johann
On 3 October 2013 10:08, Johann Spies wrote:
> The table definition:
>
> CREATE TABLE isi
I have a record with this in the 'ti' field:
On type A and type B uncertainties and its propagation without
derivatives: a contribution to incorporate contemporary metrology to
Physics' laboratories in higher education
And there is a tsv-field which is updated by the following trigger function:
'similarity' be a replacement? I think
not.
Regards
Johann
--
Johann Spies    Telefoon: 021-808 4699
Databestuurder / Data manager
Sentrum vir Navorsing oor Evaluasie, Wetenskap en Tegnologie
Centre for Research on Evaluation, Science and Technology
U
ity)'
Result
"Kerkvereniging en Kerkreg: Geskiedenis, beginsel en
praktyk.(Church unity and church polity: History, principle and
practice.)"
Why is 'polity' not highlighted?
Regards
Johann
t lexeme:
Thanks for the explanation. I am working with a multi-language database
and that was the reason for using the 'simple' configuration.
I have asked, in an earlier message on this list, advice on how to
handle full text searches in a multi-language database, but got no
reacti
rently have
inconsistent language indications and some of them none at all.
Thanks for responding.
Regards
Johann
er option.
Regards
Johann
ntation of any PostgreSQL database
> from within Pgadmin, through LaTeX.
I did not know about dot2tex. That opens a new door for using graphviz
with LaTeX for me. Thanks.
Postgresql-autodoc also generates a .dot file.
I would like to test the sqlalchemy route also.
Regards
Johann
On Thu, Aug 16, 2012 at 04:04:48PM +0200, Wolfgang Keller wrote:
> > I could not get the script sqlalchemy_schemadisplay3.py to work with
> > sqlalchemy 0.7.8-1 (on Debian).
>
> Have you asked on the SQLalchemy mailing list?
No.
Thanks for the link.
Regards
Johann