Dear guys,
how can I add support for more character sets to PostgreSQL 9.6?
I also need to store some information in Cyrillic, for Russian users.
Can multiple character sets coexist in the same database?
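For reference, a sketch of what I believe is involved (an assumption on my part; the database name is an example): a UTF8-encoded database can store Latin and Cyrillic side by side.
-- check the encoding of the current database
SELECT pg_encoding_to_char(encoding) FROM pg_database WHERE datname = current_database();
-- a new database created with UTF8 encoding can hold both alphabets
CREATE DATABASE appdb WITH ENCODING 'UTF8' TEMPLATE template0;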
Thank you!
F
scan in the master table is always
sequential, descending into the partitions.
Thank you again,
F
From: Alban Hertroys [haram...@gmail.com]
Sent: Wednesday, 20 September 2017 17:50
To: Job
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] In
Hi guys,
with PostgreSQL 9.6.1 we need to insert, from a certain SELECT query, some
millions of records into a partitioned table.
The table is partitioned by day.
The data we import can often be mixed between two different days.
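For context, a sketch of the layout I mean (9.6 inheritance-style partitioning; table and column names are examples). Inserting straight into the right child skips the per-row trigger routing on the parent:
CREATE TABLE logs (ts timestamptz, msg text);
CREATE TABLE logs_2017_09_20 (CHECK (ts >= '2017-09-20' AND ts < '2017-09-21')) INHERITS (logs);
-- route each day's slice directly to its partition:
INSERT INTO logs_2017_09_20
SELECT ts, msg FROM staging WHERE ts >= '2017-09-20' AND ts < '2017-09-21';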
We noticed that if we import directly into the global table it
Dear Alban,
thank you very much for your reply, first of all.
>> On 19 Sep 2017, at 15:47, Job wrote:
>>
>> Hi guys,
>>
>> we need to insert from one table into another (PostgreSQL 9.6.1) a large amount
>> of data (about 10-20 million rows) without locking
Hi guys,
we need to insert from one table into another (PostgreSQL 9.6.1) a large amount of
data (about 10-20 million rows) without locking the destination table.
Pg_bulkload is the fastest way, but it locks the table.
Are there other ways?
The classic "COPY FROM"?
Thank you!
F
Hi,
in a query with two or more conditions combined with "OR", for example:
"where a = 2 or b < 3"
could a multi-column index on (a, b) be useful to speed up the query, even though
the two conditions are combined with "OR" rather than "AND"?
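A sketch of the alternative I keep reading about (table and index names are examples): with OR, the planner generally cannot use a single composite index for both branches, but it can combine two single-column indexes with a BitmapOr:
CREATE INDEX t_a_idx ON t (a);
CREATE INDEX t_b_idx ON t (b);
EXPLAIN SELECT * FROM t WHERE a = 2 OR b < 3;  -- the plan can show a BitmapOr of the two indexes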
Thank you!
F
Hi,
I have a table with about 400 million rows and I need to run some
aggregate functions for reporting.
I noticed that query performance is degrading, even though indexes are
present.
The query is simple (I give an example; my table is in Italian):
select a,sum(b) from table whe
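A hedged sketch of the usual workaround for repeated aggregates at this scale (table and column names are examples): precompute the rollup once, and let reports read the small table:
CREATE TABLE report_summary AS
SELECT a, sum(b) AS total_b FROM big_table GROUP BY a;
-- refresh periodically (e.g. from cron):
TRUNCATE report_summary;
INSERT INTO report_summary SELECT a, sum(b) FROM big_table GROUP BY a;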
Hi Stephen,
Thank you for your excellent advice!
>If this is really what you're mostly doing, having constraint exclusion and an
>index on 'profile' would probably be enough, if you insist on continuing to
>have the table partitioned by day (which I continue to argue is a bad idea-
>based on
Hi Sushant,
I think the problem is that we are missing check constraints.
We will implement them ASAP.
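A sketch of what I mean (partition and column names are examples): with a CHECK constraint on each child, constraint exclusion lets the planner skip partitions that cannot match:
ALTER TABLE logs_2017_02_16 ADD CONSTRAINT logs_2017_02_16_ts_check
  CHECK ("timestamp" >= '2017-02-16' AND "timestamp" < '2017-02-17');
SET constraint_exclusion = partition;  -- the default setting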
For the moment, thank you to everybody for the excellent support!
/F
7;::text)
  Heap Blocks: lossy=225509
  ->  Bitmap Index Scan on webtraffic_archive_day_2017_02_16_timestamp_date_wbidx (cost=0.00..120.00 rows=1 width=0) (actual time=29.277..29.277 rows=2255360 loops=1)
        Index Cond: (("timestamp")::d
Hi guys,
we have a test machine with PostgreSQL 9.6.1 and about 800,000,000 records in a
table.
The table is partitioned by day, with indexes on the partitions.
Even a simple query (for example, searching for logs that occurred on a specific
day) is immediate in the partition table (table_2017_02_1
Hi Raymond,
Thank you for your appreciated feedback.
Here is the original message:
A really strange problem, bordering on the paranormal, is occurring during a
server migration.
We have a table with some millions of records, working perfectly on other
PostgreSQL 9.6.1 machines:
I created a copy of the table, with indexes, and it worked perfectly.
Then I dropped the original table and recreated it from a copy of the newest one.
Same problems: slowness during searches and indexes not used.
I restarted the PostgreSQL 9.6.1 engine and now the index is working perfectly!
Maybe a cache or somethin
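A guess at what may have been going on (purely an assumption on my part): a freshly created table has no planner statistics until it is analyzed, so indexes can be ignored until autovacuum or a manual run catches up:
ANALYZE webrecord;  -- refresh planner statistics for the recreated table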
Hi guys,
A really strange problem, bordering on the paranormal, is occurring during a
server migration.
We have a table with some millions of records, working perfectly on other
PostgreSQL 9.6.1 machines:
Table "public.webrecord"
Column | Type
Hi Stephen.
>>The mat view takes longer and longer to update because it runs the full
>>query. What you really want to do is have a side-table that you update
>>regularly with appropriate SQL to issue UPDATE statements for just the
>>current day (or whatever).
If correct, I leave only the last da
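A sketch of the side-table approach as I understand it (all names are examples): keep a small per-day summary and update only the current day, instead of re-running the full query:
CREATE TABLE daily_stats (day date PRIMARY KEY, hits bigint);
-- run once per day (or more often):
INSERT INTO daily_stats
SELECT ts::date, count(*) FROM logs WHERE ts >= current_date GROUP BY 1
ON CONFLICT (day) DO UPDATE SET hits = EXCLUDED.hits;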
Hello,
we use a materialized view to aggregate data from a very big table containing
logs.
The source table is partitioned, one table per day.
Since the refresh time of the materialized view seems to be growing a lot, we
would like to know whether it is possible to make a "partitioned materializ
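A hedged sketch of one way to approximate this (all names are examples): one small materialized view per day, combined by a plain view, so that only the current day ever needs refreshing:
CREATE MATERIALIZED VIEW stats_2017_01_10 AS
SELECT dominio, count(*) AS hits FROM logs_2017_01_10 GROUP BY dominio;
CREATE VIEW stats_all AS
SELECT * FROM stats_2017_01_09 UNION ALL SELECT * FROM stats_2017_01_10;
REFRESH MATERIALIZED VIEW stats_2017_01_10;  -- cheap: touches one day only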
Hi guys,
I am making some tests with a massive number of "select" queries (only for
reading data) on a view and on a materialized view.
We use PostgreSQL 9.6.1 on a 64-bit server.
For "select" queries only, which one is faster and less expensive in terms of
resource cost?
The view or the materialized view
Hello,
on this table:
Table "public.gruorari_tmp"
Column | Type |Modifiers
---++--
id| n
Hi guys,
Thank you very much.
Thanks to your help I solved the problem.
As Adrian said:
>>Caveats, it is morning here and coffee is still brewing, but I am not
>>following. The left join limits grulist.stato to NULL, 1, 2. Your first
>>condition catches the 1 value. Should not the second 'and'
Hi, here it is, excuse me for the delay:
select * from webrecord
left join grucategorie on grucategorie.codcategoria=webrecord.categoria and
grucategorie.codgruppo='f50132'
left join grulist on grulist.nome=webrecord.dominio and
grulist.codgruppo='f50132' and grulist.stato in (1,2)
left join lis
I would also like to add this:
The explain analyze shows that the index on that numeric field *is not* being
used.
I also tried setting enable_seqscan to off, but that index is still not used.
Maybe this is the problem?
Thank you again!
/F
Hi guys,
Here is an update: I think I have found the statement that brings
PostgreSQL 9.6.1 under heavy use of resources (CPU) in a massive benchmark.
I simply tried removing one very simple condition:
and (grulist.stato is null or grulist.stato!=2)
The grulist.stato field is numeri
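A hedged sketch of one thing that might help (assuming stato is the integer column in that filter; the index name is an example): a partial index matching the exact predicate, so the planner can use it instead of scanning:
CREATE INDEX grulist_stato_partial_idx ON grulist (nome)
  WHERE stato IS NULL OR stato <> 2;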
Hi guys,
>>FWIW you still haven't explained how the upgrade was performed. That might be
>>a very important piece of information, because the 9.4 cluster might have
>>hint bits set and/or the data may be mostly frozen, but the
>>9.6 cluster may not have that yet, resulting in higher CPU usage.
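If hint bits and freezing are indeed the issue, a sketch of what could settle them once, right after the upgrade (the table name is an example):
VACUUM (FREEZE, ANALYZE) webtraffic_archive;  -- sets hint bits, freezes tuples, refreshes stats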
Hi guys,
First of all, excuse me: I really did not explain the problem well, sorry...
>>Are you being serious? You're complaining about a "big slowdown" for a query
>>that goes from 1.5ms to 4ms?
>>What is the actual problem you're trying to solve? Because I don't see one in
>>the above.
Single qu
Hi guys,
your replies are really much appreciated.
>> You might want to include the query plans for each server
We use a function; the explain analyze is quite similar:
POSTGRESQL 8.4.22:
explain analyze select 'record.com' where 'record.com' like '%.%' and
function_cloud_view_orari('53', '192.1
Hello guys,
a very strange thing: after upgrading from 8.4.22 to 9.6.1 I noticed, under
heavy benchmarks, a real slowdown of PostgreSQL 9.6.1, with the machine
really "out of breath".
By putting PostgreSQL 8.4.22 back, everything returns to working fine.
After three days of investigation, I come a
Hi,
I am trying "materialized views" in PostgreSQL 9.6.1.
Regarding Andy Colson's suggestion:
>>I do very similar thing, log all my webstats to PG, but querying millions of
>>rows is always going to be slow. I use a summary table.
They seem to work fine.
One question: the materialized view is s
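For the record, a sketch of the pattern I am testing (all names are examples); REFRESH ... CONCURRENTLY needs a unique index, but it does not block readers:
CREATE MATERIALIZED VIEW webstats_daily AS
SELECT ts::date AS day, count(*) AS hits FROM weblogs GROUP BY 1;
CREATE UNIQUE INDEX ON webstats_daily (day);
REFRESH MATERIALIZED VIEW CONCURRENTLY webstats_daily;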
Hello guys, and a very good new year to everybody!
We are now approaching some queries and statistics on a very big table (about 180
million records).
The table is partitioned by day (about 3 GB of data for every partition/day).
We use PostgreSQL 9.6.1.
I am experiencing a quite significant slowdown
Hi Alban,
I was wrong: I have only one column, of type tsrange.
Which index can I create to make this statement fast:
... AND timestamp <@ '[2016-12-27 00:00:00, 2016-12-31 00:00:00)'::tsrange
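From what I gather (assuming the column really is of type tsrange; table and index names are examples), a GiST index on the range column supports the <@ "contained by" operator used here:
CREATE INDEX webrecord_ts_gist_idx ON webrecord USING gist ("timestamp");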
Thank you again!
/F
Dear Alban,
Regarding:
>>... AND timestamp BETWEEN '2016-12-27 00:00:00'::timestamp with time zone AND
>>'2016-12-30 23:59:59'::timestamp with time zone ...
I think it is a very good approach, and I would like to try it.
My table has two different fields for the "starting" and "ending" timestamps
>>And, basically, if you need help with some queries, you could try
>>posting them whole, even redacted, along with the table defs; this way
>>people can see the problem and not invent one based on a partial
>>description
Thank you very much, very kind of you.
The index applied on the timestamp fiel
on behalf of Job [j...@colliniconsulting.it]
Sent: Friday, 30 December 2016 10:55
To: David G. Johnston
Cc: pgsql-general@postgresql.org
Subject: RE: [GENERAL] Special index for "like"-based query
>>GIST
>>https://www.postgresql.org/docs/9.6/static/pgtrgm.html
>>https:/
e can mix date and time parameters?
Thank you again!
/F
From: David G. Johnston [david.g.johns...@gmail.com]
Sent: Friday, 30 December 2016 0:33
To: Job
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Special index for "like"-based query
On Thu,
Hello,
in PostgreSQL 9.6 we have a query running on a very large table based, in some
cases, on a LIKE condition:
... WHERE FIELD='CODE' OR FIELD LIKE 'CODE_%'
Which type of index can I create to speed up the search when the LIKE case
happens?
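Following the pg_trgm pointer from this thread, a sketch (table and index names are examples):
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX field_trgm_idx ON bigtable USING gin (field gin_trgm_ops);
-- alternatively, for left-anchored patterns only, a btree with text_pattern_ops:
CREATE INDEX field_prefix_idx ON bigtable (field text_pattern_ops);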
Thank you!
/F
Hello,
we are planning to store historical data in a PostgreSQL 9.6 table.
We see from the PostgreSQL limits that it can handle "big data".
In fact, the limit for a single table is 32 TB.
We need to archive this data to generate reports and analysis views.
Has anyone experienced PostgreSQL 9.6 with D
!
Francesco
From: Tom Lane [t...@sss.pgh.pa.us]
Sent: Friday, 4 November 2016 21:24
To: Job
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] CachedPlan logs until full disk
Job writes:
> it is the second time (in two weeks) that I have a very strange
Hello,
we need to replicate hundreds of thousands of rows (for reporting) between
PostgreSQL database nodes that are in different locations.
Currently, we use Rubyrep with PostgreSQL 8.4.22.
It works fine, but it is very slow with a massive number of rows.
With PostgreSQL 9.x, are there some way
Hi Adrian,
Thank you for your fast reply!
>FYI, Postgres 8.4 is over two years past EOL.
Yes, I am aware; we will migrate next year, but for application compatibility
reasons we have to remain here for the moment...
>What are the steps in the load/delete cycle?
We need to load, with pg_bulkloa
Hello,
I use PostgreSQL 8.4.22 and, twice a day, I import logs with pg_bulkload and
I delete old data. I move some millions of records in a day.
I noticed that autovacuum alone seems unable to free the unused space.
I need a periodic VACUUM FULL, but this operation takes many hours.
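A hedged sketch of the pattern often used instead of VACUUM FULL for load/delete cycles (table names are examples): keep each day in its own child table, so old data is removed with DROP TABLE, which returns space immediately and needs no vacuum:
CREATE TABLE logs_2016_11_04 () INHERITS (logs);
-- load with pg_bulkload into logs_2016_11_04; later, instead of DELETE:
DROP TABLE logs_2016_10_28;  -- dropping an old day is immediate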
Do y
Hello guys,
it is the second time (in two weeks) that I have a very strange PostgreSQL
problem in an 8.4.22 installation (still 32-bit).
The logfile grows (in a few hours) until it fills the whole disk.
I see an endless series of these messages:
CachedPlan: 1024 total in 1 blocks; 640 free (0 chunks)
Hello,
With a heavy query, when the number of result rows rises over 600k, the query
fails with an out-of-memory error.
Here is the explain analyze:
CTE Scan on lista (cost=25066.66..47721.23 rows=3678 width=260)
  CTE lista
    ->  Unique (cost=24956.32..25066.66 rows=3678 width=512)
          ->  Sort (
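If the memory is exhausted on the client side (an assumption on my part; the cursor and table names are examples), fetching through a cursor keeps only one batch of rows in memory at a time:
BEGIN;
DECLARE lista_cur CURSOR FOR SELECT * FROM lista_source;
FETCH 10000 FROM lista_cur;  -- repeat until no rows are returned
COMMIT;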
Hello,
sometimes, in the pg_bulkload log, I see an "out of memory" error.
We use PostgreSQL 8.4.8 with pg_bulkload 3.1, but we have at least 2 GB of free
memory and the machine does not use swap.
We notice that it happens only sometimes; in other runs it works fine, with
a higher number
Excellent Scott!
Thank you!
Francesco
From: Scott Marlowe [scott.marl...@gmail.com]
Sent: Tuesday, 21 June 2016 2:06
To: Job
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Vacuum full: alternatives?
On Mon, Jun 20, 2016 at 3:18 AM, Job wrote
hould be slower, but space marked as free should be
reused again?
Thank you!
Francesco
From: Jeff Janes [jeff.ja...@gmail.com]
Sent: Monday, 20 June 2016 17:51
To: Martín Marqués
Cc: Melvin Davidson; Rakesh Kumar; Job; pgsql-general@postgresql.org
Subject:
From: pgsql-general-ow...@postgresql.org [pgsql-general-ow...@postgresql.org] on
behalf of Andreas Kretschmer [andr...@a-kretschmer.de]
Sent: Monday, 20 June 2016 11:37
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Vacuum full: alternatives?
On 20.06.2016 at 11:18, Job wrote:
>
[rakeshkumar46...@gmail.com]
Sent: Monday, 20 June 2016 11:34
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Vacuum full: alternatives?
Any reason why you need the space back? What is wrong with the space remaining
constant at 4 GB?
From: Job
To: "pgsql-ge
Hello,
we have a table with heavy traffic of pg_bulkload loads and record deletions.
The size goes, in a single day, for example from 1 GB to 4 GB and then back to 1 GB.
We have serious problems with size, and the only way to reclaim free space is
issuing a VACUUM FULL.
But the operation is very slow, some
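For what it's worth, a sketch of the classic copy-and-swap alternative (names are examples; it still needs a short exclusive lock for the swap, but the slow copying happens before it):
BEGIN;
CREATE TABLE mytable_new (LIKE mytable INCLUDING DEFAULTS INCLUDING CONSTRAINTS INCLUDING INDEXES);
INSERT INTO mytable_new SELECT * FROM mytable;
DROP TABLE mytable;
ALTER TABLE mytable_new RENAME TO mytable;
COMMIT;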
Hi Albe and thank you, first of all.
Is there a way to disable WAL logging only for a specific table, permanently?
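From what I found in the docs (table and column names are examples): a table can be created UNLOGGED, and since 9.5 an existing table can be switched:
CREATE UNLOGGED TABLE staging_logs (ts timestamptz, msg text);
ALTER TABLE staging_logs SET LOGGED;    -- turn WAL logging back on (9.5+)
ALTER TABLE staging_logs SET UNLOGGED;  -- or off again
-- caveat: unlogged tables are truncated after a crash and are not replicated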
Thank you again!
Francesco
From: Albe Laurenz [laurenz.a...@wien.gv.at]
Sent: Friday, 17 June 2016 13:48
To: Job; pgsql-general
Hello,
is there a way in PostgreSQL 9.5 to temporarily disable WAL archiving, to speed
up pg_bulkload without restarting the database engine?
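For reference, a sketch of the settings involved (9.5 names; values are examples). As far as I know, both take effect only after a restart, which is exactly the constraint here:
# postgresql.conf
wal_level = minimal
archive_mode = off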
Thank you,
Francesco
Hello,
I have some problems compiling pg_bulkload 3.1.8 on CentOS 5 with
PostgreSQL 9.5.
If I use a previous version of PostgreSQL, it compiles and works.
Thank you if you can help me.
Francesco
From: Arthur Silva [arthur...@gmail.com]
Sent: Friday, 15 May 2015 17:26
To: Job
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Index on integer or on string field
You should probably experiment with a btree-gin index on those.
On 15/05/2015 12:22, "Job"
mailto:j...@
Hello,
I have a table of about 10 million records, with an index on a string
field.
It is currently alphabetical; since queries run at about 100-200 per second, I was
looking for a better way to improve performance and reduce the workload.
The unique values of that field are about 50 (categor
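Following the btree-gin suggestion above (9.1+ syntax; table and index names are examples), since GIN copes well with low-cardinality columns such as ~50 categories:
CREATE EXTENSION IF NOT EXISTS btree_gin;
CREATE INDEX items_category_gin_idx ON items USING gin (category);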
Hello!
In order to speed up the loading of a table of about 4 fields and 10 million
records, are there some parameters to optimize?
On one machine it takes about 15 minutes; on other machines, about one hour.
Is parallel mode better than direct mode?
Any other interesting things?
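Not pg_bulkload-specific, but a sketch of the server-side settings usually checked for bulk loads (values are examples to adapt):
# postgresql.conf
maintenance_work_mem = 512MB   # faster index builds after the load
checkpoint_segments = 32       # fewer checkpoints during the load (pre-9.5 setting)
synchronous_commit = off       # cheaper commits; acceptable for re-runnable loads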
Thank you in advance fo
Hello, thank you first of all for your wonderful help!
Tomas, regarding:
>There are ways to make the writes less frequent, both at the database
>and OS level. We don't know what's your PostgreSQL config, but making
>the checkpoints less frequent and tuning the kernel/mount options may
>help a lot.
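A sketch of the knobs I understand are meant here (8.4-era names; values are examples only):
# postgresql.conf: make checkpoints less frequent and smoother
checkpoint_segments = 32
checkpoint_timeout = 30min
checkpoint_completion_target = 0.9
On the OS side, a mount option often suggested to reduce SSD writes (in /etc/fstab) is noatime.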
Dear Postgresql mailing list,
we use PostgreSQL 8.4.x on our Linux firewall distribution.
Currently, we are moving from standard SATA disks to mSATA SSDs, and
we noticed that the DB, which uses lots of indexes, writes a lot.
Within a few months, two test machines got broken SSDs, and we are st
Hello,
I have just installed (on a CentOS 6.5 x64 machine) the latest stable
version of PostgreSQL 9.
The application I use needs lots of static DB connections: in version 9, how
many connections can be declared in postgresql.conf?
I searched for a maximum number, but I noticed it is not specified.
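For reference, a sketch of the relevant setting (the value is an example; I found no hard documented ceiling, but each connection costs memory, so a pooler is usually advised for high counts):
# postgresql.conf
max_connections = 200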
Tha
What is the best replication option out there today for PostgreSQL?
Have been trying to get rserv working and that hasn't been the
easiest.
Would like to run 7.4.1.
Any suggestions?
-Steve