> I tried the copy command, and that indeed works quite brilliantly:
> Inserted 2400 rows in 22004 milliseconds, 1090710.7798582076 rows per
> second
>
> That's faster than Oracle. But the interface is quite awkward for normal
> database work, I have to say. I will try to make this work in the tool.
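For reference, a minimal sketch of the kind of COPY invocation being discussed (file path and table name are hypothetical):

-- COPY loads rows in bulk, bypassing per-row INSERT overhead;
-- from a client, psql's \copy variant reads the file client-side instead
COPY huge_table FROM '/tmp/data.csv' WITH (FORMAT csv, HEADER true);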
2016-06-09 15:31 GMT+02:00 Merlin Moncure :
> On Wed, Jun 8, 2016 at 1:04 AM, Tom Lane wrote:
> > Michael Paquier writes:
> >> On Tue, Jun 7, 2016 at 10:03 PM, Josh Berkus wrote:
> >>> On 06/07/2016 08:42 AM, Nicolas Paris wrote:
> >>>> Will this
2016-06-07 15:03 GMT+02:00 Josh Berkus :
> On 06/07/2016 08:42 AM, Nicolas Paris wrote:
> > You have to do something different. Using multiple columns and/or
> > multiple rows might be workable.
>
Getting a single document from multiple rows coming from PostgreSQL
2016-06-07 14:39 GMT+02:00 David G. Johnston :
> On Tue, Jun 7, 2016 at 8:36 AM, Nicolas Paris wrote:
>
>> 2016-06-07 14:31 GMT+02:00 David G. Johnston
>> :
>>
>>> On Tue, Jun 7, 2016 at 7:44 AM, Nicolas Paris
>>> wrote:
>>>
>>>
2016-06-07 14:31 GMT+02:00 David G. Johnston :
> On Tue, Jun 7, 2016 at 7:44 AM, Nicolas Paris wrote:
>
>> Hello,
>>
>> I run a query transforming huge tables to a json document based on a period.
>> It works great for a modest period (little dataset).
>>
Hello,
I run a query transforming huge tables to a json document based on a period.
It works great for a modest period (little dataset).
However, when increasing the period (huge dataset) I get this error:
SQL ERROR[54000]
ERROR: array size exceeds the maximum allowed (1073741823)
Thanks in advance
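The 1073741823 in that message is PostgreSQL's roughly 1 GB per-allocation cap, which a single giant aggregate runs into. Following the suggestion above to use multiple rows, a sketch (table and column names are hypothetical) that emits one JSON document per row instead of aggregating everything into one value:

-- one document per row stays under the ~1 GB per-value limit
SELECT row_to_json(t)
FROM huge_table t
WHERE period_start >= '2016-01-01' AND period_start < '2016-02-01';

The client then concatenates or streams the documents, rather than asking the server to build one giant value.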
Hello Mark,
As far as I know, MongoDB is able to get better write performance
thanks to scaling (sharding is easy to manage). PostgreSQL cannot (it is
not designed for that; it is complicated).
Why compare PostgreSQL and MongoDB performance on a standalone
instance, since MongoDB is not really designed for that
Filter: ((concept_path)::text ~~
'\\i2b2\\cim10\\A00-B99\\%'::text)
Rows Removed by Filter: 186413
Planning time: 1.940 ms
Execution time: 22695.913 ms
What I would like is for the planner to always hit of_idx_modifier.
Thanks !
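For a left-anchored LIKE pattern like the one in that filter, one option (a sketch; the table name is a guess and the cast mirrors the plan) is a pattern-ops index, which the planner can use for prefix matches regardless of locale:

-- usable for LIKE 'prefix%' filters such as '\\i2b2\\cim10\\A00-B99\\%'
CREATE INDEX of_concept_path_idx
    ON observation_fact ((concept_path::text) text_pattern_ops);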
2015-07-09 22:49 GMT+02:00 Guilla
Hello,
My 9.4 database is used as a data warehouse. I can't change the queries
generated.
first index : INDEX COL (A,B,C,D,E)
In case of a query based on COL A, the query planner sometimes goes to a seq
scan instead of using the first composite index.
The solution is to add a second index (redundant
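For what it's worth, a composite index can in principle serve a query on its leading column, so the seq scan is a cost-based choice; the redundant-index workaround mentioned here would look roughly like this (hypothetical names):

CREATE INDEX t_abcde_idx ON t (a, b, c, d, e);  -- the existing composite index
CREATE INDEX t_a_idx ON t (a);                  -- smaller, so cheaper for queries on A alone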
Hello,
I wonder how pg_stat_all_indexes works.
When I run an EXPLAIN, some indexes are not used, but
pg_stat_all_indexes.idx_scan is incremented for those indexes.
Does this mean idx_scan is incremented each time the planner checks whether
an index could be used, even when it won't use it?
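A monitoring query of the kind in question (the view and its columns are standard; the table name is hypothetical):

SELECT indexrelname, idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_all_indexes
WHERE relname = 'huge_table';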
" when a problem occurs or a
modification of the table. I've tested it: it loads 20 million rows in 5
minutes (not counting the time for reindexing and the time to retrieve the data)
2015-03-10 9:31 GMT+01:00 Jim Nasby :
> On 3/9/15 8:17 AM, Nicolas Paris wrote:
>
>> (sorry for top-posting, gmail does not hel
huge_table
WHERE BoolColumnX IS TRUE
Then this single index is used 800 times (for each bool column) and saves
space, as it doesn't index NULL values and doesn't replicate subsets. Moreover,
the huge indexes are always loaded in cache memory.
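A sketch of the partial index shape being described (all names hypothetical):

-- indexes only the rows where the flag is TRUE; FALSE/NULL rows cost nothing
CREATE INDEX huge_table_bool1_idx ON huge_table (id)
    WHERE bool_column_1 IS TRUE;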
Nicolas PARIS
2015-03-06 21:26 GMT+01:00 Nicolas
Thanks.
Then,
Is it a good idea to run multiple instances of "CREATE INDEX ON tableX" at
the same time? Or is it better to do it sequentially?
In other words: can one seq scan on a table benefit multiple CREATE INDEX
statements on that table?
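A sketch of the parallel variant (hypothetical names; each statement runs in its own session). With synchronize_seqscans on (the default), concurrent scans of the same table may share their reads, though whether a given index build participates can depend on the version:

-- session 1:
CREATE INDEX tablex_a_idx ON tablex (a);
-- session 2, started at the same time:
CREATE INDEX tablex_b_idx ON tablex (b);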
Nicolas PARIS
2015-03-07 12:56 GMT+01:00
command does not work with CREATE INDEX.
Thanks in advance
Nicolas PARIS
thing that is really great with PostgreSQL is the transactional DROP TABLE
CASCADE, which allows restoring all the stuff (indexes, views) on a rollback
if a problem appears during loading.
I hope using one transaction to drop/load many tables is not a performance
issue?
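The pattern under discussion, as a sketch (table, columns, and file path are hypothetical):

BEGIN;
DROP TABLE huge_table CASCADE;        -- dependent indexes and views go with it
CREATE TABLE huge_table (id int, payload text);
COPY huge_table FROM '/tmp/data.csv' WITH (FORMAT csv);
-- recreate indexes and views here; any error before COMMIT rolls everything back
COMMIT;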
Nicolas PARIS
2015-03-06 11:25 GMT+01:00
VS 1).
> For the views that are too slow, look at what the expensive part of the
> view is and materialize *only that*.
It would be great if I could, but everything will be automatic, so it will be
difficult to apply such rules, which demand human analysis and manual
database modification, for one subset
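For the record, the suggested pattern would look roughly like this (hypothetical schema): materialize only the expensive join/aggregate, and keep the cheap part a plain view on top of it.

CREATE MATERIALIZED VIEW expensive_part AS
SELECT f.dim_id, sum(f.amount) AS total
FROM fact f
JOIN dim d ON d.id = f.dim_id
GROUP BY f.dim_id;

CREATE VIEW view1 AS          -- the cheap part stays a simple view
SELECT dim_id, total
FROM expensive_part
WHERE total > 0;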
to update a boolean for each tuple. Then I need to truncate/bulk load
all the data each time I add a new view. The other problem is that a table
with a dynamic number of columns was a bit tricky to implement in an ETL tool
such as Talend, but the benefits are, I hope, great.
Nicolas PARIS
2015-03-06 2:40 GMT+01:00 Jim Nasb
e table
is actually not a good idea. Too bad.
Any idea how to solve that partial view limitation?
Nicolas PARIS
2015-02-20 17:19 GMT+01:00 Nicolas Paris :
> It appears that in the predicate clause (WHERE id IN (foo)), foo cannot
> depend on another table (join or otherwise). It must be a list. I anders
IN(...) keys number.
But isn't creating a partial index with a predicate a billion characters long
an issue? It looks a little dirty, doesn't it?
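The kind of partial index in question, sketched with hypothetical names; the predicate has to be a constant list, which is why it can grow so long:

CREATE INDEX view1_idx ON huge_table (field)
    WHERE id IN (1, 2, 3);  -- a literal list only; it cannot come from a join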
Thanks for all
Nicolas PARIS
2015-02-20 15:44 GMT+01:00 Matheus de Oliveira :
>
>
> On Fri, Feb 20, 2015 at 11:06 AM, Nicolas Paris
)
Does the following query use idx_huge_table_for_view1?
SELECT * FROM view1
WHERE field LIKE 'brillant idea'
In other words, will all queries on view1 use the partial index (and
never idx_huge_table)?
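One way to check rather than assume (names taken from the question above): EXPLAIN shows which index the planner actually picks for a given query on the view.

EXPLAIN
SELECT * FROM view1 WHERE field LIKE 'brillant idea';
-- the plan will show an Index Scan on idx_huge_table_for_view1 if the
-- partial index is chosen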
Nicolas PARIS
2015-02-20 13:36 GMT+01:00 Matheus de Oliveira :
>
> On F
In terms of performance, will MVs be better than simple views in my case?
Thanks in advance for the explanation
Nicolas PARIS
Hello,
I have a fact table (table and indexes are below) that will probably get
around 2 billion rows.
- Can PostgreSQL support such a table (this table is the fact table of a
datamart -> many join queries with dimension tables)?
- If yes, I would like to test (say insert 2 billion
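A sketch of such an insert test (hypothetical schema); generate_series makes it easy to produce synthetic rows, though in practice one would load in batches:

INSERT INTO fact_table (id, dim_id, amount)
SELECT g, g % 1000, random() * 100
FROM generate_series(1, 2000000000) AS g;  -- 2 billion rows in one statement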
org/docs/9.3/static/sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS>),
because these routines will run at night and need to finish quickly.
Thanks
Nicolas PARIS
2014-04-07 14:59 GMT+02:00 Graeme B. Bell :
>
> Hi again Nick.
>
> Glad it helped.
>
> Generally, I
refresh 600 times A,B,C,D,E,F
I guess 1) is faster because there are 600 accesses to the same table loaded
in memory? But does parallel access to the same table imply concurrency
issues and bad performance?
Thanks
Nicolas PARIS
2014-04-07 12:29 GMT+02:00 Graeme B. Bell :
> On 04 Apr 2014, at 18:29, Nico
Hello,
My question is about multiprocessing and materialized views.
http://www.postgresql.org/docs/9.3/static/sql-creatematerializedview.html
I (will) have something like 3600 materialized views, and I would like to
know the way to refresh them in a multithreaded way
(understand: 8 CPU cores -> 8 refresh
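A starting point, sketched in plain SQL: generate one REFRESH statement per MV from the catalog, then split the generated script across several psql sessions. Note that on 9.3 each REFRESH takes an exclusive lock on its MV; REFRESH ... CONCURRENTLY only arrives in 9.4.

-- list every materialized view with a ready-to-run REFRESH command
SELECT format('REFRESH MATERIALIZED VIEW %I.%I;', schemaname, matviewname)
FROM pg_matviews;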