Hello,
I have a huge table (100 million rows) of relations between nodes by id in
a PostgreSQL 11 server. Like this:
CREATE TABLE relations (
pid INTEGER NOT NULL,
cid INTEGER NOT NULL
);
This table stores parent-child references between nodes by id. Like:
*pid -> cid*
n1 -> n2
n
. I am sure there's a way to find all the nodes in O(n) time, with n =
the size of the result set ...
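For what it's worth, the usual way to walk such a table in time roughly proportional to the result set is a recursive CTE; a minimal sketch, assuming we start from a hypothetical root node with id 1:

```sql
-- Find every node reachable from the starting node (here: 1).
WITH RECURSIVE descendants AS (
    -- anchor: direct children of the starting node
    SELECT cid FROM relations WHERE pid = 1
    UNION
    -- step: children of the nodes found so far; UNION (not UNION ALL)
    -- discards duplicates, which also guards against cycles
    SELECT r.cid
    FROM relations r
    JOIN descendants d ON r.pid = d.cid
)
SELECT cid FROM descendants;
```

With an index on relations (pid), each recursion step is an index lookup, so the whole traversal stays close to linear in the number of rows returned.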
On Tue, Aug 20, 2019, 6:10 AM Rob Sargent wrote:
>
>
> On Aug 19, 2019, at 7:42 PM, pabloa98 wrote:
>
> Hello,
>
> I have a huge table (100 million rows) of relations
Perhaps you want to TRUNCATE TABLEs. That will mitigate any I/O impact
On Thu, Oct 17, 2019 at 3:13 PM Andrew Kerber
wrote:
> If you are decommissioning the database, why not just rm -rf the whole
> system?
>
> On Thu, Oct 17, 2019 at 4:31 PM Michael Lewis wrote:
>
>> Your plan to loop over ta
Hello,
My schema requires a counter for each combination of 2 values. Something
like:
CREATE TABLE counter(
"group" INT NOT NULL,
element INT NOT NULL,
seq_number INT NOT NULL DEFAULT 0,
PRIMARY KEY ("group", element)
);
For each entry in counter, aka for each (group, element) pair, the m
On Thu, Mar 19, 2020 at 2:50 PM Rob Sargent wrote:
>
>
> > On Mar 19, 2020, at 3:36 PM, pabloa98 wrote:
> >
> > Hello,
> >
> > My schema requires a counter for each combination of 2 values. Something
> like:
> >
> > CREATE TABLE counte
On Thu, Mar 19, 2020 at 3:17 PM Rob Sargent wrote:
>
>
> On Mar 19, 2020, at 4:13 PM, pabloa98 wrote:
>
>
>
> On Thu, Mar 19, 2020 at 2:50 PM Rob Sargent wrote:
>
>>
>>
>> > On Mar 19, 2020, at 3:36 PM, pabloa98 wrote:
>> >
>&
On Thu, Mar 19, 2020 at 5:13 PM Adrian Klaver
wrote:
> On 3/19/20 3:32 PM, pabloa98 wrote:
> >
> >
> > On Thu, Mar 19, 2020 at 3:17 PM Rob Sargent > <mailto:robjsarg...@gmail.com>> wrote:
> >
> >
> >
> >> On Mar 19, 2020, at
On Thu, Mar 19, 2020 at 6:16 PM Rob Sargent wrote:
>
>
> On Mar 19, 2020, at 6:45 PM, pabloa98 wrote:
>
>
>
> On Thu, Mar 19, 2020 at 5:13 PM Adrian Klaver
> wrote:
>
>> On 3/19/20 3:32 PM, pabloa98 wrote:
>> >
>> >
>> > On T
On Thu, Mar 19, 2020 at 9:12 PM Adrian Klaver
wrote:
> On 3/19/20 7:38 PM, Michael Lewis wrote:
> >
> >
> > On Thu, Mar 19, 2020, 5:48 PM David G. Johnston
> > mailto:david.g.johns...@gmail.com>> wrote:
> >
> > However, one other consideration with sequences: do you care that
> > PostgreS
I see.
Any suggestion? It should behave like a sequence, in the sense that
concurrent transactions will get different numbers from this alternative,
sequence-like solution.
In our case, we will need to do a call nextval('some_seq') (or similar)
from different processes no more than twice every minut
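A common sequence-like alternative in this situation, sketched against the counter table above, is a single UPDATE ... RETURNING per call (this assumes the row for the (group, element) pair already exists; the pair values are illustrative):

```sql
-- Atomically increment and read the counter for one pair.
-- The row lock taken by UPDATE serializes concurrent transactions,
-- so each caller gets a distinct value.
UPDATE counter
   SET seq_number = seq_number + 1
 WHERE "group" = 1 AND element = 42
RETURNING seq_number;
```

Unlike nextval(), this increment rolls back with the transaction, so committed rows stay gapless; the cost is that concurrent writers on the same pair serialize on that row lock.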
On Fri, Mar 20, 2020 at 5:39 AM rob stone wrote:
> Hello,
>
> On Thu, 2020-03-19 at 14:36 -0700, pabloa98 wrote:
> > Hello,
> >
> > My schema requires a counter for each combination of 2 values.
> > Something like:
> >
> > CREATE TABLE counter(
>
On Fri, Mar 20, 2020 at 10:26 AM Adrian Klaver
wrote:
> On 3/20/20 9:59 AM, Adrian Klaver wrote:
> > On 3/19/20 10:31 PM, pabloa98 wrote:
> >> I see.
> >>
> >> Any suggestion? It should behave like a sequence in the sense that
> >> concurrent tra
On Fri, Mar 20, 2020 at 3:59 PM Peter J. Holzer wrote:
> On 2020-03-19 16:48:19 -0700, David G. Johnston wrote:
> > First, it sounds like you care about there being no gaps in the records
> you end
> > up saving. If that is the case then sequences will not work for you.
>
> I think (but I would
> Nothing I saw that said int could not become bigint.
>
>
> My bad. The code cannot be a bigint. Or it could be a bigint between 1 to
:)
I thought it was not important. The code could be a number from 1 to
(so an Int will be OK) assigned in order-ish. This is because of
business
> > As to below that is going to require more thought.
> >
> Still no word on the actual requirement. As someone who believes
> consecutive numbers on digital invoices is simply a mistaken interpretation
> of the paper based system, I suspect a similar error here. But again we
> haven’t really hear
On Fri, Mar 20, 2020 at 9:04 PM John W Higgins wrote:
>
>
> On Fri, Mar 20, 2020 at 8:13 PM pabloa98 wrote:
>
>>
>> I hope I described the problem completely.
>>
>>
> 1) What is a group - does it exist prior to records being inserted? How
> many gro
> Why? "Print" and "screen" forms have all sorts of practical restrictions
> like this.
>
Legacy, I guess. It is all digital now, but the final result is an
identifier that people can read and immediately understand what it refers to.
Pablo
On Sat, Mar 21, 2020 at 12:08 PM Peter J. Holzer wrote:
>
>
> And I think that "care about gaps -> sequence doesn't work" is a
> knee-jerk reaction. It's similar to "can't parse HTML with regexps".
> True in the general case, and therefore people tend to blurt it out
> every time the topic comes
On Sat, Mar 21, 2020 at 4:37 PM Adrian Klaver
wrote:
>
> > Anyway, It will be awesome if we have a sequence data type in a future
> > version of postgresql. They will solve a lot of problems similar to this
> > one.
>
> Actually there are already two:
>
> https://www.postgresql.org/docs/12/dataty
> So the question may actually be:
>
> How do we improve our locking code, so we don't have to spawn millions
> of sequences?
>
> What is the locking method you are using?
>
I am not using locking with the million-sequences solution. I do not want
something that locks, because of the problems described
> > Now I read this paragraph, I realize I was not clear enough.
> > I am saying we do not want to use locks because of all the problems
> > described.
>
> And what I was asking is: what locking were you doing?
>
> And it might be better to ask the list how to solve those problems, than
> to create
On Sun, Mar 22, 2020 at 5:36 PM Christopher Browne
wrote:
> On Sun, 22 Mar 2020 at 17:54, pabloa98 wrote:
>
>>
>> So the question may actually be:
>>>
>>> How do we improve our locking code, so we don't have to spawn millions
>>> of sequences
On Sun, Mar 22, 2020 at 6:58 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Sun, Mar 22, 2020 at 5:36 PM Christopher Browne
> wrote:
>
>>
>> Then, on any of the tables where you need to assign sequence values,
>> you'd need to run an "after" trigger to do the assignment. The func
On Mon, Mar 23, 2020 at 9:58 AM Daniel Verite
wrote:
> pabloa98 wrote:
>
> > When I have a medium number of sequences I will report how it behaves. It
> > will take some time though.
>
> Be aware that creating the sequences on the fly has the kind of race
> c
On Wed, May 20, 2020 at 12:34 AM Alfonso wrote:
> Hi colleagues.
>
>
> I'm working on a Java application with some colleagues and we are in
> doubt whether to use Oracle or PostgreSQL as the data store. It will be
> mainly an OLTP application.
>
> Besides license terms/costs, which is a clear poi
On Thu, May 21, 2020 at 8:37 AM stan wrote:
> Working on a small project, and have been doing a lot of Perl scripting to
> parse various types of files to populate the database. Now I need to get
> data from a cloud services provider (time-keeping). They have a REST API
> that returns data in a J
On Thu, Nov 26, 2020 at 8:25 PM Laurenz Albe
wrote:
> On Thu, 2020-11-26 at 09:07 -0800, Adrian Klaver wrote:
> > So even if Mats where to break this query:
> >
> > INSERT INTO foreign.labels (address, labels)
> > SELECT address_id, ARRAY_AGG(name) AS labels
> > FROM labels
> > GROUP BY 1
> > LIM
I would like to suggest for postgres_fdw: If the foreign database is
PostgreSQL, the link should just pass through all the CRUD SQL commands to
the other database.
If the other database is of a version so different that it cannot make sense
of the CRUD SQL command, it will generate an error and that'
Hello
I just migrated our databases from PostgreSQL version 9.6 to version 11.1.
We got a segmentation fault while running this query:
SELECT f_2110 as x FROM baseline_denull
ORDER BY eid ASC
limit 500
OFFSET 131000;
It works in version 11.1 if offset + limit < 131000 approx (it is some
number a
I did not modify it.
I guess I should make it bigger than 1765. Is 2400 or 3200 fine?
My apologies if my questions look silly. I do not know about the internal
format of the database.
Pablo
On Mon, Jan 28, 2019 at 11:58 PM Andrew Gierth
wrote:
> >>>>> "pabloa9
y postgres package ready to use like that though.
Pablo
On Tue, Jan 29, 2019 at 12:11 AM pabloa98 wrote:
> I did not modify it.
>
> I guess I should make it bigger than 1765. is 2400 or 3200 fine?
>
> My apologies if my questions look silly. I do not know about the internal
I appreciate your advice. I will check the number of columns in that table.
On Tue, Jan 29, 2019, 1:53 AM Andrew Gierth
wrote:
> >>>>> "pabloa98" == pabloa98 writes:
>
> pabloa98> I found this article:
>
>
I checked the table. It has 1265 columns. Sorry about the typo.
Pablo
On Tue, Jan 29, 2019 at 1:10 AM Andrew Gierth
wrote:
> >>>>> "pabloa98" == pabloa98 writes:
>
> pabloa98> I did not modify it.
>
> Then how did you create a table with more than
I tried. It works.
Thanks for the information.
P
On Mon, Jan 28, 2019, 7:28 PM Tom Lane wrote:
> pabloa98 writes:
> > I just migrated our databases from PostgreSQL version 9.6 to version
> 11.1.
> > We got a segmentation fault while running this query:
>
> &
I have a schema with a generated table whose content comes from batch
processes.
I would like to store manually generated information in that table. Since
those rows are inserted by hand, they will be lost when the table is
reimported.
So I was thinking of creating a partitioned table wi
Thank you! This is exactly what I was looking for.
The range thing is good enough for me.
Pablo
On Wed, Apr 17, 2019 at 3:19 PM Adrian Klaver
wrote:
> On 4/17/19 2:21 PM, pabloa98 wrote:
> > I have a schema with a generated table with information coming from
> > batch proce
You are right. What happens is that the batch importing process drops
the schema and recreates it. I would like a solution that is compatible
with that.
I am sure partitioned tables will have an impact, but on the other hand it
will solve the problem I have now without touching the legacy co
Thank you David, I will use list.
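A list-partitioned layout along these lines (table and column names are illustrative) would keep the hand-entered rows in their own partition, so the batch import can drop and recreate only the automatic one:

```sql
CREATE TABLE measurement (
    source  TEXT NOT NULL,   -- 'batch' or 'manual'
    payload TEXT
) PARTITION BY LIST (source);

-- one partition per origin of the rows
CREATE TABLE measurement_batch PARTITION OF measurement
    FOR VALUES IN ('batch');
CREATE TABLE measurement_manual PARTITION OF measurement
    FOR VALUES IN ('manual');
```

Dropping and recreating measurement_batch then removes only the imported rows; the manual partition survives the reimport.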
On Wed, Apr 17, 2019 at 6:42 PM David Rowley
wrote:
> On Thu, 18 Apr 2019 at 10:19, Adrian Klaver
> wrote:
> > CREATE TABLE automatic.measurement_automatic PARTITION OF
> > automatic.measurement
> > test-# FOR VALUES FROM (1) TO (1)
> > test-# PARTITION
Hello
Sadly, today we hit the 1600-column limit of PostgreSQL 11.
How could we add more columns?
Note: the tables themselves are fine. We truly have 2400 columns now. Each
column represents a value in a matrix.
We have millions of rows, so I would prefer not to transpose each row to
(x, y, column_value) triplets
rogramming (like MongoDB and Cassandra). Be
Pablo
On Wed, Apr 24, 2019 at 1:27 PM Tom Lane wrote:
> pabloa98 writes:
> > Sadly today we hit the 1600 columns limit of Postgresql 11.
> > How could we add more columns?
>
> You can't, at least not without some pretty
Thank you Joe! I will take a look
Pablo
On Wed, Apr 24, 2019 at 1:47 PM Joe Conway wrote:
> On 4/24/19 4:17 PM, pabloa98 wrote:
> > Sadly today we hit the 1600 columns limit of Postgresql 11.
> >
> > How could we add more columns?
> >
> > Note: Tables are O
Arrays could work, but they would make our code less clear. It is better to
read a meaningful column name than a number. We could use constants, but
then we would have to maintain them...
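One way to keep arrays while preserving meaningful names is a view over the array column; a sketch with made-up table, column, and field names:

```sql
-- Store each matrix row as a single array column...
CREATE TABLE matrix_rows (
    eid  BIGINT PRIMARY KEY,
    vals DOUBLE PRECISION[] NOT NULL
);

-- ...and expose selected positions under readable names,
-- so queries keep reading like named columns.
CREATE VIEW matrix_named AS
SELECT eid,
       vals[1] AS f_0001,   -- illustrative field names
       vals[2] AS f_0002
FROM matrix_rows;
```

The name-to-position mapping lives in one place (the view definition) instead of being scattered through application code as constants.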
Pablo
On Wed, Apr 24, 2019 at 1:24 PM Alvaro Herrera
wrote:
> On 2019-Apr-24, pabloa98 wrote:
>
> >
Learning and similar domains.
Pablo
On Wed, Apr 24, 2019 at 1:23 PM Ron wrote:
> On 4/24/19 3:17 PM, pabloa98 wrote:
> > Hello
> >
> > Sadly today we hit the 1600 columns limit of Postgresql 11.
> >
> > How could we add more columns?
> >
> > Note: Tables
On Wed, Apr 24, 2019 at 3:28 PM Gavin Flower
wrote:
The convention here is to bottom post, or to intersperse comments, like
> in all the replies to you.
>
> So it would be appreciated if you did that, rather than top post as you
> have been doing.
>
>
Thanks for the advice. I will follow the conv
If you can use a foreign data wrapper to connect
(https://github.com/tds-fdw/tds_fdw), then you can skip migrating back and
forth through CSV.
You could even do partial migrations if needed (it could impact some
queries' speed though).
Pablo
On Fri, May 3, 2019 at 6:37 AM Adrian Klaver
wrote:
> On 5