how to get list of sequences owned by a user/role

2018-03-09 Thread Charlin Barak
Hi,
I can find the sequences in a given schema from information_schema.sequences
and pg_class, but how do I find the sequences owned by a particular user/role?
Which pg catalog table should I be looking at?

Thanks.

Charlin
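
A minimal sketch of one way to do this, assuming the role of interest is
named charlin (the role name is illustrative): a sequence is a pg_class row
with relkind = 'S', and its relowner column joins to pg_roles.

    SELECT n.nspname AS schema_name, c.relname AS sequence_name
      FROM pg_class c
      JOIN pg_roles r ON r.oid = c.relowner
      JOIN pg_namespace n ON n.oid = c.relnamespace
     WHERE c.relkind = 'S'
       AND r.rolname = 'charlin';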


ora2pg and invalid command \N

2018-03-16 Thread Charlin Barak
Hi,
I'm using ora2pg to migrate our Oracle database to Postgres. I was able to
generate the data file using TYPE=COPY, but when I attempted to load the
file via psql, I got lots of "invalid command \N" errors. The resolutions I
found on the internet were not clear and were not ora2pg-related. How do I
resolve this issue? Can someone provide some guidance?

Thanks.


Re: ora2pg and invalid command \N

2018-03-16 Thread Charlin Barak
Thanks for your response.

The NULL values from Oracle were indeed replaced by \N in the data output.
How do I go about loading this file? I hope I do not have to temporarily
replace \N with a placeholder string in the flat file and then update it in
Postgres afterwards.

Thanks.

102 48299   50  C   3  \N
103 48299   \N  G   3  45
...
...

On Fri, Mar 16, 2018 at 2:06 PM, Pavel Stehule wrote:

>
>
> 2018-03-16 18:12 GMT+01:00 Charlin Barak:
>
>> Hi,
>> I'm using ora2pg to migrate our Oracle database to Postgres. I was able
>> to generate the data file using TYPE=COPY, but when I attempted to load
>> the file via psql, I got lots of "invalid command \N" errors. The
>> resolutions I found on the internet were not clear and were not
>> ora2pg-related. How do I resolve this issue? Can someone provide some
>> guidance?
>>
>
> \N is the symbol for NULL, but backslash commands are also psql commands.
> This behaviour means that psql has lost synchronization with the COPY data
> and is trying to evaluate data lines as commands. psql's import is tolerant
> by default, so the real source of the problem is usually lost. Try importing
> the data with the option ON_ERROR_STOP:
>
>  https://stackoverflow.com/questions/4480381/postgres-sql-fail-on-script-error
>
> Regards
>
> Pavel
>
>>
>> Thanks.
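
To make Pavel's suggestion concrete (the file and database names below are
placeholders): \N is COPY's NULL marker, and the "invalid command \N"
messages only appear after psql has already fallen out of the COPY stream
because of an earlier error. Re-running the load with ON_ERROR_STOP set
stops at that first real error instead of flooding the output:

    psql -v ON_ERROR_STOP=1 -d target_db -f oracle_export.sql

psql then exits at the first error and reports the file and line number,
which is usually enough to find the malformed row or the statement that
broke the COPY.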


Using the public schema

2018-04-23 Thread Charlin Barak
Hi,
We will be developing three new applications in PostgreSQL, each having its
own database instance running on a different host. We will have only one
schema per Postgres instance. The data is read-write only by one
application/schema in the DB instance, and the data is published to other
applications via API calls.

In such a standalone database configuration, are there any security
implications or any downsides to creating the application in the public
schema?

Thanks.


Re: Using the public schema

2018-04-25 Thread Charlin Barak
Thanks Laurenz. That works.



On Tue, Apr 24, 2018 at 4:00 PM, Laurenz Albe wrote:

> Charlin Barak wrote:
>
> > We will be developing three new applications in PostgreSQL, each having
> > its own database instance running on a different host. We will have only
> > one schema per Postgres instance. The data is read-write only by one
> > application/schema in the DB instance, and the data is published to other
> > applications via API calls.
> >
> > In such a standalone database configuration, are there any security
> > implications or any downsides to creating the application in the public
> > schema?
>
> No, that's fine.
>
> Just REVOKE CREATE ON SCHEMA public FROM PUBLIC.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
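
A minimal sketch of that setup, run in each of the three databases (app_rw
is a placeholder for the application's role; by default in the PostgreSQL
versions current at the time of this thread, every role can create objects
in the public schema):

    REVOKE CREATE ON SCHEMA public FROM PUBLIC;
    GRANT USAGE, CREATE ON SCHEMA public TO app_rw;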


Question on Buckets and Batches in explain plan

2018-06-14 Thread Charlin Barak
Hi,
I ran a query on two supposedly similarly configured 9.6.8 databases but
got two different timings. One ran three times faster than the other.

The explain plans on both systems look the same except for the Buckets and
Batches. Can someone explain what those mean and which configuration
settings I should compare between the two systems? The faster plan seems to
be processing the data in more batches (2048) and thus needing fewer
buckets (65536)?

Faster plan:
->  Hash  (cost=1805846.88..1805846.88 rows=76895088 width=49) (actual time=143919.988..143919.988 rows=83895440 loops=1)
      Buckets: 65536  Batches: 2048  Memory Usage: 3513kB
      Buffers: shared hit=2 read=1036894, temp written=652371


Slower plan:
->  Hash  (cost=1805862.40..1805862.40 rows=76895440 width=49) (actual time=530978.279..530978.279 rows=83895440 loops=1)
      Buckets: 2097152  Batches: 64  Memory Usage: 112069kB
      Buffers: shared hit=1 read=1036907, temp written=643448


Thanks.
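
For anyone comparing plans like these: the bucket and batch counts of a Hash
node are chosen from the planner's row estimate and from work_mem, and
Batches > 1 means the hash table did not fit in work_mem and was partly
spilled to temp files, so work_mem is the first setting worth comparing
between the two systems. A sketch (the value shown is purely illustrative):

    SHOW work_mem;           -- compare this on both systems
    SET work_mem = '256MB';  -- illustrative session-level value for retesting
    -- then re-run the query under EXPLAIN (ANALYZE, BUFFERS) and compare
    -- the Buckets / Batches / Memory Usage lines again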


Oracle number to PostgreSQL

2019-05-01 Thread Charlin Barak
Hi,
When migrating from Oracle number(10,3) to PostgreSQL, is numeric(10,3)
ideal or should I consider some other data types?

Thanks.

Charlin


Re: Oracle number to PostgreSQL

2019-05-01 Thread Charlin Barak
Thanks Stephen. Something for me to consider.

Charlin

On Wed, May 1, 2019 at 9:25 AM Stephen Frost wrote:

> Greetings,
>
> * Charlin Barak (charlinba...@gmail.com) wrote:
> > When migrating from Oracle number(10,3) to PostgreSQL, is numeric(10,3)
> > ideal or should I consider some other data types?
>
> This really depends on what data is actually in that field and if you
> need it to be exact.
>
> If the field actually contains integers and always will, then using an
> integer type field is *much* more efficient, either 'integer' if the
> values will always be less than 2B, or 'bigint' if it'll be larger.
>
> If the field actually contains floating point and can be inexact, then
> using 'real' or 'float8' will be more efficient than numeric.
>
> If the field needs to be exact (eg: monetary fields), then using
> numeric(10,3) is correct.
>
> I strongly recommend considering the data that goes into the field when
> making this decision; there really is no 'one size fits all' when going
> from number to numeric.
>
> Thanks,
>
> Stephen
>
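
A small sketch of the alternatives Stephen describes, written as column
definitions (the table and column names are illustrative):

    CREATE TABLE number_mapping_example (
        qty_small  integer,           -- exact whole numbers up to about 2.1 billion
        qty_large  bigint,            -- exact whole numbers beyond that
        reading    double precision,  -- inexact floating point, cheaper than numeric
        amount     numeric(10,3)      -- exact, the direct equivalent of Oracle number(10,3)
    );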