Hi.
Just starting to look at how to use jsonb columns and I have a question. I have
found out that I can use the following to search for a value inside
the jsonb column:
select * from orders where info ->> 'customer' = 'John Doe' (where info is
the jsonb column)
But what if the jsonb colum
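For context, a sketch of how this kind of query can be made indexable (the table is from the example above; the indexes are assumptions about a typical setup):

```sql
-- info ->> 'customer' extracts the value as text, so it compares with =
SELECT * FROM orders WHERE info ->> 'customer' = 'John Doe';

-- an expression index can support exactly this search
CREATE INDEX orders_customer_idx ON orders ((info ->> 'customer'));

-- the containment operator @> works with a GIN index on the whole column
CREATE INDEX orders_info_gin ON orders USING gin (info);
SELECT * FROM orders WHERE info @> '{"customer": "John Doe"}';
```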
Hi Bjorn,
28.11.2017 11:18, Bjorn T Johansen writes:
Hi.
Just starting to look at how to use jsonb columns and I have a question. I have
found out that I can use the following to search for a value inside
the jsonb column:
select * from orders where info ->> 'customer' = 'John Doe' (where i
[PostgreSQL 9.5.10 on x86_64-pc-linux-gnu, compiled by gcc (Debian
4.9.2-10) 4.9.2, 64-bit]
I noticed that an update was taking a long time and found this:
UPDATE public.facttable_imf_ifs p
SET [...lots of columns...]
FROM cleansing.cls_imf_ifs_facttable_imf_ifs c, cleansing.cleansing_change_type
On Tue, 28 Nov 2017 11:28:55 +0300
"Ivan E. Panchenko" wrote:
> Hi Bjorn,
>
> 28.11.2017 11:18, Bjorn T Johansen writes:
> > Hi.
> >
> > Just starting to look at how to use jsonb columns and I have a question. I
> > have found out that I can use the following to search for a value
> > inside the
28.11.2017 13:25, Bjorn T Johansen writes:
On Tue, 28 Nov 2017 11:28:55 +0300
"Ivan E. Panchenko" wrote:
Hi Bjorn,
28.11.2017 11:18, Bjorn T Johansen writes:
Hi.
Just starting to look at how to use jsonb columns and I have a question. I have
found out that I can use the following to search
On Tue, 28 Nov 2017 13:52:59 +0300
"Ivan E. Panchenko" wrote:
> 28.11.2017 13:25, Bjorn T Johansen writes:
> > On Tue, 28 Nov 2017 11:28:55 +0300
> > "Ivan E. Panchenko" wrote:
> >
> >> Hi Bjorn,
> >>
> >> 28.11.2017 11:18, Bjorn T Johansen writes:
> >>> Hi.
> >>>
> >>> Just starting to look a
Just bumping this because I posted it right before Thanksgiving and it was
very easy to overlook.
Sorry if this is bad etiquette for the list... Just let me know if it is
and I won't do it in the future.
Hello All,
I have run CREATE SCHEMA xyz accidentally, and it replaced my existing
schema (xyz) with its tables, and I have lost all my data.
Is there any way to roll back or get the schema back with the old tables
and data.
Any help would be appreciated.
--
Regards :
Venktesh Guttedar.
Hello,
We have upgraded our database from version 9.5 to 9.6.
After the upgrade we ran the vacuumdb command and got the following error:
vacuumdb: vacuuming of database "advdfat" failed: ERROR: duplicate key
value violates unique constraint "pg_statistic_relid_att_inh_index"
DETAIL: Key (stareli
On Tue, Nov 28, 2017 at 2:32 AM, Dilip Kumar wrote:
> I think BitmapHeapScan should check whether the DSA is valid or not; if the
> DSA is not valid then it should assume it's a non-parallel plan.
>
> Attached patch should fix the issue.
So, create the pstate and then pretend we didn't? Why not just avoid
creati
Hi
I created some materialized views and set a group as owner. My problem is that
I can't refresh the views. I get "permission denied" even when using the
postgres super user. If I change the owner to an ordinary user I still can't
refresh the view as the owner or postgres. Only if I change own
Swapnil Vaze writes:
> We have upgraded our database from version 9.5 to 9.6.
> After the upgrade we ran the vacuumdb command and got the following error:
> vacuumdb: vacuuming of database "advdfat" failed: ERROR: duplicate key
> value violates unique constraint "pg_statistic_relid_att_inh_index"
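A commonly suggested recovery path for a corrupted pg_statistic after an upgrade (a sketch, not necessarily what this reply goes on to recommend) is:

```sql
-- rebuild the violated index first
REINDEX TABLE pg_catalog.pg_statistic;

-- if duplicate rows remain, the stored statistics can be discarded and
-- regenerated, since pg_statistic only holds ANALYZE results
DELETE FROM pg_catalog.pg_statistic;
ANALYZE;
```

Both statements require superuser privileges.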
Henrik Uggla writes:
> I created some materialized views and set a group as owner. My problem is
> that I can't refresh the views. I get "permission denied" even when using the
> postgres super user. If I change the owner to an ordinary user I still can't
> refresh the view as the owner or post
"Peter J. Holzer" writes:
> I noticed that an update was taking a long time and found this:
> [ crappy plan for join on IS NOT DISTINCT ]
Yeah, there's no optimization smarts at all for IS [NOT] DISTINCT.
It can't be converted into a merge qual, nor a hash qual, nor an
indexscan qual.
In princip
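For reference, IS NOT DISTINCT FROM is null-safe equality; the sketch below (hypothetical tables t1 and t2) shows the expansion, and why plain = becomes an optimizable substitute once NULLs are ruled out:

```sql
-- these two joins are equivalent: IS NOT DISTINCT FROM treats NULL vs NULL as a match
SELECT * FROM t1 JOIN t2 ON t1.x IS NOT DISTINCT FROM t2.x;
SELECT * FROM t1 JOIN t2 ON t1.x = t2.x OR (t1.x IS NULL AND t2.x IS NULL);

-- when x is declared NOT NULL on both sides, plain = is equivalent
-- and can drive merge joins, hash joins, and index scans
SELECT * FROM t1 JOIN t2 ON t1.x = t2.x;
```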
On Tue, Nov 28, 2017 at 7:13 PM, Robert Haas wrote:
> On Tue, Nov 28, 2017 at 2:32 AM, Dilip Kumar
> wrote:
> > I think BitmapHeapScan should check whether the DSA is valid or not; if the
> > DSA is not valid then it should assume it's a non-parallel plan.
> >
> > Attached patch should fix the issue.
>
> So, cre
The underlying tables are foreign tables. The user has been mapped to a foreign
user with select permission. I have no problem selecting from the foreign
tables or the materialized views.
This is the error in the log (not very useful):
2017-11-28 13:58:03.207 UTC [347] STATEMENT: REFRESH MATERI
Henrik Uggla writes:
> The underlying tables are foreign tables. The user has been mapped to a
> foreign user with select permission. I have no problem selecting from the
> foreign tables or the materialized views.
[ shrug... ] WFM; if I can select from the foreign table then I can make
a mater
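One thing worth checking in a setup like this (a hedged sketch; role and server names are hypothetical): REFRESH MATERIALIZED VIEW reads the underlying tables with the privileges of the view's owner, so for foreign tables that owner needs its own user mapping on the foreign server, not just the role issuing the command:

```sql
GRANT USAGE ON FOREIGN SERVER remote_srv TO matview_owner;
CREATE USER MAPPING FOR matview_owner SERVER remote_srv
    OPTIONS (user 'remote_user', password 'remote_pass');

REFRESH MATERIALIZED VIEW my_matview;
```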
On Tue, Nov 28, 2017 at 7:23 AM, VENKTESH GUTTEDAR <
venkteshgutte...@gmail.com> wrote:
> Hello All,
>
>
> I have run CREATE SCHEMA xyz accidentally, and it replaced my existing
> schema (xyz) with tables, and i have lost all my data.
>
> Is there any way to rollback or get the schema back
On Tue, Nov 28, 2017 at 5:23 AM, VENKTESH GUTTEDAR <
venkteshgutte...@gmail.com> wrote:
> I have run CREATE SCHEMA xyz accidentally, and it replaced my existing
> schema (xyz) with tables, and i have lost all my data.
>
As far as I know "CREATE SCHEMA" by itself cannot "drop" data nor would
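For reference, a bare CREATE SCHEMA refuses to touch an existing schema, which is the point being made here:

```sql
CREATE SCHEMA xyz;                -- ERROR: schema "xyz" already exists
CREATE SCHEMA IF NOT EXISTS xyz;  -- emits a notice, leaves xyz and its objects intact
```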
On Tue, Nov 28, 2017 at 9:45 AM, Dilip Kumar wrote:
>> I haven't checked whether this fixes the bug, but if it does, we can
>> avoid introducing an extra branch in BitmapHeapNext.
>
> With my test it's fixing the problem.
I tested it some more and found that, for me, it PARTIALLY fixes the
proble
I'm writing a migration utility to move data from non-rdbms data
source to a postgres db. Currently I'm generating SQL INSERT
statements involving 6 related tables for each 'thing'. With 100k or
more 'things' to migrate I'm generating a lot of statements and when I
try to import using psql postgres
> On Nov 28, 2017, at 10:17 AM, Ted Toth wrote:
>
> I'm writing a migration utility to move data from non-rdbms data
> source to a postgres db. Currently I'm generating SQL INSERT
> statements involving 6 related tables for each 'thing'. With 100k or
> more 'things' to migrate I'm generating a l
Hi,
On 11/28/2017 06:17 PM, Ted Toth wrote:
> I'm writing a migration utility to move data from non-rdbms data
> source to a postgres db. Currently I'm generating SQL INSERT
> statements involving 6 related tables for each 'thing'. With 100k or
> more 'things' to migrate I'm generating a lot of st
On Tue, Nov 28, 2017 at 11:19 AM, Rob Sargent wrote:
>
>> On Nov 28, 2017, at 10:17 AM, Ted Toth wrote:
>>
>> I'm writing a migration utility to move data from non-rdbms data
>> source to a postgres db. Currently I'm generating SQL INSERT
>> statements involving 6 related tables for each 'thing'.
On Tue, Nov 28, 2017 at 11:22 AM, Tomas Vondra
wrote:
> Hi,
>
> On 11/28/2017 06:17 PM, Ted Toth wrote:
>> I'm writing a migration utility to move data from non-rdbms data
>> source to a postgres db. Currently I'm generating SQL INSERT
>> statements involving 6 related tables for each 'thing'. Wit
On 11/28/2017 10:50 AM, Ted Toth wrote:
On Tue, Nov 28, 2017 at 11:19 AM, Rob Sargent wrote:
On Nov 28, 2017, at 10:17 AM, Ted Toth wrote:
I'm writing a migration utility to move data from non-rdbms data
source to a postgres db. Currently I'm generating SQL INSERT
statements involving 6 rela
On Tue, 28 Nov 2017 11:17:07 -0600
Ted Toth wrote:
> I'm writing a migration utility to move data from non-rdbms data
> source to a postgres db. Currently I'm generating SQL INSERT
> statements involving 6 related tables for each 'thing'. With 100k or
> more 'things' to migrate I'm generating a l
On 11/28/2017 06:54 PM, Ted Toth wrote:
> On Tue, Nov 28, 2017 at 11:22 AM, Tomas Vondra
> wrote:
>> Hi,
>>
>> On 11/28/2017 06:17 PM, Ted Toth wrote:
>>> I'm writing a migration utility to move data from non-rdbms data
>>> source to a postgres db. Currently I'm generating SQL INSERT
>>> stateme
On Tue, Nov 28, 2017 at 12:04 PM, Steven Lembark wrote:
> On Tue, 28 Nov 2017 11:17:07 -0600
> Ted Toth wrote:
>
>> I'm writing a migration utility to move data from non-rdbms data
>> source to a postgres db. Currently I'm generating SQL INSERT
>> statements involving 6 related tables for each 't
On Tue, Nov 28, 2017 at 12:01 PM, Tomas Vondra
wrote:
>
>
> On 11/28/2017 06:54 PM, Ted Toth wrote:
>> On Tue, Nov 28, 2017 at 11:22 AM, Tomas Vondra
>> wrote:
>>> Hi,
>>>
>>> On 11/28/2017 06:17 PM, Ted Toth wrote:
I'm writing a migration utility to move data from non-rdbms data
source
On 11/28/2017 07:26 PM, Ted Toth wrote:
> On Tue, Nov 28, 2017 at 12:01 PM, Tomas Vondra
> wrote:
>>
>> ...
>>
>> That is, most of the memory is allocated for SPI (2.4GB) and PL/pgSQL
>> procedure (500MB). How do you do the load? What libraries/drivers?
>>
>
> I'm doing the load with 'psql -f'.
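A common alternative to feeding hundreds of thousands of INSERT statements through psql -f is COPY in a single transaction (file name and columns below are hypothetical):

```sql
BEGIN;
-- server-side COPY reads the file on the database host;
-- from psql, \copy takes the same options but reads through the client connection
COPY things (id, name, payload) FROM '/tmp/things.csv' WITH (FORMAT csv, HEADER);
COMMIT;
```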
> On 28 Nov 2017, at 16:43, David G. Johnston
> wrote:
>
> On Tue, Nov 28, 2017 at 5:23 AM, VENKTESH GUTTEDAR
> wrote:
> I have run CREATE SCHEMA xyz accidentally, and it replaced my existing
> schema (xyz) with tables, and i have lost all my data.
>
> As far as I know "CREATE SCHEMA"
> Just what I was thinking. Are you looking in the correct database?
All: He has already replied to me earlier that he had indeed dropped the
schema before creating it.
Peter J. Holzer wrote:
> I noticed that an update was taking a long time and found this:
>
> UPDATE public.facttable_imf_ifs p
> SET [...lots of columns...]
> FROM cleansing.cls_imf_ifs_facttable_imf_ifs c,
> cleansing.cleansing_change_type ct
> WHERE
> (p.macrobondtimeseries is not distinct
I am getting "Login failed" errors when attempting to log into phpPgAdmin as
user 'postgres'. In what file path would I find the relevant log files, to
try and identify why the login is being rejected?
OS: CentOS 7
PostgreSQL: 9.6.6
pg_hba.conf settings for access:
# TYPE  DATABASE  USER
> On Nov 28, 2017, at 12:06 PM, Robert Gordon wrote:
>
> I am getting “Login failed” errors when attempting to log into phpPgAdmin as
> user ‘postgres’. In what file path would I find the relevant log files, to
> try and identify why the login is being rejected?
Probably /var/lib/pgsql/data
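When the data directory isn't obvious, the server itself can report where it logs (these settings exist in 9.6; the values shown in comments are assumptions):

```sql
SHOW data_directory;     -- e.g. /var/lib/pgsql/data with CentOS packages
SHOW logging_collector;  -- 'on' means the server writes its own log files
SHOW log_directory;      -- a relative path here is under data_directory
```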
On 2017-11-28 09:35:33 -0500, Tom Lane wrote:
> "Peter J. Holzer" writes:
> > I noticed that an update was taking a long time and found this:
> > [ crappy plan for join on IS NOT DISTINCT ]
>
> Yeah, there's no optimization smarts at all for IS [NOT] DISTINCT.
> It can't be converted into a merge
On 2017-11-28 20:48:24 +0100, Laurenz Albe wrote:
> Peter J. Holzer wrote:
> > I noticed that an update was taking a long time and found this:
> >
> > UPDATE public.facttable_imf_ifs p
> > SET [...lots of columns...]
> > FROM cleansing.cls_imf_ifs_facttable_imf_ifs c,
> > cleansing.cleansing_chan
I would welcome your comments and suggestions for connecting a user (not
a superuser) to a foreign server.
I have a database, named geo, in which I have geospatial and
geopolitical data. I want to be able to select data from geo from other
databases.
The database that I want to connect up
Greetings Susan,
* Susan Hurst (susan.hu...@brookhurstdata.com) wrote:
> I would welcome your comments and suggestions for connecting a user
> (not a superuser) to a foreign server.
>
> I have a database, named geo, in which I have geospatial and
> geopolitical data. I want to be able to select
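The usual recipe for letting a non-superuser query a foreign server looks roughly like this (server, role, and schema names are hypothetical):

```sql
-- let the role use the server and give it remote credentials
GRANT USAGE ON FOREIGN SERVER geo_srv TO app_user;
CREATE USER MAPPING FOR app_user SERVER geo_srv
    OPTIONS (user 'geo_reader', password 'geo_pass');

-- then grant access to the foreign tables themselves
GRANT SELECT ON ALL TABLES IN SCHEMA geo_remote TO app_user;
```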
On Tue, Nov 28, 2017 at 12:38 PM, Tomas Vondra wrote:
> So what does the script actually do? Because psql certainly is not
> running pl/pgsql procedures on it's own. We need to understand why
> you're getting OOM in the first place - just inserts alone should not
> cause failures like that. Pleas
On 28 Nov 2017 5:18 pm, "Tom Lane" wrote:
Henrik Uggla writes:
> The underlying tables are foreign tables. The user has been mapped to a
foreign user with select permission. I have no problem selecting from the
foreign tables or the materialized views.
[ shrug... ] WFM; if I can select from the
Brian Crowell writes:
> On Tue, Nov 28, 2017 at 12:38 PM, Tomas Vondra wrote:
>> So what does the script actually do? Because psql certainly is not
>> running pl/pgsql procedures on it's own. We need to understand why
>> you're getting OOM in the first place - just inserts alone should not
>> ca
hi all,
we're in the process of optimizing some queries and we've noted a case
where the planner prefers a sequential scan instead of using an index,
while the index scan is actually much faster. to give you some
context: we have two main tables, keywords and results. keywords has
approximately 70
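A standard way to investigate such a case (query and names hypothetical) is to compare the two plans directly; enable_seqscan is a per-session diagnostic switch, not something to leave off in production:

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT r.* FROM results r
JOIN keywords k ON k.id = r.keyword_id
WHERE k.keyword = 'example';

SET enable_seqscan = off;   -- session-local: heavily penalizes sequential scans
EXPLAIN (ANALYZE, BUFFERS)
SELECT r.* FROM results r
JOIN keywords k ON k.id = r.keyword_id
WHERE k.keyword = 'example';
RESET enable_seqscan;
```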