…to migrate into Azure PostgreSQL (Linux OS, located in Germany) and am
facing an issue.
Please look into this and help me resolve the error.
Let me know if you have any questions.
Looking forward to a resolution.
Thanks & regards,
Sai Teja
On Fri, 4 Aug, 2023, 8:03 am Ron, wrote:
> On 8/3/23 21:22, Sai Teja wrote:
>
> Hi team,
>
> I am trying to migrate data from DB2 to PostgreSQL, in which one of the
> tables has XML data.
> For one of the files (13 MB) I'm facing an error…
…solve this issue.
Thanks & Regards,
Sai Teja
…that particular row.
I would appreciate it if anyone could share insights.
Thanks,
Sai
On Mon, 14 Aug, 2023, 5:21 pm Sai Teja, wrote:
> Hi Andreas,
>
> Thank you for the reply!
>
> Currently it is hex by default. If I change it to escape, is there any
> possibility to fetch the…
…wrote:
> On 8/14/23 09:29, Sai Teja wrote:
> > Could anyone please suggest any ideas to resolve this issue.
> >
> > I have increased the below parameters, but I'm still getting the same error.
> >
> > work_mem, shared_buffers
> >
> > Out of 70k rows i…
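For anyone trying the same settings: work_mem can be raised per session,
but shared_buffers only changes with a server restart. A rough sketch (the
values are illustrative, not recommendations):

SET work_mem = '256MB';                   -- per-session; affects sorts/hashes in this connection
ALTER SYSTEM SET shared_buffers = '2GB';  -- written to postgresql.auto.conf
-- shared_buffers only takes effect after a server restart; then verify:
SHOW shared_buffers;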
By default the bytea_output is in hex format.
On Tue, 15 Aug, 2023, 12:44 am Ron, wrote:
> Did you *try* changing bytea_output to hex?
>
> On 8/14/23 12:31, Sai Teja wrote:
>
> I am just running a select query to fetch the result.
> Query: select id, content_data, name from ta…
Thanks & Regards,
Sai
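For reference, the encoding can be switched per session to compare the two
output forms; a sketch (my_table and its columns are illustrative names,
not the actual table):

SHOW bytea_output;            -- 'hex' is the default since PostgreSQL 9.0
SET bytea_output = 'escape';  -- per-session change only
SELECT id, content_data FROM my_table LIMIT 1;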
On Tue, 15 Aug, 2023, 8:10 am Sai Teja, wrote:
> By default the bytea_output is in hex format.
>
> On Tue, 15 Aug, 2023, 12:44 am Ron, wrote:
>
>> Did you *try* changing bytea_output to hex?
>>
>> On 8/14/23 12:31, Sai Teja wrote:
…and retrieve it.
Currently it is not happening due to the field-size limit enforced by
PostgreSQL (a single field is capped at 1 GB).
I would appreciate your insights and suggestions to help me resolve this
issue.
Thanks & Regards,
Sai Teja
On Tue, 15 Aug, 2023, 8:53 am Tom Lane, wrote:
> Sai Teja writes:
>
Hi Team,
We have bytea data stored in pg_largeobject (the large objects table).
Here, the data is 675 MB. We are using the large-object client interface
API provided by PostgreSQL to retrieve the data (lo_read, lo_open, etc.).
When I try to fetch the data locally, it takes 30-35 seconds to retrieve
the content…
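For context, reading through the server-side large-object functions looks
roughly like the sketch below; the OID (16405) and descriptor number are
purely illustrative:

BEGIN;
SELECT lo_open(16405, 262144);  -- 262144 = INV_READ; returns a descriptor, e.g. 0
SELECT loread(0, 1048576);      -- read the first 1 MB through descriptor 0
SELECT lo_close(0);
COMMIT;
-- descriptors are only valid inside the enclosing transaction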
…is hosted on a Linux machine whose lc_ctype is en_US.utf8.
Please suggest any ideas to resolve this issue.
It would be very helpful and appreciated.
Thanks,
Sai Teja
…if any alternatives are there to resolve this issue.
Thanks,
Sai Teja
On Wed, 6 Sep, 2023, 7:23 pm Tom Lane, wrote:
> Sai Teja writes:
> > I am using UPPER on the document name for converting the text from
> > lower case into upper case.
> > But here, for the below example…
>
…data explicitly through the UPPER keyword, like select UPPER('Mass'),
then I'm getting the expected output, MASS.
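If it helps to compare, UPPER's result depends on the collation in effect;
a quick check (the explicit collation name is an assumption based on the
en_US.utf8 locale mentioned above):

SHOW lc_ctype;                              -- server-side character classification
SELECT UPPER('Mass');                       -- uses the default collation
SELECT UPPER('Mass' COLLATE "en_US.utf8");  -- force an explicit collation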
Please suggest ideas to resolve this issue.
Thanks,
Sai Teja
On Wed, 6 Sep, 2023, 8:59 pm Francisco Olarte, wrote:
> On Wed, 6 Sept 2023 at 16…
I added one generated-always column with UPPER, like below:

ALTER TABLE table_name
  ADD COLUMN data varchar(8000)
  GENERATED ALWAYS AS (UPPER(content)) STORED;

The data column has a GENERATED ALWAYS constraint here.
This column has many sentences for each row, in which some of the characters…
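A quick way to sanity-check the generated values (names as in the ALTER
above):

SELECT content, data
FROM table_name
LIMIT 5;   -- data should contain UPPER(content) for every row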
Hi All,
We have one table which stores XML data: about 30k records, with a huge
amount of data per row.
We are trying to create an index on this column, but we're getting a
"Huge input lookup" error during index creation.
Please check the below command, which is used to create the index…
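For reference, an expression index over an XPath extraction generally has
the shape below; the table name, column name, and path are illustrative
assumptions, and content_data is assumed to be of type xml:

CREATE INDEX idx_xml_title ON xml_table
  (((xpath('/doc/title/text()', content_data))[1]::text));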
Thank you so much for all your responses.
I just tried with hash, GIN, etc.
But it didn't work. And I think it is because of the "xpath" expression
which I used in the CREATE INDEX command.
But is there any alternative way to change this xpath? I need to parse the
XML, as there is no other option…
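One alternative that is sometimes workable is to materialize the XPath
result into a stored generated column and index that instead of the bare
expression; a sketch with illustrative names (note it still parses the
same XML, so it may hit the same libxml2 limit):

ALTER TABLE xml_table
  ADD COLUMN title text
  GENERATED ALWAYS AS ((xpath('/doc/title/text()', content_data))[1]::text) STORED;

CREATE INDEX idx_xml_title_gen ON xml_table (title);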