Hi,
I'm experimenting with Java client libraries (the usual JDBC and some other
async projects, e.g. [1]). So far, I'm not finding ways to select/read
composite types without ugly string parsing. The simple cases are okay, but
if I have a column that is an array of composites, the client library mig
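One way to avoid parsing the composite text format on the client is to have the
server render the composite (or array-of-composite) column as JSON, which JDBC can
fetch as an ordinary string and hand to any JSON parser. A minimal sketch, assuming
a hypothetical table orders with a composite-typed array column items:

-- hypothetical schema, for illustration only
CREATE TYPE line_item AS (product_id integer, qty integer);
CREATE TABLE orders (id integer PRIMARY KEY, items line_item[]);

-- instead of reading the array/composite text form like {"(1,2)",...},
-- ask the server for JSON and parse that on the client
SELECT id, to_jsonb(items) AS items_json FROM orders;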
Hello,
I have tried doing something like:
SELECT concat_ws(' ', table.*) FROM table;
and if I do it that way, it is essentially the same as
SELECT concat(table.*) FROM table;
and I get the items in braces like (1,something).
Why do I get it in braces?
Is there a way without specifying specific fie
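The parenthesized output like (1,something) is PostgreSQL's text representation of a
whole-row (composite) value: inside a function call, table.* is passed as a single
composite argument rather than being expanded into one argument per column. A small
sketch of the difference, using a hypothetical table mytable with columns id and name:

SELECT concat_ws(' ', t.*) FROM mytable t;           -- one composite argument, prints as (1,something)
SELECT concat_ws(' ', t.id, t.name) FROM mytable t;  -- separate arguments, prints as: 1 something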
Hi Adrian,
I'll take a look at pg_bulkload, but I populate the database via a
Java application with JDBC.
I'll try the query you kindly sent to me!
Thanks!
Leandro Guimarães
On Fri, Jun 14, 2019 at 6:59 PM Adrian Klaver
wrote:
> On 6/14/19 2:04 PM, Leandro Guimarães wrote:
> > Hi,
> >
Hi Tim, thanks for your answer!
The columns were just examples, but let me explain the database structure,
the fields in *bold are the keys*:
*customer_id integer*
*date_time timestamp*
*indicator_id integer*
*element_id integer*
indicator_value double precision
The table is partitioned per day a
Leandro Guimarães writes:
> Hi,
>
>I have a scenario with a large table and I'm trying to insert it via a
> COPY command with a csv file.
>
>Everything works, but sometimes my source .csv file has duplicated data
> in the previously populated table. If I add a check constraint and try t
On 6/14/19 2:04 PM, Leandro Guimarães wrote:
Hi,
I have a scenario with a large table and I'm trying to insert it via
a COPY command with a csv file.
Everything works, but sometimes my source .csv file has duplicated
data in the previously populated table. If I add a check constraint
On 6/14/19 2:04 PM, Leandro Guimarães wrote:
Hi,
I have a scenario with a large table and I'm trying to insert it via
a COPY command with a csv file.
Everything works, but sometimes my source .csv file has duplicated
data in the previously populated table. If I add a check constraint
Hi,
I have a scenario with a large table and I'm trying to insert it via a
COPY command with a csv file.
Everything works, but sometimes my source .csv file has duplicated data
in the previously populated table. If I add a check constraint and try to
run the COPY command I have an error tha
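A common way to handle this (sketched here under assumptions, not taken verbatim from
the replies) is to COPY into a staging table and then insert only the rows that are
not already present, which requires a unique index on the key columns rather than a
check constraint. Assuming a hypothetical target table measurements keyed on
(customer_id, date_time, indicator_id, element_id):

-- load the CSV into a throwaway staging table (table names and path are placeholders)
CREATE TEMP TABLE staging (LIKE measurements INCLUDING DEFAULTS);
COPY staging FROM '/path/to/file.csv' WITH (FORMAT csv);

-- move rows over, silently skipping keys that already exist
INSERT INTO measurements
SELECT * FROM staging
ON CONFLICT (customer_id, date_time, indicator_id, element_id) DO NOTHING;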
On 6/14/19 2:13 PM, Ron wrote:
On 6/14/19 2:55 PM, Rob Sargent wrote:
Is reindex table redundant after vacuum(analyse,verbose)?
Instead of "redundant", I'd call it "backwards", since doing a
vacuum(analyse,verbose) on a freshly reindexed table seems more fruitful.
Does reindex remove (dea
On 6/14/19 2:55 PM, Rob Sargent wrote:
Is reindex table redundant after vacuum(analyse,verbose)?
Instead of "redundant", I'd call it "backwards", since doing a
vacuum(analyse,verbose) on a freshly reindexed table seems more fruitful.
--
Angular momentum makes the world go 'round.
Is reindex table redundant after vacuum(analyse,verbose)?
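To make the ordering point concrete, the suggestion above amounts to rebuilding the
indexes first and only then gathering statistics, roughly (table name is a placeholder):

REINDEX TABLE mytable;
VACUUM (ANALYZE, VERBOSE) mytable;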
On 6/14/19 11:23 AM, Shane Duan wrote:
Thanks, Adrian.
I am using prebuilt packages for Red Hat 7 (using yum install;
https://www.postgresql.org/download/linux/redhat/) and Ubuntu
18.04 (using apt-get install;
https://www.postgresql.org/download/linux/ubuntu/) on Linux. For
Windows, I am downloa
Thanks, Adrian.
I am using prebuilt packages for Red Hat 7 (using yum install;
https://www.postgresql.org/download/linux/redhat/) and Ubuntu 18.04 (using
apt-get install; https://www.postgresql.org/download/linux/ubuntu/) on
Linux. For Windows, I am downloading pre-built package from EnterpriseDB
foll
On 6/14/19 9:35 AM, Shane Duan wrote:
Is the default PostgreSQL installer built with OpenSSL 1.0 or 1.1? I
checked on Windows, and it seems it was built with OpenSSL 1.0.2g. If that is
true, is there any plan to pre-build the next minor release (10.x) with the
latest OpenSSL 1.1? OpenSSL 1.0 will be deprecated
Is the default PostgreSQL installer built with OpenSSL 1.0 or 1.1? I
checked on Windows, and it seems it was built with OpenSSL 1.0.2g. If that is true,
is there any plan to pre-build the next minor release (10.x) with the latest
OpenSSL 1.1? OpenSSL 1.0 will be deprecated by the end of 2019...
Thanks,
Shane
On 2019-06-14 16:01:40 +0200, Tiemen Ruiten wrote:
> FS is ZFS, the dataset with the PGDATA directory on it has the following
> properties (only non-default listed):
[...]
> My problem is that checkpoints are taking a long time. Even when I run a few
> manual checkpoints one after the other, they k
Greetings,
* Tiemen Ruiten (t.rui...@tech-lab.io) wrote:
> checkpoint_timeout = 60min
That seems like a pretty long timeout.
> My problem is that checkpoints are taking a long time. Even when I run a
> few manual checkpoints one after the other, they keep taking very long, up
> to 10 minutes:
Y
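For reference, the checkpoint spacing being discussed is controlled by a few settings
that can be changed without a restart; an illustrative sketch (values are placeholders,
not recommendations from this thread):

ALTER SYSTEM SET checkpoint_timeout = '15min';
ALTER SYSTEM SET checkpoint_completion_target = 0.9;
ALTER SYSTEM SET max_wal_size = '8GB';
SELECT pg_reload_conf();  -- these settings take effect on configuration reload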
Greetings,
* Ravi Krishna (ravikris...@mail.com) wrote:
> On 6/14/19 10:01 AM, Tiemen Ruiten wrote:
> >LOG: checkpoint starting: immediate force wait
>
> Does this mean that the DB is blocked until the checkpoint completes?
> Years ago
> Informix used to have this issue until they fixed it around
On 6/14/19 10:01 AM, Tiemen Ruiten wrote:
LOG: checkpoint starting: immediate force wait
Does this mean that the DB is blocked until the checkpoint completes?
Years ago
Informix used to have this issue until they fixed it around 2006.
Hello,
I setup a new 3-node cluster with the following specifications:
2x Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz (2*20 cores)
128 GB RAM
8x Crucial MX500 1TB SSDs
FS is ZFS, the dataset with the PGDATA directory on it has the following
properties (only non-default listed):
NAMEPROPE
On 6/14/19 12:38 AM, Daulat Ram wrote:
Hello team,
Please suggest how to connect Toad Edge to PostgreSQL running in a
Docker container.
Use the link below and fill in the appropriate settings:
https://support.quest.com/technical-documents/toad-edge/2.0.9/user-guide/3#TOPIC-1189203
Regards,
Laurenz Albe writes:
> Alexander Farber wrote:
>> But creating an SQL function fails -
>>
>> words_ru=> CREATE OR REPLACE FUNCTION words_all_letters()
>> words_ru-> RETURNS array AS
>> words_ru-> $func$
>> words_ru$> SELECT ARRAY[...
> "array" is not an existing data type.
> You
Alexander Farber wrote:
> But creating an SQL function fails -
>
> words_ru=> CREATE OR REPLACE FUNCTION words_all_letters()
> words_ru-> RETURNS array AS
> words_ru-> $func$
> words_ru$> SELECT ARRAY[...
"array" is not an existing data type.
You'll have to specify an array of wh
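Following that point, a minimal sketch of the corrected declaration: RETURNS must name
a concrete array type such as text[], not the bare keyword array (letter list shortened
here for illustration):

CREATE OR REPLACE FUNCTION words_all_letters()
RETURNS text[] AS
$func$
    SELECT ARRAY['*', '*', 'А', 'А', 'Б', 'Б'];  -- shortened letter list
$func$ LANGUAGE sql IMMUTABLE;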
Hello,
in PostgreSQL 10.8 the following works -
words_ru=> SELECT ARRAY[
words_ru-> '*', '*', 'А', 'А', 'А', 'А', 'А', 'А', 'А', 'А',
words_ru-> 'Б', 'Б', 'В', 'В', 'В', 'В', 'Г', 'Г', 'Д', 'Д',
words_ru-> 'Д', 'Д', 'Е', 'Е', 'Е', 'Е', 'Е',
Hello team,
Please suggest how to connect Toad Edge to PostgreSQL running in a Docker
container.
Regards,
Daulat