ar with OLTP. Also, I don't expect many
people will use existing popular SaaS for data warehousing like Amazon
Redshift, Azure Synapse, Google BigQuery and Snowflake, rather than
build their analytics databases on public IaaS or on-premises.
Regards
MauMau
he
troubleshooting when the user asks why they experience unsteady
response time.
Regards
MauMau
o provide each catcache
with a separate memory context, which is the child of
CacheMemoryContext. This gives slight optimization by using the slab
context (slab.c) for a catcache with fixed-sized tuples. But that'd
be a bit complex, I'm afraid for PG 12.
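The benefit of a slab context for fixed-size catcache tuples can be sketched with a toy free list. This is an illustration of the general slab idea only, not PostgreSQL's actual slab.c implementation; the class and method names are invented for the example.

```python
# Toy fixed-size "slab" free list: because every chunk has the same size,
# alloc and free are O(1) pops/pushes with no per-chunk size bookkeeping.
class ToySlab:
    def __init__(self, chunk_size):
        self.chunk_size = chunk_size
        self.free_list = []          # recycled chunks, reused in O(1)
        self.allocated = 0

    def alloc(self):
        # Reuse a freed chunk if one is available; otherwise carve a new one.
        if self.free_list:
            chunk = self.free_list.pop()
        else:
            chunk = bytearray(self.chunk_size)
        self.allocated += 1
        return chunk

    def free(self, chunk):
        # No size header needed: all chunks are interchangeable.
        self.allocated -= 1
        self.free_list.append(chunk)

slab = ToySlab(chunk_size=128)
a = slab.alloc()
b = slab.alloc()
slab.free(a)
c = slab.alloc()   # c reuses a's storage via the free list
print(c is a, slab.allocated)   # → True 2
```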
Regards
MauMau
From: Thomas Munro
> Ok, back-patched.
Thank you very much!
> It seems like the other patch[1] might need the same treatment,
right?
I believe so, because that patch addresses the same root cause.
Regards
MauMau
rectly by having the
> subtransaction-for-savepoint appear *before* the internal
subtransaction,
> so a subsequent "SELECT 0/0" doesn't remove the user declared
> savepoint.)
That sounds interesting.
* How can PLs like PL/pgSQL utilize this to continue upon an SQL
failure? They don't call StartTransactionCommand.
* How can psql make use of this feature for its ON_ERROR_ROLLBACK?
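For reference, psql's ON_ERROR_ROLLBACK works by issuing a savepoint before each statement and rolling back to it on failure. A rough Python model of that control flow (all names here are invented for illustration; the real logic lives in psql's C code):

```python
# Toy model of psql's ON_ERROR_ROLLBACK: wrap each statement in a savepoint
# so a failure discards only that statement, not the whole transaction.
class ToySession:
    def __init__(self):
        self.committed = []
        self.pending = []

    def run_transaction(self, statements, on_error_rollback=True):
        for stmt in statements:
            snapshot = list(self.pending)   # SAVEPOINT before each statement
            try:
                if stmt == "SELECT 0/0":
                    raise ZeroDivisionError("division by zero")
                self.pending.append(stmt)   # RELEASE on success
            except ZeroDivisionError:
                if not on_error_rollback:
                    self.pending = []       # whole transaction aborts
                    return False
                self.pending = snapshot     # ROLLBACK TO the savepoint
        self.committed.extend(self.pending) # COMMIT
        self.pending = []
        return True

s = ToySession()
s.run_transaction(["INSERT 1", "SELECT 0/0", "INSERT 2"])
print(s.committed)   # the failing statement is skipped, the rest commit
```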
Regards
MauMau
arding records all DDL statements and re-send them to the down nodes
later.
> Doing this imposes a cost at
> DDL-execution-time only, which seems much better than imposing the
cost
> of translating name to OID on every server for every query.
Agreed.
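The record-and-replay idea discussed above can be sketched in a few lines. This is purely hypothetical illustration, not XC/XL code; the class and function names are invented.

```python
# Hypothetical DDL log: record every DDL statement in order, and when a down
# node rejoins, replay only the tail it missed.
class DDLLog:
    def __init__(self):
        self.entries = []            # ordered DDL statements

    def record(self, stmt):
        self.entries.append(stmt)
        return len(self.entries)     # log position after this entry

    def replay_from(self, last_applied):
        # Statements the down node missed, in original execution order.
        return self.entries[last_applied:]

log = DDLLog()
log.record("CREATE TABLE t (id int)")
pos = log.record("ALTER TABLE t ADD COLUMN v text")
log.record("CREATE INDEX t_v_idx ON t (v)")
# A node that went down after the second statement replays only the third.
print(log.replay_from(pos))   # → ['CREATE INDEX t_v_idx ON t (v)']
```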
Regards
MauMau
From: Simon Riggs
On 5 June 2018 at 17:14, MauMau wrote:
>> Furthermore, an extra hop and double parsing/planning could matter
for
>> analytic queries, too. For example, SAP HANA boasts of scanning 1
>> billion rows in one second. In HANA's scaleout architecture, an
>
or extra GUC switches to prevent any kind of inconsistent
> operations.
Yes, I hope our deadlock detection/resolution can be ported to
PostgreSQL. But I'm also concerned like you, because Symfoware is
locking-based, not MVCC-based.
Regards
MauMau
coordinator handles the Row Description message ('T')
from the data node. I guess the parsing is necessary to process type
names combined with type modifiers, e.g. "char(100)".
create_tuple_desc
parseTypeString
typeStringToTypeName
raw_parser
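To show why a full parse is involved, here is a deliberately simplified sketch of splitting a type string into a base name and type modifier. This is not what typeStringToTypeName actually does; real PostgreSQL parsing handles far more (schema-qualified names, arrays, multi-part modifiers like numeric(10,2)), and the function name below is invented.

```python
import re

# Simplified illustration: split "char(100)" into a type name and a typmod.
def parse_type_string(s):
    m = re.fullmatch(r"\s*([a-zA-Z_][\w ]*?)\s*(?:\(\s*(\d+)\s*\))?\s*", s)
    if not m:
        raise ValueError(f"cannot parse type string: {s!r}")
    name, typmod = m.group(1), m.group(2)
    return name, int(typmod) if typmod else None

print(parse_type_string("char(100)"))   # → ('char', 100)
print(parse_type_string("integer"))     # → ('integer', None)
```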
Regards
MauMau
if it
needs to produce or consume tuples, or both.
Note that there may be multiple shared_queues used even for a single
query. So the value should be set taking into account the number of
connections it can accept and the expected number of such joins
occurring simultaneously.
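As a back-of-the-envelope sketch of that sizing rule (the function and parameter names here are invented, not an actual GUC formula):

```python
# Rough sizing: one query may use several shared queues, so budget for the
# accepted connections times the expected simultaneous queue-using joins.
def shared_queues_needed(connections, expected_joins_per_query):
    return connections * expected_joins_per_query

print(shared_queues_needed(100, 3))   # → 300
```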
------
Regards
MauMau
Recognition in C++/Java/Go/Scala".
Proceedings of Scala Days 2011
Regards
MauMau
e. But I don't preclude a central node.
Some node needs to manage sequences, and it may as well manage the
system catalog.
Regards
MauMau
-----Original Message-----
From: Ashutosh Bapat
Sent: Saturday, June 2, 2018 1:00 AM
To: Tom Lane
Cc: MauMau ; Robert Haas ; PostgreSQL Hackers
Subject: Re: I'd like to
But managing the catalog at one place and using
the same OID values seems concise to me as a concept.
Regards
MauMau
From: Robert Haas
On Thu, May 31, 2018 at 8:12 AM, MauMau wrote:
>> Oh, I didn't know you support FDW approach mainly for analytics. I
>> guessed the first target was OLTP read-write scalability.
>
> That seems like a harder target to me, because you will have an
extra
>
r FDW, I think we should leverage the
code and idea of XC/XL.
Regards
MauMau
1d...@mail.gmail.com
Michael Paquier
9 years, 2009-08-07
https://www.postgresql.org/message-id/c64c5f8b0908062031k3ff48428j824a9a46f2818...@mail.gmail.com
Tomas Vondra
11 years, 2007-01-25
https://www.postgresql.org/message-id/20070125155424.gg64...@nasby.net
Regards
MauMau
2018-05-31 22:44 GMT+09:00, Robert Haas :
> On Thu, May 31, 2018 at 8:12 AM, MauMau wrote:
>> Oh, I didn't know you support FDW approach mainly for analytics. I
>> guessed the first target was OLTP read-write scalability.
>
> That seems like a harder target to me, beca
for looking at the chart and telling me the figures.
> I
> think it's pretty clear that we need to both continue to improve some
> of these major new features we've added and at the same time keep
> introducing even more new things if we want to continue to gain market
> share and mind share. I hope that features like scale-out and also
> zheap are going to help us continue to whittle away at the gap, and I
> look forward to seeing what else anyone may have in mind.
Definitely. I couldn't agree more.
Regards
MauMau
cle=1476.14 MySQL=1321.13 SQL Server=??
MongoDB=?? PostgreSQL=288.66
(Oracle / PostgreSQL ratio is 5.1)
Regards
MauMau
t may be after PGCon.
Regards
MauMau
dirty buffers too soon.
However, I have a question. How does the truncation failure in
autovacuum lead to duplicate keys? The failed-to-be-truncated pages
should only contain dead tuples, so pg_dump's table scan should ignore
dead tuples in those pages.
Regards
MauMau
think this doesn't need more in-depth review, so I'll mark this
Ready for Committer after confirming the amendment.
I'm sorry for my late reply; I was off all last week.
And thank you for your review. All modifications are done.
Regards
MauMau
pgtypes_freemem_v3.patch
Description: Binary data
From: Robert Haas
I also said it would be worse on spinning disks.
Also, Yoshimi Ichiyanagi did not find it to be true even on NVRAM.
Yes, let me withdraw this proposal. I couldn't see any performance
difference even with an ext4 volume on PCIe flash memory.
Regards
MauMau
ge is necessary.
Regards
MauMau