Hi!
> "Sergey" == Sergey Vojtovich writes:
Sergey> revision-id: 3a8f101a173b987b314ba5fcc5c74d80ab56802b
(mariadb-10.2.2-160-g3a8f101)
Sergey> parent(s): c846ebe9df9adc90a5c25be0dce816c5d4302794
Sergey> committer: Sergey Vojtovich
Sergey> timestamp: 2017-02-07 13:27:42 +0400
Sergey> message:
Hi Alexander,
I'm sorry to waste your time with formatting and/or coding style issues.
Is there a document that lists your best practices?
Regards,
Jérôme.
> -Original Message-
> From: Alexander Barkov [mailto:b...@mariadb.org]
> Sent: Wednesday, 8 February 2017 07:06
> To: jerome brauge;
On Tue, Feb 7, 2017 at 8:12 PM, jan wrote:
> # Write file to make mysql-test-run.pl expect the "crash", but don't
> # start it until it's told to
> -# We give 30 seconds for a clean shutdown because we do not want
> -# to redo-apply the pages of t1.ibd at the time of recovery.
> -# We want SQL
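For anyone unfamiliar with the mechanism being discussed: the "expect" file
is how a test tells mysql-test-run.pl that the server going away is
intentional. A minimal sketch of the usual pattern (the instance number and
the 30-second timeout are assumptions here, not taken from the actual patch):

  # Tell mysql-test-run.pl the server will go away, and that it must
  # wait until the test asks for a restart:
  --exec echo "wait" > $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
  # Request a clean shutdown, allowing up to 30 seconds:
  --shutdown_server 30
  --source include/wait_until_disconnected.inc
  # ... do whatever the test needs while the server is down ...
  # Let mysql-test-run.pl start the server again:
  --exec echo "restart" > $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
  --enable_reconnect
  --source include/wait_until_connected_again.inc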
Hi Varun,
On Tue, Feb 07, 2017 at 10:19:50PM +0100, Sergei Golubchik wrote:
>
> I almost replied to your email with "it's impossible, Archive did not
> have HA_RECORD_MUST_BE_CLEAN_ON_WRITE in the table_flags(), no engine
> did. So removal could not have changed anything".
>
> But then I noticed
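For readers following the thread: table_flags() is the handler method through
which an engine advertises its capabilities to the server, so a flag that no
engine ever returned is effectively dead code. A sketch of such a method; the
engine name (ha_example) and the flag set are illustrative, not Archive's
actual ones:

  /* Capability mask of a storage engine. HA_RECORD_MUST_BE_CLEAN_ON_WRITE
     is absent here, as it was in every in-tree engine, which is why
     removing the flag could not have changed behaviour. */
  ulonglong ha_example::table_flags() const
  {
    return HA_NO_TRANSACTIONS | HA_REC_NOT_IN_SEQ;
  }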
Hello:
In XtraDB, if we adjust the size of the doublewrite buffer and the related
variable (i.e. srv_doublewrite_batch_size), in theory we can get better
throughput when the buffer pool flushes to disk.
I wonder why the doublewrite buffer size is 2 blocks and each flush batch
is 120 pages (decided by srv_doublewrite_batch_size
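For reference, the sizing behind the question, as I understand it from the
5.6-era InnoDB/XtraDB source (names and defaults quoted from memory, so
treat them as assumptions):

  /* The doublewrite area is two blocks of one extent each: */
  #define TRX_SYS_DOUBLEWRITE_BLOCKS      2
  #define TRX_SYS_DOUBLEWRITE_BLOCK_SIZE  FSP_EXTENT_SIZE  /* 64 pages */
  /* => 2 * 64 = 128 page slots in total.  srv_doublewrite_batch_size
     (default 120) caps how many slots one batch flush may use; the
     remaining 128 - 120 = 8 slots are reserved for single-page flushes. */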
Hank -
A very similar idea has been implemented in XtraDB of Percona Server
5.7, see "Parallel Doublewrite" at
https://www.percona.com/doc/percona-server/5.7/performance/xtradb_performance_improvements_for_io-bound_highly-concurrent_workloads.html
AFAIK, this feature is not in XtraDB of MariaDB a