OK, I see. Thanks for sharing the information.

Ling Miao
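P.S. For anyone who wants to reproduce Thomas's fix, a minimal sketch of the engine change on the MySQL side (the table name is taken from the profile further down this thread; adjust it to your own, and note that MyISAM gives up InnoDB's transactional guarantees):

    -- Switch the source table's storage engine from InnoDB to MyISAM.
    ALTER TABLE yj_process_channel ENGINE = MyISAM;

    -- Verify that the change took effect.
    SELECT table_name, engine
    FROM information_schema.tables
    WHERE table_name = 'yj_process_channel';

Then rebuild the mapping table in Doris, as Thomas describes below.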
Thomas Cai <thomas...@126.com> wrote on Tue, Apr 6, 2021 at 11:40 AM:

> Ling Miao,
>
> Yes, your understanding is correct. Almost the same based on the test
> result.
>
> Thomas
>
> On 04/06/2021 11:21, ling miao <emmymia...@gmail.com> wrote:
>
>> Thanks for the info about the limit clause. Actually I did try to query
>> the whole set of records, without a limit, in both MySQL and Doris. The
>> situation was almost the same.
>
> If you use Doris to query the whole MySQL table without a limit, a large
> performance gap is possible: the entire table's data has to be
> transferred, so there is network and serialization overhead along the way.
>
>> To share with you: I changed the table engine of the original MySQL
>> table from InnoDB to MyISAM, then rebuilt the mapping table in Doris.
>> The query performance is almost the same as in MySQL directly.
>
> Do you mean that after the engine of the original MySQL table was
> changed, reading the MySQL data directly and reading it through Doris
> became equally fast?
>
> Ling Miao
>
> Thomas Cai <thomas...@126.com> wrote on Sat, Apr 3, 2021 at 11:39 AM:
>
>> Hi All,
>>
>> To share with you: I changed the table engine of the original MySQL
>> table from InnoDB to MyISAM, then rebuilt the mapping table in Doris.
>> The query performance is almost the same as in MySQL directly.
>>
>> Best Regards,
>> Thomas
>>
>> On 04/01/2021 18:05, Thomas Cai wrote:
>> Dear Lee,
>>
>> Thanks for the info about the limit clause. Actually I did try to query
>> the whole set of records, without a limit, in both MySQL and Doris. The
>> situation was almost the same.
>>
>> Regards,
>> Thomas
>>
>> On 04/01/2021 11:50, Lee Happen wrote:
>> Hi Thomas Cai,
>>
>> I did the work to push down the limit clause for the ODBC table and the
>> MySQL table. You can refer to this PR:
>> https://github.com/apache/incubator-doris/pull/4707/files
>>
>> If you want better performance from a mapping table with a limit clause,
>> please upgrade Doris to a version after 0.13.15.
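For readers skimming the archive: the pushdown Lee refers to roughly changes the statement Doris sends down to MySQL. A sketch, reusing the table and columns from the profile below (the exact SQL Doris generates may differ by version):

    -- Without the pushdown, Doris asks MySQL for every row and applies
    -- the limit itself, so all ~150K rows cross the network:
    SELECT projectid, projectname, uid, Cst_Proj_ID, Success_Bb_Date
    FROM yj_process_channel;

    -- With the pushdown (0.13.15 and later), the limit travels to MySQL,
    -- so only 2000 rows are transferred:
    SELECT projectid, projectname, uid, Cst_Proj_ID, Success_Bb_Date
    FROM yj_process_channel
    LIMIT 2000;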
>> ________________________________
>> From: Thomas Cai <thomas...@126.com>
>> Sent: March 31, 2021 12:44
>> To: ling miao <lingm...@apache.org>
>> Cc: dev@doris.apache.org <dev@doris.apache.org>
>> Subject: Re: Re: The performance issue about Doris mapping table
>>
>> Dear Ling Miao,
>>
>> Thanks for your advice. Here is the result from the profile. Any more
>> suggestions?
>>
>> Query:
>>   Summary:
>>     Query ID: 3ef6a5a504e4466e-aeffd78390f003e6
>>     Start Time: 2021-03-31 20:24:25
>>     End Time: 2021-03-31 20:24:34
>>     Total: 9s452ms
>>     Query Type: Query
>>     Query State: EOF
>>     Doris Version: 0.12.0-rc03
>>     User: root
>>     Default Db: default_cluster:yongjin_doris
>>     Sql Statement: select projectid,projectname,uid,Cst_Proj_ID,Success_Bb_Date from yj_process_channel limit 2000
>>   Execution Profile 3ef6a5a504e4466e-aeffd78390f003e6: (Active: 9s451ms, % non-child: 0.00%)
>>     Fragment 0:
>>       Instance 3ef6a5a504e4466e-aeffd78390f003e7 (host=TNetworkAddress(hostname:172.20.0.12, port:9060)): (Active: 9s438ms, % non-child: 0.00%)
>>         - PeakUsedReservation: 0.00
>>         - RowsProduced: 2.0K (2000)
>>         - AverageThreadTokens: 4607182418800017400.00
>>         - PeakReservation: 0.00
>>         BlockMgr:
>>           - BlocksCreated: 0
>>           - MemoryLimit: 2.00 GB
>>           - BlockWritesOutstanding: 0
>>           - BytesWritten: 0.00
>>           - TotalEncryptionTime: 0ns
>>           - BufferedPins: 0
>>           - TotalReadBlockTime: 0ns
>>           - TotalBufferWaitTime: 0ns
>>           - BlocksRecycled: 0
>>           - TotalIntegrityCheckTime: 0ns
>>           - MaxBlockSize: 8.00 MB
>>         DataBufferSender (dst_fragment_instance_id=3ef6a5a504e4466e-aeffd78390f003e7):
>>         MYSQL_SCAN_NODE (id=0): (Active: 9s437ms, % non-child: 0.00%)
>>           - NumThread: 0
>>           - TotalRawReadTime(*): 0ns
>>           - ScanRangesComplete: 0
>>           - RowsReturned: 2.0K (2000)
>>           - ScannerThreadsInvoluntaryContextSwitches: 0
>>           - RowsRead: 0
>>           - PerReadThreadRawHdfsThroughput: 0.0 /sec
>>           - ScannerThreadsVoluntaryContextSwitches: 0
>>           - TotalReadThroughput: 0.0 /sec
>>           - ScannerThreadsTotalWallClockTime: 0ns
>>           - MaterializeTupleTime(*): 616.430us
>>           - ScannerThreadsUserTime: 0ns
>>           - ScannerThreadsSysTime: 0ns
>>           - MemoryUsed: 0.00
>>           - BytesRead: 0.00
>>           - RowsReturnedRate: 211
>>
>> At 2021-03-31 19:43:06, "ling miao" <lingm...@apache.org> wrote:
>>
>> Hi Thomas Cai,
>>
>> Doris's mapping-table read performance is indeed slightly worse than
>> reading the MySQL table directly, but your case should not be affected
>> this much. It may be that some optimizations from our later versions
>> are not in 0.12.
>>
>> You can use this document to print out the profile of the running query
>> and see where the whole query is slow:
>> http://doris.apache.org/master/en/administrator-guide/running-profile.html#noun-interpretation
>>
>> Thomas Cai <thomas...@126.com> wrote on Wed, Mar 31, 2021 at 7:17 PM:
>>
>> Dear All,
>>
>> I hit a performance issue when selecting records (LIMIT 1000) from a
>> Doris mapping table (MySQL, around 150K records). It took less than 2s
>> to fetch the same records from the MySQL table directly, but about 20s
>> from the Doris mapping table. My Doris version is 0.12, and I tried
>> changing several Doris configurations/variables; unfortunately it
>> didn't help.
>>
>> Has anyone met the same situation before and can offer any suggestions?
>> Can a proper configuration setting solve the issue, or do I have to
>> wait for JDBC for SQL in the 0.14 release?
>>
>> Any feedback is appreciated.
>>
>> Best Regards,
>> Thomas
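For completeness: on the 0.12-era build discussed above, a profile like the one in this thread can be produced roughly as follows (a sketch based on the running-profile document Ling Miao linked; the session variable was later renamed, so check the docs for your version):

    -- Enable profile reporting for the current session.
    SET is_report_success = true;

    -- Run the slow query; its profile then appears in the FE web UI
    -- under the "Query Profile" tab.
    SELECT projectid, projectname, uid, Cst_Proj_ID, Success_Bb_Date
    FROM yj_process_channel
    LIMIT 2000;

Reading the profile above this way: almost all of the 9s451ms total is spent in MYSQL_SCAN_NODE (Active: 9s437ms), the node that reads from the remote MySQL table, so the time goes to the remote read itself rather than to any Doris-side processing.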