If the scheduling model increases the number of vsetvls, we shouldn't set it as the default scheduling model.
juzhe.zh...@rivai.ai

From: Robin Dapp
Date: 2024-02-26 21:29
To: Edwin Lu; gcc-patches
CC: rdapp.gcc; gnu-toolchain; pan2.li; juzhe.zh...@rivai.ai
Subject: Re: [PATCH] RISC-V: Update test expectancies with recent scheduler change

On 2/24/24 00:10, Edwin Lu wrote:
> Given the recent change with adding the scheduler pipeline descriptions,
> many scan-dump failures emerged. Relax the expected assembler output
> conditions on the affected tests to reduce noise.

I'm not entirely sure yet about relaxing the scans like this. There seem to be uarchs that want to minimize vsetvls under all circumstances, while others don't seem to care all that much. We could (not must) assume that the tests that now regress were written with this minimization aspect in mind and that we'd want to be sure we still manage to emit the minimal number of vsetvls.

Why is the new upper bound acceptable? What if a vector_load cost of 12 (or so) causes even more vsetvls? The 6 in generic_ooo was chosen more or less arbitrarily.

My suggestion before was to create another sched model that has load costs like before and run the regressing tests with that model. That's of course also not really ideal and a bit shoehorned, in particular as no scheduling also increases the number of vsetvls.

Juzhe: What's your intention with those tests? I'd suppose you want the vsetvl number to be minimal here, not higher? Did you plan to add a particular scheduling model, or are you happy with the default (all 1) latencies?

Regards
Robin
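For context, the kind of relaxation under discussion typically turns an exact scan-assembler count into an upper bound. A hypothetical sketch of what such a test change looks like (the function, options, and counts are illustrative, not taken from the patch; `scan-assembler-bound` is the GCC testsuite directive for bounded counts):

```c
/* Hypothetical testsuite fragment, illustrative only.  */
/* { dg-do compile } */
/* { dg-options "-march=rv64gcv -mabi=lp64d -O2" } */

void
foo (int *a, int *b, int n)
{
  for (int i = 0; i < n; i++)
    a[i] = b[i] + 1;
}

/* Exact count, brittle under scheduling changes:
     { dg-final { scan-assembler-times {vsetvli} 1 } }
   Relaxed upper bound, tolerant of scheduler-induced extras:
     { dg-final { scan-assembler-bound {vsetvli} <= 2 } } */
```

The trade-off Robin raises is exactly this: the bound silences the noise, but nothing in the test then guarantees the vsetvl count is still minimal, nor that a different cost model wouldn't push it past the chosen bound.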