On 06.09.2023 10:11, Boris Brezillon wrote:
>On Tue,  5 Sep 2023 19:45:24 +0100
>Adrián Larumbe <adrian.laru...@collabora.com> wrote:
>
>> The current implementation will try to pick the highest available size
>> display unit as soon as the BO size exceeds that of the previous
>> multiplier.
>> 
>> By selecting a higher threshold, we could show more accurate size numbers.
>> 
>> Signed-off-by: Adrián Larumbe <adrian.laru...@collabora.com>
>> ---
>>  drivers/gpu/drm/drm_file.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>> 
>> diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
>> index 762965e3d503..0b5fbd493e05 100644
>> --- a/drivers/gpu/drm/drm_file.c
>> +++ b/drivers/gpu/drm/drm_file.c
>> @@ -879,7 +879,7 @@ static void print_size(struct drm_printer *p, const char *stat,
>>      unsigned u;
>>  
>>      for (u = 0; u < ARRAY_SIZE(units) - 1; u++) {
>> -            if (sz < SZ_1K)
>> +            if (sz < (SZ_1K * 10000))
>>                      break;
>
>This threshold looks a bit random. How about picking a unit that allows
>us to print the size with no precision loss?
>
>       for (u = 0; u < ARRAY_SIZE(units) - 1; u++) {
>               if (sz & (SZ_1K - 1))
>                       break;
>       }

In this case I picked up on Rob Clark's suggestion of choosing a hard limit of
perhaps 10k or 100k times the current unit before moving on to the next one.
While the precision-preserving check guarantees that we never lose precision,
it would render rather long numbers in KiB for BOs that aren't a multiple of a MiB.
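Just to make the trade-off concrete, this is roughly what print_size() would
look like with your check dropped in. The function body outside the loop is
recalled from the current helper rather than copied, so please take it as a
sketch, not a new version of the patch:

	/*
	 * Sketch only: stop dividing as soon as doing so would drop
	 * low-order bits, so the printed number is always exact.
	 * Relies on SZ_1K (<linux/sizes.h>), div_u64() (<linux/math64.h>)
	 * and drm_printf() (<drm/drm_print.h>) already pulled in by drm_file.c.
	 */
	static void print_size(struct drm_printer *p, const char *stat,
			       const char *region, u64 sz)
	{
		const char *units[] = { "", " KiB", " MiB" };
		unsigned int u;

		for (u = 0; u < ARRAY_SIZE(units) - 1; u++) {
			/* Dividing further would lose precision, so stop here. */
			if (sz & (SZ_1K - 1))
				break;
			sz = div_u64(sz, SZ_1K);
		}

		drm_printf(p, "drm-%s-%s:\t%llu%s\n", stat, region, sz, units[u]);
	}

With that, a 2 MiB BO would still print as "2 MiB", but a BO of 2 MiB + 4 KiB
would print as "2052 KiB", which is exactly the long-number case I was
worried about.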

>>              sz = div_u64(sz, SZ_1K);
>>      }
