mikemccand commented on PR #15549:
URL: https://github.com/apache/lucene/pull/15549#issuecomment-3920976077

   Thanks @Pulkitg64, this is a very exciting change.  It's frustrating to 
receive fp16 vectors (on our customer-facing product search team at Amazon) for 
indexing and have to fluff them up entirely to fp32, before then quantizing 
them down to saner bit widths (1, 2, 4, or 8 bits per dim).  And because these 
fluffy vectors take 2X the storage they really should, we [build ways to drop 
them from read-only replica 
indices](https://github.com/apache/lucene/issues/13158).
   
   It would be so much better if Lucene could handle incoming vectors entirely 
as their original fp16 form (this PR).
   
   So, it's JDK 27 that will introduce Panama access to fp16 SIMD 
capabilities?  And modern CPUs generally have good support for fp16?  And 
today (pre-JDK 27) this PR must emulate the fp16 operations in simple scalar 
Java code?  And that's why it's slower?
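   Just to make sure I understand the emulation cost: a hypothetical sketch 
(not this PR's actual code) of what pre-JDK-27 scalar fp16 arithmetic looks 
like.  Each half-float lives in a `short`, and `Float.float16ToFloat` (JDK 
20+) widens it to fp32 before every multiply-add, since no fp16 SIMD ops are 
exposed yet:

```java
// Hypothetical sketch of scalar fp16 dot-product emulation, pre-JDK 27.
// Each fp16 value is stored as its raw 16-bit pattern in a short and
// widened to fp32 per element via Float.float16ToFloat (JDK 20+).
public class Fp16DotSketch {
    static float dotProduct(short[] a, short[] b) {
        float sum = 0f;
        for (int i = 0; i < a.length; i++) {
            sum += Float.float16ToFloat(a[i]) * Float.float16ToFloat(b[i]);
        }
        return sum;
    }

    public static void main(String[] args) {
        // fp16 bit patterns: 0x3C00 == 1.0, 0x4000 == 2.0
        short[] a = { 0x3C00, 0x4000 };
        short[] b = { 0x4000, 0x4000 };
        System.out.println(dotProduct(a, b)); // 1*2 + 2*2 = 6.0
    }
}
```

The per-element widening conversion is presumably where the slowdown comes 
from, versus native fp16 SIMD lanes.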
   
   If we enabled users to swap in their own `PanamaVectorUtilSupport` (#15508 
-- whoa, merged!), users could in theory build a gcc-compiled, auto-vectorized 
native implementation, make it accessible through JNA/JNI, and get good 
performance before JDK 27?
   
   I haven't looked closely at the code changes yet ... just trying to get a 
grip on the high level situation. Thanks @Pulkitg64.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
