It seems like, with an LLM as the centerpiece, these systems would still be
subject to hallucinations for the time being. Maybe in a situation where some
errors are acceptable, like a mass of notes, it would be OK to use something
like this.
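
Matt's point below about measuring prediction with compression comes down to a
simple accounting: an ideal coder spends about -log2(p) bits on each symbol,
where p is the probability the model assigned to it, so a better predictor
yields a smaller file. Here is a minimal sketch of that accounting, with a
trivial adaptive byte model standing in for the LLM (the model and function
names are just for illustration, not anything from the paper):

# Toy illustration of "prediction = compression": an ideal arithmetic coder
# spends -log2 p(symbol) bits per symbol, so better prediction means smaller
# output. A simple adaptive order-0 byte model stands in for the LLM here;
# a real system would supply p(next symbol | context) from the network.
import math

def compressed_size_bits(data: bytes) -> float:
    counts = [1] * 256          # Laplace-smoothed byte frequencies
    total = 256
    bits = 0.0
    for b in data:
        p = counts[b] / total   # model's predicted probability of this byte
        bits += -math.log2(p)   # ideal code length for this byte
        counts[b] += 1          # update the model after seeing the byte
        total += 1
    return bits

if __name__ == "__main__":
    text = b"the quick brown fox jumps over the lazy dog " * 20
    bits = compressed_size_bits(text)
    print(f"{len(text)} bytes -> {bits / 8:.0f} bytes "
          f"({bits / len(text):.2f} bits/byte)")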

On Wed, May 14, 2025 at 3:01 PM Matt Mahoney <mattmahone...@gmail.com>
wrote:

> Researchers in China demonstrate using LLMs to compress text, images,
> audio, and video. In the abstract of the linked paper they claim 3x better
> text compression than zpaq. In the preprint they also claim to improve on
> lossless images, video, and audio by the simple method of converting pixels
> or audio samples to characters in 2K chunks to fit LLaMA-8B's context
> window.
>
>
> https://techxplore.com/news/2025-05-algorithm-based-llms-lossless-compression.html
>
> Of course this does not include the size of the model. Still, they make
> the important point that compression = understanding. We test understanding
> using prediction, and we measure prediction using compression.
>
> -- Matt Mahoney, mattmahone...@gmail.com
>
