Regarding no. 2, I feel like this is an interesting write-up:
https://dl.acm.org/doi/10.1145/1045339.1045340 ("Artificial Intelligence
Meets Natural Stupidity"). It is about how naming a piece of software by
its purpose, for example, naming a program "intelligence" or "understand"
or "think" or "comprehend", causes people to project ...
On Feb 26, 2024, at 4:31 PM, Karl Benedict wrote:
> Eric - it sounds like we may be at about the same point: I am wanting to
> start working in the area of fine-tuning, specifically focusing on
> ChatGPT-generated data management plans that would then be revised by
> experts and used as a fine-tuning data corpus for (hopefully) improving
> the draft DMP ...
On Feb 26, 2024, at 4:05 PM, Eric Lease Morgan wrote:
> Who out here in Code4Lib Land is practicing with either one or both of the
> following things: 1) fine-tuning large language models, or 2)
> retrieval-augmented generation (RAG). If there is somebody out there, then
> I'd love to chat...
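Karl's described loop above (ChatGPT drafts, expert revisions, fine-tuning
corpus) maps onto the chat-format JSONL that OpenAI's fine-tuning jobs
consume. A minimal sketch of building such a file; the pairs list, filenames,
and model name are placeholder assumptions, not anything from the thread:

  # Write (prompt, expert-revised plan) pairs as chat-format JSONL,
  # the training format OpenAI fine-tuning jobs expect.
  import json

  pairs = [
      ("Draft a DMP for a three-year ecology study producing sensor CSVs.",
       "EXPERT-REVISED PLAN TEXT ..."),
      # ... more (prompt, expert-revised plan) pairs
  ]

  with open("dmp_finetune.jsonl", "w") as out:
      for prompt, revised in pairs:
          out.write(json.dumps({"messages": [
              {"role": "system", "content": "You write data management plans."},
              {"role": "user", "content": prompt},
              {"role": "assistant", "content": revised},
          ]}) + "\n")

  # Upload and start the job with the openai v1 client:
  # from openai import OpenAI
  # client = OpenAI()
  # f = client.files.create(file=open("dmp_finetune.jsonl", "rb"),
  #                         purpose="fine-tune")
  # client.fine_tuning.jobs.create(training_file=f.id, model="gpt-3.5-turbo")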
From: Code for Libraries On Behalf Of Alex Dunn
Sent: Tuesday, February 27, 2024 6:03 AM
To: CODE4LIB@LISTS.CLIR.ORG
Subject: Re: [CODE4LIB] generative ai; fine-tuning and rag
I think it's important to ask first what your aims are with an LLM.
Personally I have never seen a valid use-case for ChatGPT or any of its
varieties in libraries. These models are, ultimately, little more than
glorified text compressors[1] that perform pattern-matching and which do
not, and cannot ...
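The compression framing in [1] can be made literal: driving an arithmetic
coder with a language model costs about -log2 p(token) bits per token, so a
model's cross-entropy on a text is the compressed size that model could
achieve. A small sketch of measuring that, assuming the transformers package
and GPT-2 purely as a stand-in model; the sample sentence is invented:

  # Cross-entropy as code length: lower loss = better compression.
  import math
  import torch
  from transformers import GPT2LMHeadModel, GPT2TokenizerFast

  tok = GPT2TokenizerFast.from_pretrained("gpt2")
  model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

  text = "The library catalog is a tool for discovery, not an inventory."
  ids = tok(text, return_tensors="pt").input_ids

  with torch.no_grad():
      loss = model(ids, labels=ids).loss   # mean nats per predicted token
  bits_per_token = loss.item() / math.log(2)
  total_bytes = bits_per_token * (ids.shape[1] - 1) / 8
  print(f"{bits_per_token:.2f} bits/token; ~{total_bytes:.0f} bytes "
        f"vs {len(text.encode())} bytes raw")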
Gary Price did a presentation for CDL last week; the bullet points are here:
bit.ly/CDLgpt (warning: that's a bit of a firehose). Gary mentioned RAG a
bit. Based on his suggestion, I've been trying out phind.com, which uses
RAG. Phind works both as an AI chat and as a search engine. ...
Talpa is a fascinating project--thanks for working on it! I've been trying
to spend more time with various LLMs lately in an attempt to be able to
speak less foolishly about them (eh), but my eight-year-old child
unwittingly provided the most interesting use case I've attempted so far:
"In Which Bi
I and other LibraryThing developers have done a lot of this work in the
process of making Talpa.ai, so here's my quick take:
1. Fine-tuning is a poor way to add knowledge to an LLM, especially at
scale. It's mostly useful for controlling how LLM "thinking" is
presented—for example, ensuring clean ...
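One way to read that point concretely: the examples you can realistically
write for a fine-tune teach the shape of an answer, not new facts. A
hypothetical pair of chat-format training lines that enforce a consistent
reply format, with the facts themselves still expected to arrive via the
prompt (e.g., retrieval) at inference time; all contents are invented:

  # Fine-tuning for presentation, not knowledge: every example repeats
  # the same answer shape; the contents are placeholders.
  import json

  style_examples = [
      {"messages": [
          {"role": "user", "content": "Context: ... When was MARC introduced?"},
          {"role": "assistant", "content": "Answer: In the 1960s.\nSource: provided context"},
      ]},
      {"messages": [
          {"role": "user", "content": "Context: ... Who maintains Dublin Core?"},
          {"role": "assistant", "content": "Answer: DCMI.\nSource: provided context"},
      ]},
  ]
  with open("style_tune.jsonl", "w") as out:
      for ex in style_examples:
          out.write(json.dumps(ex) + "\n")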
I took note of something recently from the Library Innovation Lab at Harvard
Law School: WARC-GPT: An Open-Source Tool for Exploring Web Archives Using AI.
It takes the contents of WARC files and feeds them into a Retrieval-Augmented
Generation tool. Been meaning to play with it as a way to enhance ...
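For anyone who hasn't seen the pattern, here is a minimal sketch of the
general RAG-over-WARC idea (not WARC-GPT's actual implementation): pull text
records out of a WARC file, embed them, and retrieve the nearest chunks to
stuff into an LLM prompt. Assumes the warcio and sentence-transformers
packages; the filename and question are placeholders:

  import numpy as np
  from warcio.archiveiterator import ArchiveIterator
  from sentence_transformers import SentenceTransformer

  chunks = []
  with open("example.warc.gz", "rb") as stream:
      for record in ArchiveIterator(stream):
          if record.rec_type != "response":
              continue
          text = record.content_stream().read().decode("utf-8", errors="ignore")
          # naive fixed-size chunking; real tools strip HTML first
          chunks += [text[i:i + 1000] for i in range(0, len(text), 1000)]

  model = SentenceTransformer("all-MiniLM-L6-v2")
  doc_vecs = model.encode(chunks, normalize_embeddings=True)

  question = "What did this site say about open access policies?"
  q_vec = model.encode([question], normalize_embeddings=True)[0]
  top = np.argsort(doc_vecs @ q_vec)[-3:][::-1]   # 3 nearest chunks by cosine

  prompt = "Answer using only these excerpts:\n\n"
  prompt += "\n---\n".join(chunks[i] for i in top)
  prompt += f"\n\nQuestion: {question}"
  # `prompt` would then go to whatever chat model you have on hand
  print(prompt[:500])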