On 15 June 2018 at 14:57:46 GMT+02:00, Steven D'Aprano
wrote:
>Seriously, you are asking strangers to help you out of the goodness of
>their heart. If your intention was to send the message that you're lazy,
>drunk, or just don't give a damn about the question, you were successful.
Answers
On Sat, Jun 16, 2018 at 11:08 AM, Rick Johnson
wrote:
> On Friday, June 15, 2018 at 8:00:36 AM UTC-5, Steven D'Aprano wrote:
>> Seriously, you are asking strangers to help you out of the goodness of
>> their heart.
>
> Stop lying Steven.
>
> Nobody here has a heart.
>
> This is Usenet, dammit.
--
https://mail.python.org/mailman/listinfo/python-list
On Fri, 15 Jun 2018 05:23:17 -0700, mohan shanker wrote:
> anotology using text mining in python pls any one one described fo me
fi u cant b bothed to rite prper sentences y shld we be bothered to anwsr
ur qestion
Seriously, you are asking strangers to help you out of the goodness of
their heart. If your intention was to send the message that you're lazy,
drunk, or just don't give a damn about the question, you were successful.
On Wed, 2010-03-10 at 19:58 +0100, mk wrote:
> I need to do the following:
[...]
> Is there some good open source engine out there that would be suitable
> to the task at hand? Does anybody have experience with them?
It sounds like a full text search engine might do a bit more than you
need, but based on what you've described it could still be a reasonable fit.
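To make "full text search engine" concrete: the core data structure behind one is an inverted index, mapping each word to the set of documents containing it. Below is a toy standard-library sketch, not any particular engine's API; the names `build_index` and `search` are my own, and a real engine adds tokenization, stemming, and relevance ranking on top of this.

```python
# Toy inverted index: the core lookup structure of a full text search
# engine, in a few lines of standard-library Python.
from collections import defaultdict

def build_index(docs):
    # Map each lowercase word to the set of document ids containing it.
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, query):
    # Return ids of documents containing every query word (AND semantics).
    words = query.lower().split()
    if not words:
        return set()
    result = index.get(words[0], set()).copy()
    for w in words[1:]:
        result &= index.get(w, set())
    return result
```

For mk's use case the interesting part is that the same index also gives word frequencies cheaply: `len(index[word])` is the document frequency of `word`.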
On 2010-03-10 12:58 PM, mk wrote:
> Hello everyone,
> I need to do the following:
> (0. transform words in a document into word roots)
> 1. analyze a set of documents to see which words are highly frequent
> 2. detect clusters of those highly frequent words
> 3. map the clusters to some "special" keywords
> 4. rank the documents on cluste
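The steps mk lists can be sketched in pure standard-library Python. This is only an illustration of the pipeline shape, not a recommendation: the suffix-stripping `crude_stem`, the co-occurrence "clustering", and the top-N cutoff are naive stand-ins I've made up — a real system would use a proper stemmer (e.g. NLTK's Porter stemmer) and a real clustering algorithm.

```python
# Naive sketch of the pipeline: stem -> count frequent words ->
# cluster by co-occurrence -> rank documents by cluster hits.
import re
from collections import Counter
from itertools import combinations

def crude_stem(word):
    # Step 0: strip a few common suffixes as a stand-in for real stemming.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def frequent_words(docs, top_n=5):
    # Step 1: count stemmed words across all documents.
    counts = Counter(
        crude_stem(w) for doc in docs for w in re.findall(r"[a-z]+", doc.lower())
    )
    return {w for w, _ in counts.most_common(top_n)}

def cooccurrence_clusters(docs, words):
    # Step 2: merge frequent words that appear in the same document
    # (union-find over co-occurring pairs -- a very blunt "clustering").
    parent = {w: w for w in words}
    def find(w):
        while parent[w] != w:
            w = parent[w]
        return w
    for doc in docs:
        present = {crude_stem(w) for w in re.findall(r"[a-z]+", doc.lower())} & words
        for a, b in combinations(sorted(present), 2):
            parent[find(a)] = find(b)
    clusters = {}
    for w in words:
        clusters.setdefault(find(w), set()).add(w)
    return list(clusters.values())

def rank_documents(docs, clusters):
    # Step 4: order documents by how many cluster words they contain.
    def score(doc):
        stems = {crude_stem(w) for w in re.findall(r"[a-z]+", doc.lower())}
        return sum(len(stems & cluster) for cluster in clusters)
    return sorted(range(len(docs)), key=lambda i: score(docs[i]), reverse=True)
```

Step 3 (mapping clusters to "special" keywords) is deliberately omitted here, since it depends on a domain vocabulary the thread doesn't specify.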