Dear Group,
I am trying to search the following pattern in Python.
I have the following strings:
(i)"In the ocean"
(ii)"On the ocean"
(iii) "By the ocean"
(iv) "In this group"
(v) "In this group"
(vi) "By the new group"
I want to extract from the first word to the last word,
whe
On Saturday, June 15, 2013 7:58:44 PM UTC+5:30, Mark Lawrence wrote:
> On 15/06/2013 14:45, Denis McMahon wrote:
> > On Sat, 15 Jun 2013 13:41:21 +, Denis McMahon wrote:
> >> first_and_last = [sentence.split()[i] for i in (0, -1)]
> >> middle = sentence.split()[1:-2]
On Saturday, June 15, 2013 8:34:59 PM UTC+5:30, Mark Lawrence wrote:
> On 15/06/2013 15:31, subhabangal...@gmail.com wrote:
> > Dear Group,
> > I know this solution but I want to have Regular Expression option. Just
> > learning.
> > Regards,
> > Subhabrata.
On Saturday, June 15, 2013 3:12:55 PM UTC+5:30, subhaba...@gmail.com wrote:
> Dear Group,
> I am trying to search the following pattern in Python.
> I have the following strings:
> (i)"In the ocean"
> (ii)"On the ocean"
> (iii) "By the ocean"
> (iv) "In this grou
On Sunday, June 16, 2013 12:17:18 AM UTC+5:30, ru...@yahoo.com wrote:
> On Saturday, June 15, 2013 11:54:28 AM UTC-6, subhaba...@gmail.com wrote:
> > Thank you for the answer. But I want to learn bit of interesting
> > regular expression forms where may I?
> > No Mark, thank you for
Dear Group,
I was looking for a good tutorial for a "HTML Parser". My intention was to
extract tables from web pages or information from tables in web pages.
I tried to make a search, I got HTMLParser, BeautifulSoup, etc. HTMLParser
works fine for me, but I am looking for a good tutorial to le
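For table extraction with the standard-library parser the poster mentions, a minimal sketch; the Python 3 spelling is shown (`html.parser`), where Python 2 called the module `HTMLParser`. The `CellCollector` class name is illustrative:

```python
# Python 3 shown; in Python 2 the module was called HTMLParser.
from html.parser import HTMLParser

class CellCollector(HTMLParser):
    """Collect the text of every <td> cell in a page."""
    def __init__(self):
        super().__init__()
        self.in_td = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == 'td':
            self.in_td = True

    def handle_endtag(self, tag):
        if tag == 'td':
            self.in_td = False

    def handle_data(self, data):
        if self.in_td:
            self.cells.append(data.strip())

parser = CellCollector()
parser.feed("<table><tr><td>Mumbai</td><td>Delhi</td></tr></table>")
print(parser.cells)  # ['Mumbai', 'Delhi']
```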
Dear Group,
I am trying to use Gensim for Topic Modeling with LDA.
I have trained LDA but now I want to test it with new documents.
Should I use
doc_lda = lda[doc_bow]
or is it something else?
If any one of the esteemed members of the group can kindly suggest?
Thanking in Advance,
Regards,
Dear Group,
I am Sri Subhabrata Banerjee writing from India. I am running a small program
which uses around 12 .txt files of 1 to 2 KB each. I am using MS Windows XP
Service Pack 3 and Python 2.6, with IDLE as the GUI. The text is plain ASCII
text. The RAM of the machine is around 2 GB. To run the progr
On Wednesday, June 27, 2012 11:03:44 PM UTC+5:30, (unknown) wrote:
> Dear Group,
> I am Sri Subhabrata Banerjee writing from India. I am running a small program
> which exploits around 12 1 to 2 KB .txt files. I am using MS Windows XP
> Service Pack 3 and Python 2.6 where IDLE is GUI. The text is
Dear Group,
I am Sri Subhabrata Banerjee, writing from Gurgaon, India, to discuss some
coding issues. If anyone in this learned room can shed some light, I would be
grateful.
I have to code a bunch of documents which are combined together.
Like,
1)A Mumbai-bound aircraft with
On Thursday, July 5, 2012 4:51:46 AM UTC+5:30, (unknown) wrote:
> Dear Group,
>
> I am Sri Subhabrata Banerjee trying to write from Gurgaon, India to discuss
> some coding issues. If any one of this learned room can shower some light I
> would be helpful enough.
>
> I got to code a bunch of do
Dear Peter,
That is a nice one. I am thinking whether I can write a "for line in f" style of
code, which is easy, but then how do I find the slices? By the way, do you know
whether I may convert the index position in the file to the list position,
provided I am writing the list for the same file we are read
On Thursday, July 5, 2012 4:51:46 AM UTC+5:30, (unknown) wrote:
> Dear Group,
>
> I am Sri Subhabrata Banerjee trying to write from Gurgaon, India to discuss
> some coding issues. If any one of this learned room can shower some light I
> would be helpful enough.
>
> I got to code a bunch of do
On Sunday, July 8, 2012 2:21:14 AM UTC+5:30, Dennis Lee Bieber wrote:
> On Sat, 7 Jul 2012 12:54:16 -0700 (PDT), subhabangal...@gmail.com
> declaimed the following in gmane.comp.python.general:
> > But I am bit intrigued with another question,
> > suppose I say:
> > file_open=open("/pytho
On Sunday, July 8, 2012 1:33:25 PM UTC+5:30, Chris Angelico wrote:
> On Sun, Jul 8, 2012 at 3:42 PM, wrote:
> > Thanks for pointing out the mistakes. Your points are right. So I am trying
> > to revise it,
> >
> > file_open=open("/python32/doc1.txt","r")
> > for line in file_open:
> > l
On Tuesday, July 10, 2012 11:16:08 PM UTC+5:30, Subhabrata wrote:
> Dear Group,
>
> I kept a good number of files in a folder. Now I want to read all of
> them. They are in different formats and different encoding. Using
> listdir/glob.glob I am able to find the list but how to open/read or
> proc
On Sunday, July 8, 2012 10:47:00 PM UTC+5:30, Chris Angelico wrote:
> On Mon, Jul 9, 2012 at 3:05 AM, wrote:
> > On Sunday, July 8, 2012 1:33:25 PM UTC+5:30, Chris Angelico wrote:
> >> On Sun, Jul 8, 2012 at 3:42 PM,
> wrote:
> >> > file_open
Dear Group,
I was looking for the following solutions.
(i) a Python Hidden Markov Model(HMM) library.
(ii)a Python Conditional Random Field(CRF) library.
(iii) I am using Python 3.2.1 on Windows 7(64 bit) and also like to get a NLTK
version.
(iv) I may use unicode character as input.
If any one
On Tuesday, July 24, 2012 9:09:02 PM UTC+5:30, (unknown) wrote:
> Dear Group,
> I was looking for the following solutions.
> (i) a Python Hidden Markov Model(HMM) library.
> (ii)a Python Conditional Random Field(CRF) library.
> (iii) I am using Python 3.2.1 on Windows 7(64 bit) and also like
On Thursday, July 26, 2012 1:28:50 AM UTC+5:30, Terry Reedy wrote:
> On 7/25/2012 11:58 AM, subhabangal...@gmail.com wrote:
> > As most of the libraries give so many bindings and conditions best way
> is to make it. Not very tough, I made earlier, but as some files were
> lost so was thinking inst
Dear Group,
I was trying to convert a list to a set with the following code:
set1=set(list1)
The code was running fine, but all of a sudden it started to give the following
error:
set1=set(list1)
TypeError: unhashable type: 'list'
Please let me know how I may resolve it.
And sometimes some good
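The usual cause of this error is that `list1` has started to contain another list (lists are unhashable); a small sketch reproducing and fixing it:

```python
list1 = [1, 2, [3, 4]]          # a nested list makes set() fail
try:
    set1 = set(list1)
except TypeError as e:
    print(e)                    # unhashable type: 'list'

# Converting inner lists to tuples (which are hashable) fixes it:
set1 = set(tuple(x) if isinstance(x, list) else x for x in list1)
print(set1)                     # {1, 2, (3, 4)}
```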
On Sunday, July 29, 2012 2:57:18 PM UTC+5:30, (unknown) wrote:
> Dear Group,
> I was trying to convert the list to a set, with the following code:
> set1=set(list1)
Dear Peter,
Thanks for the answer. But my list does not contain another list; that is the
issue. Intriguing
On Sunday, July 29, 2012 7:53:59 PM UTC+5:30, Roy Smith wrote:
> In article <81818a9c-60d3-48da-9345-0c0dfd5b2...@googlegroups.com>,
> subhabangal...@gmail.com wrote:
> > set1=set(list1)
> > the code was running fine, but all on a sudden started to give the
> > following
On Sunday, July 29, 2012 2:57:18 PM UTC+5:30, (unknown) wrote:
> Dear Group,
> I was trying to convert the list to a set, with the following code:
> set1=set(list1)
> the code was running fine, but all on a sudden started to give the following
> error,
> set1=se
On Friday, August 3, 2012 5:19:46 PM UTC+5:30, Subhabrata wrote:
> Dear Group,
> I am trying to call the values of one function in the another function in the
> following way:
> def func1():
>     num1=10
>     num2=20
>     print "The Second Number is:",num2
On Friday, August 3, 2012 10:50:52 PM UTC+5:30, Dennis Lee Bieber wrote:
> On Fri, 3 Aug 2012 04:49:46 -0700 (PDT), Subhabrata
> declaimed the following in gmane.comp.python.general:
> > Dear Group,
> > I am trying to call the values of one function in the another funct
Dear Group,
I am trying to use NLTK and its statistical classifiers. The system is working
fine but I am trying to use my own data, instead of things like,
from nltk.corpus import brown
from nltk.corpus import names
If any one can kindly guide me up.
Thanks in Advance,
Regards,
Subhabrata.
--
On Wednesday, September 19, 2012 12:40:00 AM UTC+5:30, Mark Lawrence wrote:
> On 18/09/2012 19:35, subhabangal...@gmail.com wrote:
> > Dear Group,
> > If anyone of the learned members can kindly help with a HMM/CRF based
> > chunker on NLTK.
> > Regards,
> > Subhabrata.
Dear Group,
I am using Python on Windows 7 SP-1 (64 bit).
I have two versions of Python installed 2.7 and 3.2.
I want to install networkx in both.
How may I do that?
If any one may kindly let me know.
Regards,
Subhabrata.
--
http://mail.python.org/mailman/listinfo/python-list
Dear Group,
Suppose I have a string as,
"Project Gutenberg has 36000 free ebooks for Kindle Android iPad iPhone."
I am terming it as,
str1= "Project Gutenberg has 36000 free ebooks for Kindle Android iPad iPhone."
I am working now with a split function,
str_words=str1.split()
so, I would ge
On Monday, October 8, 2012 1:00:52 AM UTC+5:30, subhaba...@gmail.com wrote:
> Dear Group,
> Suppose I have a string as,
> "Project Gutenberg has 36000 free ebooks for Kindle Android iPad iPhone."
> I am terming it as,
> str1= "Project Gutenberg has 36000 free eb
Dear Group,
To improve my code writing I am trying to read good code. Now, I have received
a code, as given below (apologies for slight indentation errors); the code is
running well.
Now, to comprehend the code, I am trying to understand it completely.
class Calculate:
    def __init__(self):
On Tuesday, November 13, 2012 4:12:52 PM UTC+5:30, Peter Otten wrote:
> subhabangal...@gmail.com wrote:
> > Dear Group,
> > To improve my code writing I am trying to read good codes. Now, I have
> > received a code,as given below,(apology for slight indentation errors) the
> > cod
Dear Group,
I am looking for some Python based Natural Language Tools.
(i)Parsers (either syntactic or semantic). NLTK has but there I have to input
the grammar. I am looking for straight built in library like nltk tagging
module.
(ii) I am looking for some ner extraction tools. NLTK has I am
Dear Group,
Python has one textmining library, but I am failing to install it in Windows.
If any one can kindly help.
Regards,
Subhabrata.
On Saturday, December 1, 2012 5:13:17 AM UTC+5:30, Dave Angel wrote:
> On 11/30/2012 02:48 PM, subhabangal...@gmail.com wrote:
> > Dear Group,
> > Python has one textmining library, but I am failing to install it in
> > Windows.
> > If any one can kindly help.
> > Regards,
> > Subhabrata.
Dear Group,
I am using NLTK and I used the following command,
chunk=nltk.ne_chunk(tag)
print "The Chunk of the Line Is:",chunk
The Chunk of the Line Is: (S
''/''
It/PRP
is/VBZ
virtually/RB
a/DT
homecoming/NN
,/,
''/''
said/VBD
(PERSON Gen/NNP Singh/NNP)
on/IN
arrival/NN)
On Sunday, December 2, 2012 5:39:32 PM UTC+5:30, subhaba...@gmail.com wrote:
> Dear Group,
> I am using NLTK and I used the following command,
> chunk=nltk.ne_chunk(tag)
> print "The Chunk of the Line Is:",chunk
> The Chunk of the Line Is: (S
> ''/''
Dear Group,
I have a list of the following pattern,
[("''", "''"), ('Eastern', 'NNP'), ('Army', 'NNP'), ('Commander', 'NNP'),
('Lt', 'NNP'), ('Gen', 'NNP'), ('Dalbir', 'NNP'), ('Singh', 'NNP'), ('Suhag',
'NNP'), ('briefed', 'VBD'), ('the', 'DT'), ('Army', 'NNP'), ('chief', 'NN'),
('on', 'IN'),
On Sunday, December 2, 2012 9:29:22 PM UTC+5:30, Thomas Bach wrote:
> On Sun, Dec 02, 2012 at 04:16:01PM +0100, Lutz Horn wrote:
> > len([x for x in l if x[1] == 'VBD'])
> Another way is
> sum(1 for x in l if x[1] == 'VBD')
> which saves the list creati
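Both suggestions from the thread, run on a short tagged list of the same shape as the poster's:

```python
l = [("''", "''"), ('Eastern', 'NNP'), ('briefed', 'VBD'),
     ('the', 'DT'), ('said', 'VBD')]

count_list = len([x for x in l if x[1] == 'VBD'])   # builds a throwaway list
count_gen = sum(1 for x in l if x[1] == 'VBD')      # generator, no list built
print(count_list, count_gen)  # 2 2
```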
Dear Group,
I have a list of tuples,
tup_list=[(1,2), (3,4)]
Now if I want to convert it to a simple flat list,
list=[1,2,3,4]
how may I do that?
If anyone can kindly suggest? Googling didn't help much.
Regards,
Subhabrata.
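Two common ways to flatten such a list of tuples, sketched here:

```python
import itertools

tup_list = [(1, 2), (3, 4)]

# Option 1: nested comprehension
flat = [x for tup in tup_list for x in tup]

# Option 2: itertools.chain, lazy and general
flat2 = list(itertools.chain.from_iterable(tup_list))

print(flat, flat2)  # [1, 2, 3, 4] [1, 2, 3, 4]
```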
On Tuesday, December 4, 2012 1:28:17 AM UTC+5:30, subhaba...@gmail.com wrote:
> Dear Group,
> I have a list of tuples,
> tup_list=[(1,2), (3,4)]
> Now if I want to convert it to a simple list,
> list=[1,2,3,4]
> how may I do that?
> If any one can kindl
Dear Group,
I am trying to work out a data visualization module.
Here,
I am taking raw corpus,and processing it
linguistically(tokenization,tagging,NED recognition)
and then trying to link the NED's with Latent Semantic Analysis or Relationship
Mining or Network graph theory or cluster analysis
Dear Group,
I am trying to use the cluster module as,
>>> from cluster import *
>>> data = [12,34,23,32,46,96,13]
>>> cl = HierarchicalClustering(data, lambda x,y: abs(x-y))
>>> cl.getlevel(10)
[[96], [46], [12, 13, 23, 34, 32]]
>>> cl.getlevel(5)
[[96], [46], [12, 13], [23], [34, 32]]
but now I
On Wednesday, December 5, 2012 2:33:56 AM UTC+5:30, Miki Tebeka wrote:
> On Tuesday, December 4, 2012 11:04:15 AM UTC-8, subhaba...@gmail.com wrote:
> > >>> cl = HierarchicalClustering(data, lambda x,y: abs(x-y))
> > but now I want to visualize it if any one suggest how may I use
> > visuali
Dear Group,
I am looking for some example of implementing Cosine similarity in python. I
searched for hours but could not help much. NLTK seems to have a module but did
not find examples.
If anyone of the learned members may kindly help out.
Regards,
Subhabrata.
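A plain-Python sketch of cosine similarity, independent of NLTK:

```python
import math

def cosine_similarity(v1, v2):
    """Cosine of the angle between two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norm1 = math.sqrt(sum(a * a for a in v1))
    norm2 = math.sqrt(sum(b * b for b in v2))
    return dot / (norm1 * norm2)

print(cosine_similarity([1, 0, 1], [1, 0, 1]))  # 1.0 (identical direction)
print(cosine_similarity([1, 0], [0, 1]))        # 0.0 (orthogonal)
```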
On Friday, December 7, 2012 9:47:46 AM UTC+5:30, Miki Tebeka wrote:
> On Thursday, December 6, 2012 2:15:53 PM UTC-8, subhaba...@gmail.com wrote:
> > I am looking for some example of implementing Cosine similarity in python.
> > I searched for hours but could not help much. NLTK seems to hav
Dear Group,
I am looking for a ready-made tool to resolve anaphora, preferably a
Python-based one. I checked NLTK. It has a DRT parser, but I do not like that;
in other parsers you have to supply the grammar, while I am looking for
something completely built in.
If anyone can kindly suggest.
Regards, S
Dear Group,
I am trying to enumerate few interesting errors on pylab/matplotlib.
If any of the learned members can kindly let me know how should I address them.
I am trying to enumerate them as follows.
i) >>> import numpy
>>> import pylab
>>> t = numpy.arange(0.0, 1.0+0.01, 0.01)
>>> s = numpy
On Tuesday, December 11, 2012 2:10:07 AM UTC+5:30, subhaba...@gmail.com wrote:
> Dear Group,
> I am trying to enumerate few interesting errors on pylab/matplotlib.
> If any of the learned members can kindly let me know how should I address
> them.
> I am trying to enumerate t
Dear Group,
In networkx module we generally try to draw the graph as,
>>> import networkx as nx
>>> G=nx.Graph()
>>> G.add_edge(1, 2, weight=4.7 )
>>> G.add_edge(1, 3, weight=4.5 )
.
Now, if I want to retrieve the information of traversal from 1 to 3, I can
give,
G.edges()
but I am looking
Dear Group,
If I take a list like the following:
fruits = ['banana', 'apple', 'mango']
for fruit in fruits:
    print 'Current fruit :', fruit
Now,
if I want variables like var1,var2,var3 be assigned to them, we may take,
var1=banana,
var2=apple,
var3=mango
but can we do something to as
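Creating real variables named var1, var2, var3 at runtime is rarely a good idea; the usual Python answer is a dict keyed by those names. A sketch (the `variables` dict name is illustrative):

```python
fruits = ['banana', 'apple', 'mango']

# Instead of inventing var1, var2, var3, keep a dict keyed by the name you want:
variables = {'var%d' % (i + 1): fruit for i, fruit in enumerate(fruits)}

print(variables['var1'])  # banana
print(variables['var3'])  # mango
```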
Dear Group,
I have a list like,
>>> list1=[1,2,3,4,5,6,7,8,9,10,11,12]
Now, if I want to take a slice of it, I can.
It may be done in,
>>> list2=list1[:3]
>>> print list2
[1, 2, 3]
If I want to iterate the list, I may do as,
>>> for i in list1:
    print "Iterated Value Is:",i
It
Dear Group,
I have two questions, if I take a subseries of the matrix as in eigenvalue here,
provided I have one graph of the full form in G, how may I show it, as if I do
the nx.draw(G) it takes only the original graph.
>>> import numpy
>>> import networkx as nx
>>> import matplotlib.pyplot as
On Monday, January 14, 2013 6:05:49 AM UTC+5:30, Steven D'Aprano wrote:
> On Sun, 13 Jan 2013 12:05:54 -0800, subhabangalore wrote:
> > Dear Group,
> > I have two questions, if I take a subseries of the matrix as in
> > eige
On Friday, January 4, 2013 11:18:24 AM UTC+5:30, Steven D'Aprano wrote:
> On Thu, 03 Jan 2013 12:04:03 -0800, subhabangalore wrote:
> > Dear Group,
> > If I take a list like the following:
> > fruits = ['
Dear Group,
As I know Python Foundation organizes some conferences all through the year.
Most probably they are known as Pycon. But I have some different question. The
question is, is it possible to attend it by Video Conferencing? Or if I request
for the same will it be granted?
Regards,
Subha
Dear Group,
I am looking for a Python implementation of Maximum Likelihood Estimation. If
any one can kindly suggest. With a google search it seems
scipy,numpy,statsmodels have modules, but as I am not finding proper example
workouts I am failing to use them.
I am using Python 2.7 on Windows
On Friday, February 1, 2013 11:07:48 PM UTC+5:30, 8 Dihedral wrote:
> subhaba...@gmail.com wrote on Saturday, February 2, 2013 at 1:17:04 AM UTC+8:
> > Dear Group,
> > I am looking for a Python implementation of Maximum Likelihood Estimation.
> > If any one can kindly suggest. With a goog
On Friday, February 1, 2013 10:47:04 PM UTC+5:30, subhaba...@gmail.com wrote:
> Dear Group,
> I am looking for a Python implementation of Maximum Likelihood Estimation. If
> any one can kindly suggest. With a google search it seems
> scipy,numpy,statsmodels have modules, but as I am not
I have a string
"Hello my name is Richard"
I have a list of words as,
['Hello/Hi','my','name','is','Richard/P']
I want to identify the match of 'Hello' and 'Richard'
in list, and replace them with 'Hello/Hi" and 'Richard/P'
respectively.
The result should look like,
"Hello/Hi my name is Richard
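One way to do this replacement is to build a dict mapping each bare word to its annotated form; a sketch:

```python
sentence = "Hello my name is Richard"
words = ['Hello/Hi', 'my', 'name', 'is', 'Richard/P']

# Map each bare word to its slash-annotated form, e.g. 'Hello' -> 'Hello/Hi'.
lookup = {w.split('/')[0]: w for w in words}

# Replace a word when a mapping exists, otherwise keep it unchanged.
result = ' '.join(lookup.get(w, w) for w in sentence.split())
print(result)  # Hello/Hi my name is Richard/P
```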
On Saturday, November 12, 2016 at 7:34:31 AM UTC+5:30, Steve D'Aprano wrote:
> On Sat, 12 Nov 2016 09:29 am wrote:
>
> > I have a string
> > "Hello my name is Richard"
> >
> > I have a list of words as,
> > ['Hello/Hi','my','name','is','Richard/P']
> >
> > I want to identify the match of 'Hello'
I have a Python script where I am trying to read from a list of files in a
folder and process something.
As I collect the output, I am presently appending it to a single list.
But I would like to write the result of each individual file to an individual
list or file.
The script is as follows:
I am getting the error:
UnicodeDecodeError: 'utf8' codec can't decode byte 0x96 in position 15: invalid
start byte
as I try to read some files through TaggedCorpusReader. TaggedCorpusReader is a
module
of NLTK.
My files are saved in ANSI format in MS-Windows default.
I am using Python2.7 on MS-
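Byte 0x96 is an en dash in the Windows "ANSI" encoding (cp1252), which is why a UTF-8 decode fails; a small sketch of the mismatch and the fix (tell the reader the file's real encoding):

```python
data = b'Python \x96 a note'      # 0x96 is an en dash in Windows cp1252

try:
    data.decode('utf-8')
except UnicodeDecodeError as e:
    print(e)                      # "invalid start byte", as in the question

text = data.decode('cp1252')      # decode with the encoding the file was saved in
print(text)                       # Python - a note, with a real en dash
```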
On Monday, December 26, 2016 at 3:37:37 AM UTC+5:30, Gonzalo V wrote:
> Try utf-8-sig
> El 25 dic. 2016 2:57 AM, "Grady Martin" <> escribió:
>
> > On 2016年12月22日 22時38分, wrote:
> >
> >> I am getting the error:
> >> UnicodeDecodeError: 'utf8' codec can't decode byte 0x96 in position 15:
> >> inval
On Friday, December 30, 2016 at 3:35:56 AM UTC+5:30, subhaba...@gmail.com wrote:
> On Monday, December 26, 2016 at 3:37:37 AM UTC+5:30, Gonzalo V wrote:
> > Try utf-8-sig
> > El 25 dic. 2016 2:57 AM, "Grady Martin" <> escribió:
> >
> > > On 2016年12月22日 22時38分, wrote:
> > >
> > >> I am getting the
On Friday, December 30, 2016 at 7:16:25 AM UTC+5:30, Steve D'Aprano wrote:
> On Sun, 25 Dec 2016 04:50 pm, Grady Martin wrote:
>
> > On 2016年12月22日 22時38分, wrote:
> >>I am getting the error:
> >>UnicodeDecodeError: 'utf8' codec can't decode byte 0x96 in position 15:
> >>invalid start byte
> >
> >
I have a string like
"Trump is $ the president of USA % Obama was $ the president of USA % Putin is
$ the premier of Russia%"
Here, I want to extract the portions from $...%, which would be
"the president of USA",
"the president of USA",
"the premier of Russia"
and would work some post extr
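A non-greedy regular expression does this extraction directly; a sketch:

```python
import re

text = ("Trump is $ the president of USA % Obama was $ the president of USA % "
        "Putin is $ the premier of Russia%")

# Non-greedy match of everything between a '$' and the next '%'.
parts = [p.strip() for p in re.findall(r'\$(.*?)%', text)]
print(parts)
# ['the president of USA', 'the president of USA', 'the premier of Russia']
```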
I wrote a small piece of following code
import nltk
from nltk.corpus.reader import TaggedCorpusReader
from nltk.tag import CRFTagger
def NE_TAGGER():
reader = TaggedCorpusReader('/python27/', r'.*\.pos')
f1=reader.fileids()
print "The Files of Corpus are:",f1
sents=reader.tagged_s
I have a list of lists (177 lists).
I am trying to write them as file.
I used the following code to write it in a .csv file.
import csv
def word2vec_preprocessing():
a1=open("/python27/EngText1.txt","r")
list1=[]
for line in a1:
line1=line.lower().replace(".","").split()
On Tuesday, May 22, 2018 at 3:55:58 PM UTC+5:30, Peter Otten wrote:
>
>
> > lst2=lst1[:4]
> > with open("my_csv.csv","wb") as f:
> > writer = csv.writer(f)
> > writer.writerows(lst2)
> >
> > Here it is writing only the first four lists.
>
> Hint: look at the first line
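Peter's hint points at the first line: `lst2=lst1[:4]` keeps only the first four lists, so only four rows are written. Passing the full list writes every row. A Python 3 sketch (the file name and sample data are illustrative):

```python
import csv

lst1 = [['a', 'b'], ['c', 'd'], ['e', 'f'], ['g', 'h'], ['i', 'j']]

# lst1[:4] would keep only the first four rows; write lst1 itself to get all.
# Python 3 shown: open in text mode with newline='' for the csv module.
with open('my_csv.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerows(lst1)

with open('my_csv.csv', newline='') as f:
    rows = list(csv.reader(f))
print(rows)  # all five rows come back
```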
I have a text as,
"Hawaii volcano generates toxic gas plume called laze PAHOA: The eruption of
Kilauea volcano in Hawaii sparked new safety warnings about toxic gas on the
Big Island's southern coastline after lava began flowing into the ocean and
setting off a chemical reaction. Lava haze is
On Friday, May 25, 2018 at 3:59:57 AM UTC+5:30, Cameron Simpson wrote:
> First up, thank you for a well described problem! Remarks inline below.
>
> On 24May2018 03:13, wrote:
> >I have a text as,
> >
> >"Hawaii volcano generates toxic gas plume called laze PAHOA: The eruption of
> >Kilauea volca
On Saturday, May 26, 2018 at 3:54:37 AM UTC+5:30, Cameron Simpson wrote:
> On 25May2018 04:23, Subhabrata Banerjee wrote:
> >On Friday, May 25, 2018 at 3:59:57 AM UTC+5:30, Cameron Simpson wrote:
> >> On 24May2018 03:13, wrote:
> >> >I have a text as,
> >> >
> >> >"Hawaii volcano generates toxic g
On Sunday, May 27, 2018 at 2:41:43 AM UTC+5:30, Cameron Simpson wrote:
> On 26May2018 04:02, Subhabrata Banerjee wrote:
> >On Saturday, May 26, 2018 at 3:54:37 AM UTC+5:30, Cameron Simpson wrote:
> >> It sounds like you want a more general purpose parser, and that depends
> >> upon
> >> your purp
I have the following sentence,
"Donald Trump is the president of United States of America".
I am trying to extract the index 'of', not only for single but also
for its multi-occurance (if they occur), from the list of words of the
string, made by simply splitting the sentence.
index1=[index for
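Using enumerate over the split words catches every occurrence, not just the first; a sketch:

```python
sentence = "Donald Trump is the president of United States of America"
words = sentence.split()

# enumerate yields (index, word) pairs, so every 'of' is found.
index1 = [i for i, w in enumerate(words) if w == 'of']
print(index1)  # [5, 8]
```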
On Wednesday, June 13, 2018 at 6:30:45 AM UTC+5:30, Cameron Simpson wrote:
> On 11Jun2018 13:48, Subhabrata Banerjee wrote:
> >I have the following sentence,
> >
> >"Donald Trump is the president of United States of America".
> >
> >I am trying to extract the index 'of', not only for single but als
Dear Group,
I have a list of tuples, as follows,
list1=[u"('koteeswaram/BHPERSN engaged/NA himself/NA in/NA various/NA
philanthropic/NA activities/NA ','class1')", u"('koteeswaram/BHPERSN is/NA
a/NA very/NA nice/NA person/NA ','class1')", u"('koteeswaram/BHPERSN came/NA
to/NA mumbai/LOC but/
On Monday, April 25, 2016 at 10:07:13 PM UTC+5:30, Steven D'Aprano wrote:
> On Tue, 26 Apr 2016 12:56 am, wrote:
> > Dear Group,
> > I have a list of tuples, as follows,
> > list1=[u"('koteeswaram/BHPERSN engaged/NA himself/NA in/NA various/NA
> [... 17 more lines of data ...]
> Hi
On Monday, April 25, 2016 at 10:07:13 PM UTC+5:30, Steven D'Aprano wrote:
> > Dear Group,
> > I have a list of tuples, as follows,
> > list1=[u"('koteeswaram/BHPERSN engaged/NA himself/NA in/NA various/NA
> [... 17 more lines of data ...]
> Hi Subhabrata, and thanks for the quest
Hi
I am trying to use the following set of tuples in list of lists.
I am using a Python based library named, NLTK.
>>> import nltk
>>> from nltk.corpus import brown as bn
>>> bt=bn.tagged_sents()
>>> bt_5=bt[:5]
>>> print bt
[[(u'The', u'AT'), (u'Fulton', u'NP-TL'), (u'County', u'NN-TL'), (u'G
I was trying to implement the code,
import nltk
import nltk.tag, nltk.chunk, itertools
def ieertree2conlltags(tree, tag=nltk.tag.pos_tag):
words, ents = zip(*tree.pos())
iobs = []
prev = None
for ent in ents:
if ent == tree.node:
iobs.append('O')
pr
On Saturday, February 27, 2016 at 9:43:56 PM UTC+5:30, Rustom Mody wrote:
> On Saturday, February 27, 2016 at 2:47:53 PM UTC+5:30, subhaba...@gmail.com
> wrote:
> > I was trying to implement the code,
> >
> > import nltk
> > import nltk.tag, nltk.chunk, itertools
> > def ieertree2conlltags(tree,
I have few sentences, like,
the film was nice.
leonardo is great.
it was academy award.
Now I want them to be tagged with some standards which may look like,
the DT film NN was AV nice ADJ
leonardo NN is AV great ADJ
it PRP was AV academy NN award NN
I could do it but my goal is to see it a
Dear Group,
I am trying to write a code for pulling data from MySQL at the backend and
annotating words and trying to put the results as separated sentences with each
line. The code is generally running fine but I am feeling it may be better in
the end of giving out sentences, and for small dat
On Wednesday, March 9, 2016 at 9:49:17 AM UTC+5:30, subhaba...@gmail.com wrote:
> Dear Group,
>
> I am trying to write a code for pulling data from MySQL at the backend and
> annotating words and trying to put the results as separated sentences with
> each line. The code is generally running fin
On Friday, March 11, 2016 at 12:22:31 AM UTC+5:30, Matt Wheeler wrote:
> On 10 March 2016 at 18:12, wrote:
> > Matt, thank you for if...else suggestion, the data of NewTotalTag.txt
> > is like a simple list of words with unconventional tags, like,
> >
> > w1 tag1
> > w2 tag2
> > w3 tag3
> > ...
>
Dear Group,
I am trying to build a search engine in Python.
To do this, I have read tutorials and working methodologies from web and books
like Stanford IR book [ http://www-nlp.stanford.edu/IR-book/]. I know how to
design a crawler, I know PostgresSql, I am fluent with PageRank, TF-IDF, Zipf
On Sunday, February 22, 2015 at 10:12:39 AM UTC+5:30, Steven D'Aprano wrote:
> wrote:
> > Dear Group,
> > I am trying to build a search engine in Python.
> How to design a search engine in Python?
> First, design a search engine.
> Then, write Python code to implement that search e
On Sunday, February 22, 2015 at 11:08:47 AM UTC+5:30, Denis McMahon wrote:
> On Sat, 21 Feb 2015 21:02:34 -0800, subhabangalore wrote:
>
> > Thank you for your suggestion. But I was looking for a small tutorial of
> > algorithm of the whole engine. I would try to check i
On Sunday, February 22, 2015 at 2:42:48 PM UTC+5:30, Laura Creighton wrote:
> In a message of Sat, 21 Feb 2015 22:07:30 -0800, write
> >Dear Sir,
> >
> >Thank you for your kind suggestion. Let me traverse one by one.
> >My special feature is generally Semantic Search, but I am trying to build
> >
Dear Room,
I was trying to go through a code given in
http://en.wikipedia.org/wiki/Forward%E2%80%93backward_algorithm[ Forward
Backward is an algorithm of Machine Learning-I am not talking on that
I am just trying to figure out a query on its Python coding.]
I came across the following codes.
On Sunday, May 11, 2014 12:57:34 AM UTC+5:30, subhaba...@gmail.com wrote:
> Dear Room,
> I was trying to go through a code given in
> http://en.wikipedia.org/wiki/Forward%E2%80%93backward_algorithm [ Forward
> Backward is an algorithm of Machine Learning-I am not talking on that
> I am
On Sunday, May 11, 2014 11:50:32 AM UTC+5:30, subhaba...@gmail.com wrote:
> On Sunday, May 11, 2014 12:57:34 AM UTC+5:30, subhaba...@gmail.com wrote:
> > Dear Room,
> > I was trying to go through a code given in
> > http://en.wikipedia.org/wiki/Forward%E2%80%93backwar
Dear Group,
It seems there is a nice language-processing library named TextBlob, like NLTK.
But I am unable to install it on my Windows (MS-Windows 7) machine. I am using
Python 2.7.
If anyone of the esteemed members may kindly suggest a solution.
I tried the note in the following URL
http
Dear Group,
I am trying to work out a solution to the following problem in Python.
The Problem:
Suppose I have three lists.
Each list is having 10 elements in ascending order.
I have to construct one list having 10 elements which are of the lowest value
among these 30 elements present in the th
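Since the three lists are already in ascending order, `heapq.merge` plus `islice` gives the ten smallest without sorting all thirty values; a sketch with illustrative data:

```python
import heapq
from itertools import islice

list1 = [1, 4, 7, 10, 13, 16, 19, 22, 25, 28]
list2 = [2, 5, 8, 11, 14, 17, 20, 23, 26, 29]
list3 = [3, 6, 9, 12, 15, 18, 21, 24, 27, 30]

# heapq.merge lazily merges already-sorted inputs; islice keeps the first 10.
lowest10 = list(islice(heapq.merge(list1, list2, list3), 10))
print(lowest10)  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```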
On Friday, July 10, 2015 at 5:36:48 PM UTC+5:30, Laura Creighton wrote:
> In a message of Fri, 10 Jul 2015 04:46:25 -0700,
> writes:
> >Dear Group,
> >
> >I am trying to make a search engine. I used Whoosh to do it.
> >I want to add documents to it. This is going fine.
> >Now, I want to add doc
Dear Group,
I have a Python code taken from
Wikipedia.("http://en.wikipedia.org/wiki/Forward%E2%80%93backward_algorithm";)
The code is pasted below.
>>> states = ('Healthy', 'Fever')
>>> end_state = 'E'
>>> observations = ('normal', 'cold', 'dizzy')
>>> start_probability = {'Healthy': 0.6, 'Fe
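A compact sketch of the forward pass that code implements. The transition and emission numbers below are illustrative placeholders in the same shape as the article's example, not necessarily the article's exact values:

```python
states = ('Healthy', 'Fever')
observations = ('normal', 'cold', 'dizzy')
start_p = {'Healthy': 0.6, 'Fever': 0.4}
# Illustrative tables -- check the article for the exact figures:
trans_p = {'Healthy': {'Healthy': 0.7, 'Fever': 0.3},
           'Fever':   {'Healthy': 0.4, 'Fever': 0.6}}
emit_p = {'Healthy': {'normal': 0.5, 'cold': 0.4, 'dizzy': 0.1},
          'Fever':   {'normal': 0.1, 'cold': 0.3, 'dizzy': 0.6}}

def forward(obs):
    """Return a list of dicts: f[t][s] = P(obs[0..t], state at t == s)."""
    f = []
    for i, o in enumerate(obs):
        f_curr = {}
        for st in states:
            if i == 0:
                prev_f_sum = start_p[st]
            else:
                # The 'prev_f_sum' the thread asks about: a sum over all
                # states we could have come from at the previous step.
                prev_f_sum = sum(f[-1][k] * trans_p[k][st] for k in states)
            f_curr[st] = emit_p[st][o] * prev_f_sum
        f.append(f_curr)
    return f

f = forward(observations)
total = sum(f[-1][s] for s in states)
print(total)  # probability of the whole observation sequence
```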
On Thursday, June 19, 2014 12:45:49 AM UTC+5:30, Ian wrote:
> > The questions are,
> > i) prev_f_sum = sum(f_prev[k]*a[k][st] for k in states)
> > here f_prev is called,
> > f_prev is assigned to f_curr ["f_prev = f_curr"]
> > f_curr[st] is again being calculated as, ["f_curr[st
On Thursday, June 19, 2014 12:30:12 PM UTC+5:30, Ian wrote:
> On Wed, Jun 18, 2014 at 11:50 PM, wrote:
>
> > Thank you for the reply. But as I checked it again I found,
>
> > f_prev[k] is giving values of f_curr[st] = e[st][x_i] * prev_f_sum
>
> > which is calculated later and again uses prev_