Hello,
I am working on a project that will take out structured content from
wikipedia and put it in our database. Before putting the data into the
database I wrote a script to find out the number of rows every table
would have after the data is in, and I found there is a table which will
approximately have 5 crore entries after data harvesting. Is it
advisable to keep so much data in a single table?
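(As a rough cross-check once the data is actually loaded, the counts can
also be compared against PostgreSQL's planner estimates. This is only a
minimal sketch, not the counting script mentioned above, and it assumes
ANALYZE has been run on the freshly loaded tables:

ANALYZE;
-- approximate row counts per ordinary table, as seen by the planner
SELECT relname, reltuples::bigint AS approx_rows
FROM pg_class
WHERE relkind = 'r'
ORDER BY approx_rows DESC;
)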
--
TABLE STRUCTURE
--
CREATE TABLE gbobjects
(
ssid bigint NOT NULL,
nid character varying NOT NULL,
inid bigint NOT NULL,
uid bigint NOT NULL,
status character varying,
noofchanges integer NOT NULL