So, according to ScribbleWiki's own blog (scribblewikiblog.com), ScribbleWiki is 
shutting down. Their hard drives failed and their backup provider failed them as 
well, so it appears the wiki is not coming back. I've sent emails to 
scribblewiki.com offering to pay for data recovery, but I have received no 
response.

I've got an offer from wikia.com to help us get our data back out of the Google 
cache. What do you all think? We could also divide up the work of pulling content 
out of Google's cache ourselves, but we should probably be quick about it in case 
the cached copies get cleared.
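
For anyone who wants to take a chunk, here is a rough, untested Python sketch for
splitting a page listing into per-volunteer chunks. pages.txt is hypothetical (one
wiki page title per line); we would still have to reconstruct that listing somehow.

# Rough sketch: split a page listing into one chunk per volunteer.
# pages.txt is a hypothetical file with one wiki page title per line.
import sys

def split_listing(path, volunteers):
    with open(path) as f:
        titles = [line.strip() for line in f if line.strip()]
    for i in range(volunteers):
        chunk = titles[i::volunteers]
        with open("chunk_%d.txt" % (i + 1), "w") as out:
            out.write("\n".join(chunk) + "\n")
        print("chunk_%d.txt: %d pages" % (i + 1, len(chunk)))

if __name__ == "__main__":
    # e.g. "python split_pages.py 5" for five volunteers
    split_listing("pages.txt", int(sys.argv[1]) if len(sys.argv) > 1 else 4)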

Jared

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Grubb, Jared
Sent: Thursday, September 25, 2008 12:18 PM
To: Mailing list for lwIP users
Subject: RE: [lwip-users] scribblewiki down

It's not a simple copy-and-paste, because from Google we would only get the 
rendered HTML content, not the original wiki markup... The best thing is still to 
get a DB dump from scribblewiki... Let me see if I can find an email address.
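
Just to illustrate how rough a conversion back from cached HTML would be, here is a
toy, untested sketch: it only knows about headings and internal links, everything
else loses its formatting, and the example page name in it is made up.

import re

# Toy sketch: recover *some* MediaWiki markup from rendered HTML. Only headings
# and internal links are handled; everything else is flattened to plain text,
# which is exactly why a real DB dump would be so much better.
def html_to_wikitext(html):
    text = html
    # <h2>Title</h2> -> == Title ==
    text = re.sub(r"<h2[^>]*>(.*?)</h2>", r"== \1 ==", text, flags=re.S)
    text = re.sub(r"<h3[^>]*>(.*?)</h3>", r"=== \1 ===", text, flags=re.S)
    # <a href="/Page_Name">label</a> -> [[Page Name|label]]
    text = re.sub(
        r'<a href="/([^"]+)"[^>]*>(.*?)</a>',
        lambda m: "[[%s|%s]]" % (m.group(1).replace("_", " "), m.group(2)),
        text, flags=re.S)
    # Drop every other tag, losing tables, code blocks, formatting, ...
    return re.sub(r"<[^>]+>", "", text)

# Made-up example fragment:
print(html_to_wikitext('<h2>Raw API</h2><p>See <a href="/Netconn_API">netconn</a>.</p>'))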

Jared

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Thomas Taranowski
Sent: Thursday, September 25, 2008 11:45 AM
To: Mailing list for lwIP users
Subject: Re: [lwip-users] scribblewiki down

If we had a listing of all the wiki pages, we could use a wget script
to grab them all from the Google cache.  It would then be a straightforward
copy-and-paste into the MediaWiki. Ideally, though, we would get
an lwIP DB dump from the scribblewiki folks, which we could then merge
into our own wiki DB.
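
For example, something like this (untested) Python snippet could turn such a
listing into a urls.txt to hand to "wget -i urls.txt". The host name is only my
guess at where the wiki lived, and I have no idea how happy Google is about being
fetched like this.

import urllib.parse

# Turn a listing of page titles (pages.txt, one per line) into Google-cache
# URLs that wget can then fetch with "wget -i urls.txt".
WIKI_HOST = "lwip.scribblewiki.com"  # placeholder, not verified

with open("pages.txt") as listing, open("urls.txt", "w") as out:
    for line in listing:
        title = line.strip()
        if not title:
            continue
        query = urllib.parse.quote("cache:%s/%s" % (WIKI_HOST, title))
        out.write("http://www.google.com/search?q=%s\n" % query)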

As a side note, does anyone have a cool lwIP logo?  Everything I do
looks like bad programmer art.

On Thu, Sep 25, 2008 at 10:04 AM, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Grubb, Jared wrote:
>>
>> So, what we need now is a DB dump from the old wiki... I'm not sure how to
>> get that.
>> jared
>>
>
> That won't be easy, unless there is some way other than asking the hosts of
> the 'old' wiki (which really is the 'current' one :)
> Anyway, who says the 'old' wiki will ever be online again so that we can grab
> the pages back? I tried the Google cache method, but all the links lead to the
> real site, which makes it pretty hard to even grab the whole wiki as a backup
> of HTML pages... (you can't use a simple crawler and tell it to stay on the
> Google cache server and ignore external links)
>
> Does anyone have a better idea? I'm a little afraid of losing the pages
> built so far!
>
> Simon



--
Thomas Taranowski
Certified NetBurner consultant
baringforge.com


_______________________________________________
lwip-users mailing list
[email protected]
http://lists.nongnu.org/mailman/listinfo/lwip-users
