"Joerg Jaspert" <[EMAIL PROTECTED]> wrote:
[snip]
c.) We can host our own archive for it under control of ftpmaster.
[snip]
So the way to go for us seems to be c.), hosting the archive ourselves
(somewhere below data.debian.org probably).
[snip]
A data.d.o would presumably be running on a Debian
On Tue, May 27, 2008 at 03:54:25PM -0500, Raphael Geissert wrote:
> Daniel Jacobowitz wrote:
> >
> > FYI, the most recent CVS snapshots of GDB can read zlib-compressed
> > debug info. If someone gets around to an objcopy patch to create it,
> > then we can change debhelper to use it...
> >
>
>
Ove Kaaven <[EMAIL PROTECTED]> writes:
> Joerg Jaspert skrev:
>> - Packages in main need to be installable and not cause their (indirect)
>> reverse build-depends to FTBFS in the absence of data.debian.org.
>> If the data is necessary for the package to work and there is a small
>> dataset (like 5 to 10 MB) that can be reasonably substituted
Joerg Jaspert <[EMAIL PROTECTED]> writes:
> On 11397 March 1977, Goswin von Brederlow wrote:
>
>>> No, we have to distribute the source. DFSG#2, many licenses require it.
>>> Relying on some external system to keep the source for us is also a
>>> no-go.
>> What about the hack to have the source bu
Daniel Jacobowitz wrote:
>
> FYI, the most recent CVS snapshots of GDB can read zlib-compressed
> debug info. If someone gets around to an objcopy patch to create it,
> then we can change debhelper to use it...
>
What's the runtime cost?
Usually things get slower with this kind of change.
Che
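Purely as an illustration of the compression being discussed, not something from the thread: the sketch below shows what creating zlib-compressed debug info could look like, assuming a binutils objcopy new enough to accept --compress-debug-sections (i.e. once the patch Daniel mentions is available); the file names are placeholders. Whether gdb then reads the result without a noticeable slowdown is exactly the runtime-cost question asked above.

#!/usr/bin/env python3
# Sketch only: compress the DWARF debug sections of a binary and report
# the size difference.  Assumes an objcopy that already understands
# --compress-debug-sections; paths are placeholders.
import os
import shutil
import subprocess
import sys

def compress_debug(binary):
    compressed = binary + ".zdebug"            # hypothetical output name
    shutil.copy2(binary, compressed)
    subprocess.run(["objcopy", "--compress-debug-sections", compressed],
                   check=True)
    before = os.path.getsize(binary)
    after = os.path.getsize(compressed)
    print("%s: %d -> %d bytes (%.1f%% smaller)"
          % (binary, before, after, 100.0 * (before - after) / before))

if __name__ == "__main__":
    for path in sys.argv[1:]:
        compress_debug(path)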
* Joerg Jaspert:
> Any comments?
In the long term, I'd like to see a better CDN, so that such
considerations would magically disappear.
> Timeframe for this? I expect it to be ready within 2 weeks.
Oooh. For a production-quality CDN, 2 years seems more reasonable.
I don't know the reason for t
On Tue, May 27, 2008 at 02:05:02PM -0400, Daniel Jacobowitz wrote:
> On Mon, May 26, 2008 at 05:52:13PM +0100, Darren Salt wrote:
> > I demand that Alexander E. Patrakov may or may not have written...
> >
> > > Joerg Jaspert wrote:
> > >> That already has a problem: How to define "large"? One way, which we
> > >> chose for now, is simply "everything > 50MB".
On Mon, May 26, 2008 at 05:52:13PM +0100, Darren Salt wrote:
> I demand that Alexander E. Patrakov may or may not have written...
>
> > Joerg Jaspert wrote:
> >> That already has a problem: How to define "large"? One way, which we
> >> chose for now, is simply "everything > 50MB".
>
> > Random thought: some architecture-dependent -dbg packages are also > 50 MB in
> > size. Shouldn't they get some special treatment, too?
Joerg Jaspert wrote:
That already has a problem: How to define "large"? One way, which we
chose for now, is simply "everything > 50MB".
Random thought: some architecture-dependent -dbg packages are also > 50 MB in
size. Shouldn't they get some special treatment, too?
--
Alexander E. Patrakov
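As an aside: the "everything > 50MB" rule quoted above is just a size cut-off, so a quick scan over a pool directory is enough to see which packages (including the -dbg ones Alexander mentions) would fall under it. The sketch below is illustrative only; the 50 MB threshold comes from the mails, while the directory layout passed on the command line is an assumption.

#!/usr/bin/env python3
# Illustrative only: list .deb files above the 50 MB "large" threshold
# discussed in the thread.  argv[1] is assumed to point at a pool/-style
# directory tree.
import os
import sys

THRESHOLD = 50 * 1024 * 1024   # "everything > 50MB"

def large_debs(pool_dir):
    for root, _dirs, files in os.walk(pool_dir):
        for name in files:
            if name.endswith(".deb"):
                path = os.path.join(root, name)
                size = os.path.getsize(path)
                if size > THRESHOLD:
                    yield size, path

if __name__ == "__main__":
    for size, path in sorted(large_debs(sys.argv[1]), reverse=True):
        print("%8.1f MB  %s" % (size / (1024.0 * 1024.0), path))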
On Mon, May 26, 2008 at 02:02:52AM +0200, Joerg Jaspert wrote:
> On 11397 March 1977, Charles Plessy wrote:
>
> > I have a question about the sources: for big datasets, would it be
> > acceptable that the source package does not contain the data itself but
> > only a script to download it? Sinc
On 11397 March 1977, Charles Plessy wrote:
> I have a question about the sources: for big datasets, would it be
> acceptable that the source package does not contain the data itself but
> only a script to download it? Since the source packages are not to be
> autobuilt and the binary packages only
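To make the question above concrete: the idea is that the source package would ship only a small fetch script instead of the dataset itself. A hypothetical sketch of such a script follows; the URL, output name and checksum are made up, and whether this satisfies the DFSG source requirements is exactly what the rest of the thread debates.

#!/usr/bin/env python3
# Hypothetical sketch of a "download the data instead of shipping it"
# helper, as floated in the question above.  URL, output name and
# checksum are placeholders, not a real upstream location.
import hashlib
import urllib.request

DATA_URL = "https://example.org/bigdata-1.0.tar.gz"
OUTPUT = "bigdata-1.0.tar.gz"
EXPECTED_SHA256 = "0" * 64          # placeholder checksum

def fetch():
    urllib.request.urlretrieve(DATA_URL, OUTPUT)
    with open(OUTPUT, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != EXPECTED_SHA256:
        raise SystemExit("checksum mismatch: got " + digest)
    print("fetched %s, checksum OK" % OUTPUT)

if __name__ == "__main__":
    fetch()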
On 11396 March 1977, Raphael Geissert wrote:
> What about going the 'b.)' way but defining it as an RG (or even RC) with some
> other changes to policy (like requiring big data packages' source packages
> to be arch-indep and not build anything else but the data packages).
No, as already written in
On Sun, May 25, 2008 at 08:18:01PM +0200, Joerg Jaspert wrote:
> Basic Problem: "What to do with large data packages?"
>
> That already has a problem: How to define "large"? One way, which we
> chose for now, is simply "everything > 50MB".
(...)
> - It is its own archive, so it needs full source
Luk Claes <[EMAIL PROTECTED]> writes:
> Raphael Geissert wrote:
>> Hi all,
>>
>> What about going the 'b.)' way but defining it as an RG (or even RC) with some
>> other changes to policy (like requiring big data packages' source packages
>> to be arch-indep and not build anything else but the data packages).
Raphael Geissert wrote:
> Luk Claes wrote:
>> Are you sure that the current sync scripts make that possible and won't
>> sync everything unless explicitly stated differently and will keep
>> working without intervention for the time being? Because otherwise it's
>> like Joerg said not an option IM
Luk Claes wrote:
>
> Are you sure that the current sync scripts make that possible and won't
> sync everything unless explicitly stated differently and will keep
> working without intervention for the time being? Because otherwise it's
> like Joerg s
Raphael Geissert wrote:
> Hi all,
>
> What about going the 'b.)' way but defining it as an RG (or even RC) with some
> other changes to policy (like requiring big data packages' source packages
> to be arch-indep and not build anything else but the data packages).
>
> That way the transition could
Hi all,
What about going the 'b.)' way but defining it as an RG (or even RC) with some
other changes to policy (like requiring big data packages' source packages
to be arch-indep and not build anything else but the data packages).
That way the transiti
Joerg Jaspert skrev:
- Packages in main need to be installable and not cause their (indirect)
reverse build-depends to FTBFS in the absence of data.debian.org.
If the data is necessary for the package to work and there is a small
dataset (like 5 to 10 MB) that can be reasonably substituted
Hi,
On Sun, May 25, 2008 at 08:18:01PM +0200, Joerg Jaspert wrote:
> So assume we go for solution c. (which is what happens unless someone
> has a *very* strong reason not to, which I currently can't imagine) we
will set up a separate archive for this. This will work the same way as
our main archive
Hi,
one important question lately has been "What should we do with large
packages containing data", like game data, huge icon/wallpaper sets,
some science data sets, etc. Naturally, this is a decision ftpmaster has
to take, so here are our thoughts on it to facilitate discussion and see
if we miss