On 2009-04-02, Tim Roberts wrote:
> Grant Edwards wrote:
>
>>On 2009-03-31, Dave Angel wrote:
>>
>>> They were added in NTFS, in the Windows 2000 timeframe, to my
>>> recollection.
>>
>>NTFS was added in NT 3.1 (which predates Win2K by 7-8 years).
>
> Although that's true, you didn't read his sentence. Sparse file
> support was not added to NTFS until Windows 2000.
1) How random is random enough? Some PRNGs are very fast, and some are
very random, but there's always a compromise.
2) How closely related can the files be? It would be easy to generate
1GB of pseudorandom numbers, then just append UUIDs to them (a sketch
follows after this list).
3) Unique filenames can be generated with tmpnam.
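
Something like the following sketch covers all three points; the sizes,
file count, and naming are illustrative, and tempfile.mkstemp stands in
for tmpnam (which is unsafe):

    import os
    import tempfile
    import uuid

    # Reuse one pseudorandom block for bulk data (fast), append a UUID so
    # every file hashes differently, and let mkstemp pick unique names.
    CHUNK = os.urandom(1024 * 1024)        # 1 MB of random bytes, reused
    BLOCKS_PER_FILE = 1024                 # ~1 GB per file

    def make_file(folder):
        fd, path = tempfile.mkstemp(suffix=".bin", dir=folder)
        with os.fdopen(fd, "wb") as f:
            for _ in range(BLOCKS_PER_FILE):
                f.write(CHUNK)             # same bulk data in every file
            f.write(uuid.uuid4().bytes)    # 16 unique bytes => unique MD5
        return path

    for _ in range(3):                     # 1000 in the original requirement
        make_file(".")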
On 2009-03-31, Dave Angel wrote:
>>> Unfortunately, although the program still ran under NT (which
>>> includes Win 2000, XP, ...), the security system insists on
>>> zeroing all the intervening sectors, which takes much time,
>>> obviously.
>> Why would it even _allocate_ intervening sectors?
The FAT file system does not support sparse files. They were added in
NTFS, in the Windows 2000 timeframe, to my recollection.
Don't try to install NTFS on a floppy.
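
For what it's worth, here is a rough, untested ctypes sketch of marking
a file sparse on NTFS before extending it, so the zero-fill is skipped.
FSCTL_SET_SPARSE (0x900C4) is the documented control code; everything
else (file name, flags) is illustrative:

    import ctypes
    from ctypes import wintypes

    FSCTL_SET_SPARSE = 0x900C4    # documented filesystem control code
    GENERIC_WRITE = 0x40000000
    OPEN_ALWAYS = 4               # create the file if it doesn't exist

    kernel32 = ctypes.windll.kernel32
    kernel32.CreateFileW.restype = wintypes.HANDLE

    # Open (or create) the file and ask NTFS to treat it as sparse, so a
    # later seek-and-write past EOF doesn't force zeroing of the gap.
    handle = kernel32.CreateFileW("big.bin", GENERIC_WRITE, 0, None,
                                  OPEN_ALWAYS, 0, None)
    returned = wintypes.DWORD(0)
    kernel32.DeviceIoControl(handle, FSCTL_SET_SPARSE, None, 0, None, 0,
                             ctypes.byref(returned), None)
    kernel32.CloseHandle(handle)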
venutaurus...@gmail.com wrote:
On Mar 31, 1:15 pm, Steven D'Aprano wrote:
> On Mon, 30 Mar 2009 22:44:41 -0700, venutaurus...@gmail.com wrote:
>> Hello all,
>> I've a requirement where I need to create around 1000
>> files under a given folder with each file size of around 1GB.

Is there a way to create a file too big without actually writing
anything in python (just give me the garbage that is already on the
disk)?

No. That would be a monstrous security hole.

Sure... just install 26 hard-drives and partition each one into 40
1-GB unformatted partitions, and then...
venutaurus...@gmail.com wrote:
> That time is reasonable. The randomness should be in such a way that
> the MD5 checksum of no two files should be the same. The main reason
> for having such huge data is for doing stress testing of our product.

For most purposes (other than stress testing the HD and HD
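
Checking that MD5 requirement afterwards is cheap; a sketch, assuming
the generated files end in .bin:

    import glob
    import hashlib

    # Hash every generated file and flag any MD5 collision.
    seen = {}
    for path in glob.glob("*.bin"):
        md5 = hashlib.md5()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(1024 * 1024), b""):
                md5.update(block)
        digest = md5.hexdigest()
        if digest in seen:
            print("duplicate: %s matches %s" % (path, seen[digest]))
        seen[digest] = path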
On 2009-03-31, Steven D'Aprano wrote:
[writing a bunch of files with a bunch of random data in each]
>> Can this be done within a few minutes of time? Is it possible
>> only using threads, or can it be done in any other way? This has
>> to be done in Windows.
>
> Is it possible? Sure. In a couple of minutes?
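
For the record, a threaded writer is easy to sketch, but on one spinning
disk the drive's sequential throughput, not the thread count, sets the
floor. Names, sizes, and thread count below are made up:

    import os
    import threading

    # Each worker writes its share of the files; the bulk block is reused.
    def writer(names):
        block = os.urandom(1024 * 1024)        # 1 MB reused block
        for name in names:
            with open(name, "wb") as f:
                for _ in range(64):            # ~64 MB for a quick trial;
                    f.write(block)             # use 1024 for ~1 GB files

    names = ["file_%04d.bin" % i for i in range(8)]
    threads = [threading.Thread(target=writer, args=(names[i::4],))
               for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()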
Dave Angel wrote:
I wrote a tiny DOS program called resize that simply did a seek out to a
(user specified) point, and wrote zero bytes. One (documented) side
effect of DOS was that writing zero bytes would truncate the file at
that point. But it also worked to extend the file to that point without
writing any data.
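
In Python the closest analogue is probably file.truncate(), which can
extend as well as shrink; what the extended region contains is
platform-dependent, and on NTFS a non-sparse file still gets
zero-filled, as noted above:

    # truncate() plays the role of the zero-byte DOS write: it sets the
    # end-of-file marker without writing the contents.
    ONE_GB = 1024 * 1024 * 1024

    with open("big.bin", "wb") as f:
        f.truncate(ONE_GB)     # extend to 1 GB; zero-filled on NTFS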
andrea wrote:
On 31 Mar, 12:14, "venutaurus...@gmail.com" wrote:
>
> That time is reasonable. The randomness should be in such a way that
> the MD5 checksum of no two files should be the same. The main reason
> for having such huge data is for doing stress testing of our product.

If randomness is not necessary (
venutaurus...@gmail.com wrote:
> On Mar 31, 1:15 pm, Steven D'Aprano wrote:
>> The fastest HDDs can reach about 125 MB per second under
>> ideal circumstances, so that will take at least 8 seconds
>> per 1GB file or 8000 seconds in total.
>
> That time is reasonable.

You did catch the bit about "the *fastest* HDDs"?
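
It's easy to measure instead of guessing; a quick probe of this
particular disk (a sketch, with an arbitrary file name and sizes):

    import os
    import time

    # Time a 100 MB write, synced to disk, and extrapolate to 1 GB.
    block = os.urandom(1024 * 1024)          # 1 MB
    start = time.time()
    with open("probe.bin", "wb") as f:
        for _ in range(100):                 # 100 MB total
            f.write(block)
        f.flush()
        os.fsync(f.fileno())                 # force it to the platters
    elapsed = time.time() - start
    mb_per_s = 100 / elapsed
    print("~%.0f MB/s => ~%.0f s per GB" % (mb_per_s, 1024 / mb_per_s))
    os.remove("probe.bin")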
Hello all,
I've a requirement where I need to create around 1000
files under a given folder with each file size of around 1GB. The
constraints here are each file should have random data and no two
files should be identical, even if I run the same script multiple
times. Moreover the filenames should be unique as well.