Hi,
Couldn't agree more... but I just asked if there was such a tool :)
Bruno
Richard Elling wrote:
> On Dec 9, 2009, at 11:07 AM, Bruno Sousa wrote:
>> Hi,
>>
>> Despite the fact that I agree in general with your comments, in reality
>> it all comes down to money.
>> So in this case, if I could prove that ZFS was able to find X amount of
>> duplicated data, …
On Dec 9, 2009, at 11:07 AM, Bruno Sousa wrote:
Hi,
Despite the fact that I agree in general with your comments, in reality
it all comes down to money.
So in this case, if I could prove that ZFS was able to find X amount of
duplicated data, and since that X amount of data has a price of Y per
GB, IT could be seen as a business enabler instead of a cost…
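The "X GB saved at Y per GB" argument above can be sketched as a quick back-of-the-envelope calculation. All of the numbers below (ratio, volume, price) are placeholders, not figures from the thread:

```shell
# Hypothetical inputs (assumptions, not from the thread):
logical_gb=3000     # X: data as applications see it
dedup_ratio=1.5     # pool-wide ratio as reported by `zpool list`
price_per_gb=0.50   # Y: storage cost per GB

# saved = logical - logical/ratio; savings = saved * price
awk -v l="$logical_gb" -v r="$dedup_ratio" -v p="$price_per_gb" \
  'BEGIN { phys = l / r; saved = l - phys;
           printf "dedup saves %.0f GB = $%.2f\n", saved, saved * p }'
```

With these placeholder numbers the pool stores 2000 GB instead of 3000 GB, i.e. 1000 GB saved, which is exactly the kind of per-pool figure one could put in front of a management board.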
On Wed, 9 Dec 2009, Andrey Kuzmin wrote:
> Um, I thought deduplication had been invented to reduce the backup window :).
Unless the backup system also supports deduplication, in what way does
deduplication reduce the backup window?
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.s
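Bob's point can be made concrete with a small comparison: a dedup-unaware backup still reads and transfers the full logical data, so the pool's dedup ratio does nothing for the window; only a backup that itself deduplicates moves the reduced amount. All numbers below are hypothetical:

```shell
# Hypothetical inputs (assumptions, not from the thread):
logical_gb=3000   # data the backup has to cover
dedup_ratio=1.5   # pool-wide dedup ratio
throughput=300    # backup throughput in GB/h

# A dedup-unaware backup streams logical_gb; a dedup-aware one
# only needs to move the reduced (physical) amount.
awk -v l="$logical_gb" -v r="$dedup_ratio" -v t="$throughput" \
  'BEGIN { printf "dedup-unaware: %.1f h, dedup-aware: %.1f h\n",
           l / t, (l / r) / t }'
```

With these placeholders the window shrinks from 10 hours to roughly 6.7 hours, but only in the dedup-aware case, which is Bob's objection in a nutshell.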
Hi,
The data needs to be stored somewhere, and usually we need to have a
server, a disk array, and disks; more data means more disks, and more
active disks mean more power usage, therefore higher costs and less
green IT :)
So, from my point of view, deduplication is relevant for lowering costs,
but …
On Wed, Dec 9, 2009 at 10:43 PM, Bob Friesenhahn wrote:
> On Wed, 9 Dec 2009, Bruno Sousa wrote:
>>
>> Despite the fact that I agree in general with your comments, in reality
>> it all comes down to money.
>> So in this case, if I could prove that ZFS was able to find X amount of
>> duplicated data, and since that X amount of data has a price of Y per GB, …
On Dec 9, 2009, at 3:47 AM, Bruno Sousa wrote:
Hi Andrey,
For instance, I talked about deduplication to my manager and he was
happy, because less data = less storage and therefore lower costs.
However, now the IT group of my company needs to provide the management
board with a report of duplicated data found per share, and …
Hi,
The tools to report storage usage per share are du -h / df -h :) , so yes,
these tools could be deduplication-aware.
I know, for instance, that Microsoft has a feature (in Win2003 R2) called
File Server Resource Manager, and inside there's the possibility to make
Storage Reports, and one of those …
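For reference, a minimal sketch of the commands in question: du/df report usage as the filesystem presents it (dedup-unaware), while the ZFS-side dedup numbers are pool-wide only. The pool name `tank` below is a placeholder, not from the thread:

```shell
# Dedup-unaware usage reporting (the tools named above):
du -sh .   # on-disk usage of the current share/directory
df -h .    # used/free space of the underlying filesystem

# ZFS-side dedup numbers (pool-wide, not per share; pool name is
# a placeholder, so these lines are commented out here):
# zpool list tank   # DEDUP column shows the pool-wide dedup ratio
# zdb -S tank       # simulates dedup and prints a block histogram
```

Note that none of these give the per-share breakdown the thread is asking for, which is exactly why the question was raised.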
On Wed, Dec 9, 2009 at 2:47 PM, Bruno Sousa wrote:
> Hi Andrey,
>
> For instance, I talked about deduplication to my manager and he was
> happy, because less data = less storage and therefore lower costs.
> However, now the IT group of my company needs to provide the management
> board with a report of duplicated data found per share, …
Hi Andrey,
For instance, I talked about deduplication to my manager and he was
happy, because less data = less storage and therefore lower costs.
However, now the IT group of my company needs to provide the management
board with a report of duplicated data found per share, and in our case one
share means …
On Wed, Dec 9, 2009 at 2:26 PM, Bruno Sousa wrote:
> Hi all,
>
> Is there any way to generate a report related to the de-duplication
> feature of ZFS within a zpool/zfs pool?
> I mean, it's nice to have the dedup ratio, but I think it would also be
> good to have a report where we could see what …