> I know Proxmox is a huge Debian fan... does Debian offer ZFS kernel modules
> and if not, why not? How about Proxmox VE?
Besides, I would like to improve support for more storage types in OpenVZ.
I think direct support for zfs, rbd, and dm-thin would be great (snapshot, clone).
But for me the cur...
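A hypothetical sketch of what such a backend with native snapshot/clone could look like, shown only for the zfs case; none of the class or method names below come from OpenVZ or Proxmox code, and the pool name is an assumption:

# Hypothetical sketch only -- not OpenVZ/Proxmox code. Shows the shape of
# a storage backend where snapshot/clone map to native operations.
import subprocess
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    @abstractmethod
    def snapshot(self, volume: str, snap: str) -> None: ...
    @abstractmethod
    def clone(self, volume: str, snap: str, target: str) -> None: ...

class ZfsBackend(StorageBackend):
    def __init__(self, pool: str = "tank"):   # pool name is an assumption
        self.pool = pool

    def snapshot(self, volume, snap):
        # ZFS snapshots are instant and consume no space until data diverges
        subprocess.run(["zfs", "snapshot",
                        f"{self.pool}/{volume}@{snap}"], check=True)

    def clone(self, volume, snap, target):
        # a clone is a writable filesystem based on an existing snapshot
        subprocess.run(["zfs", "clone",
                        f"{self.pool}/{volume}@{snap}",
                        f"{self.pool}/{target}"], check=True)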
> Unless I misunderstood, they also say there that ZFS code can be merged into
> the Linux source tree... but that distributing a binary built from it would be
> a no-no.
They claim distributing it as a binary module is no problem! They have split the
code into spl (the Solaris Porting Layer) and a separate zfs module...
Greetings,
- Original Message -
> > License issues of ZFS.
> >
> > License issues are not critical because installing ZFS is
> > straightforward; it does not require any deep integration with the
> > system or kernel and works on almost any kernel.
>
OpenZFS and zfsonlinux people claim that it is perfectly valid to ship a
zfs binary kernel module...
Hello!
I can't find any info about linking :(
But I found a big article from the ZoL team:
http://zfsonlinux.org/faq.html#WhatAboutTheLicensingIssue
It would be great if the OpenVZ team could add ZFS to the standard shipment :)
On Mon, Jan 12, 2015 at 10:00 AM, Dietmar Maurer wrote:
>> License issues of ZFS.
Hello, all!
Thank you for feedback!
Kirill, you are absolutely right, and this issue is mentioned in my
comparison table:
https://github.com/pavel-odintsov/OpenVZ_ZFS/blob/master/OpenVZ_containers_on_zfs_filesystem.md
But there is some progress in this area:
https://github.com/zfsonlinux/zfs/i
> License issues of ZFS.
>
> License issues are not critical because installing ZFS is
> straightforward; it does not require any deep integration with the
> system or kernel and works on almost any kernel.
OpenZFS and zfsonlinux people claim that it is perfectly valid to ship a
zfs binary kernel module...
BTW, Pavel, one issue which you or others might want to consider and test well
before moving to ZFS: 2nd-level (i.e. CT user) disk quotas.
One will have to emulate the Linux quota APIs and quota files to make this work,
e.g. some apps like cPanel call quota tools directly and, depending on the OS
installed in...
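A minimal sketch of the kind of emulation this implies, producing repquota-style lines from ZFS per-user accounting; the dataset name and the output format are illustrative assumptions, not an existing tool:

# Hypothetical sketch: report per-user usage for a container from ZFS
# accounting, since tools like cPanel expect Linux quota tooling rather
# than ZFS properties.
import subprocess

def zfs_user_usage(dataset):
    # "zfs userspace -H -p -o name,used,quota" prints one tab-separated
    # line per user with exact byte counts (real zfs CLI options).
    out = subprocess.run(
        ["zfs", "userspace", "-H", "-p", "-o", "name,used,quota", dataset],
        check=True, capture_output=True, text=True).stdout
    for line in out.splitlines():
        name, used, quota = line.split("\t")
        yield name, int(used), None if quota in ("-", "none") else int(quota)

for name, used, quota in zfs_user_usage("tank/private/101"):  # dataset name assumed
    limit_kb = (quota or 0) // 1024       # 0 means "no limit" to quota tools
    print(f"{name:<12} -- {used // 1024:>12} {limit_kb:>12}")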
Hello!
Because your question is very big, I will try to answer in multiple blocks :)
---
My disk space issue.
24GB is the wasted space from only one container :) Total wasted space
per server is about 900GB, and it's really terrible for me. Why?
Because I use server SSDs with hardware RAID arrays a...
Greetings,
- Original Message -
> And I checked my containers with 200% disk overuse from the first message
> and got a negative result: the 24GB of wasted space is not related to the
> cluster size issue.
Yeah, but 24GB is a long way off from your original claim (if I remember
correctly) of about 9
And I checked my containers with 200% disk overuse from the first message
and got a negative result: the 24GB of wasted space is not related to the
cluster size issue.
./ploop_fragmentation_checker.py /vz/private/41507/root.hdd/root.hdd
We count 43285217280 bytes
We count 6079506655 zero bytes
We count 3720571
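The checker script itself is not shown in the thread; a minimal sketch of what such a zero-byte counter could look like, assuming it simply scans the raw ploop image:

# Minimal sketch (assumed behaviour of the script above): count total
# bytes and zero bytes in a ploop image, i.e. space the guest freed but
# the image still keeps allocated.
import sys

CHUNK = 1024 * 1024  # read in 1 MiB chunks

def count_zero_bytes(path):
    total = zeros = 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)
            zeros += chunk.count(0)   # count zero bytes in this chunk
    return total, zeros

if __name__ == "__main__":
    total, zeros = count_zero_bytes(sys.argv[1])
    print(f"We count {total} bytes")
    print(f"We count {zeros} zero bytes")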
Hello, folks!
I read your message again and found the suggestion about decreasing the ploop
block size. But unfortunately that is not possible with vzctl in any
way; we can do it only with a direct call to ploop.
Because I can't change the block size or recreate the VE with another block
size, I tried to do some r...
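For reference, a sketch of the kind of direct ploop call meant here; per the ploop(8) man page, -b takes the cluster block size in 512-byte sectors, but treat the exact invocation as an assumption and check it against your ploop version:

# Sketch of a direct ploop call to create an image with a smaller cluster
# block (vzctl does not expose this). 512 sectors = 256KB instead of the
# default 2048 sectors = 1MB. Exact flags are an assumption.
import subprocess

subprocess.run(
    ["ploop", "init", "-s", "10G", "-b", "512",
     "/vz/private/41507/root.hdd/root.hdd"],   # path taken from this thread
    check=True)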
Hello!
Thank you! I will contact you off-list.
On Sat, Jan 10, 2015 at 4:44 PM, Kirill Korotaev wrote:
> Pavel,
>
> it’s impossible to analyze this just from `du` and `df` output, so please give me
> access if you want me to take a look into it.
> (e.g. if I created 10 million 1KB
Pavel,
it’s impossible to analyze this just from `du` and `df` output, so please give me
access if you want me to take a look into it.
(e.g. if I created 10 million 1KB files, du would show me 10GB while
ext4 (and most other file systems) would allocate 40GB in reality, assuming a
4KB block size)
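Kirill's arithmetic, spelled out (on Linux, st_blocks is counted in 512-byte units):

# "du -b" sums apparent file sizes, while the filesystem allocates whole
# blocks; st_size vs st_blocks shows the difference per file.
import os

def apparent_vs_allocated(path):
    st = os.stat(path)
    return st.st_size, st.st_blocks * 512   # apparent bytes, allocated bytes

# 10 million 1KB files on a 4KB-block filesystem:
n, file_size, fs_block = 10_000_000, 1024, 4096
print(n * file_size / 1e9, "GB apparent (what du -b sums)")   # ~10.2 GB
print(n * fs_block / 1e9, "GB actually allocated")            # ~41.0 GB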
It is also important to note that there is wasted space with ZFS as it is
right now if you use advanced-format drives (usually 2TB or larger).
When using ashift=12 (4K sector size) to create a ZFS RAID, you'll lose
about 10-20% of your disk capacity with ZFS depending on the RAID type.
I don't re...
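A rough, simplified model of where that loss comes from (illustrative only, not the exact ZFS allocator):

# With ashift=12 every raidz block is rounded up to whole 4K sectors,
# gets per-block parity, and is padded to a multiple of (parity + 1)
# sectors -- so small blocks waste more than the nominal parity ratio.
import math

def raidz_sectors(block_bytes, ndisks, parity, ashift=12):
    sector = 1 << ashift
    data = math.ceil(block_bytes / sector)
    rows = math.ceil(data / (ndisks - parity))
    total = data + rows * parity
    # raidz pads allocations to a multiple of (parity + 1) sectors:
    return math.ceil(total / (parity + 1)) * (parity + 1)

# 8K blocks on a 5-disk raidz1 with 4K sectors:
used = raidz_sectors(8192, ndisks=5, parity=1)          # -> 4 sectors
print(f"overhead: {used * 4096 / 8192:.2f}x vs nominal {5 / 4:.2f}x")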
Thank you, Kirill! I am grateful for your answer!
I reproduced this issue specially for you on one container with 2.4
times (240% vs 20%) overuse.
I did my tests with current vzctl and ploop 1.12.2 (with the fix for
http://bugzilla.openvz.org/show_bug.cgi?id=3156).
Please check this gist:
https://gist.
> On 09 Jan 2015, at 21:39, Pavel Odintsov wrote:
>
> Hello, everybody!
>
> Does somebody have any news about ZFS and OpenVZ experience?
>
> Why not?
>
> Did you check my comparison table for simfs vs ploop vs ZFS volumes?
> You should do it ASAP:
> https://github.com/pavel-odintsov/OpenVZ_ZF
Hello, everybody!
Does somebody have any news about ZFS and OpenVZ experience?
Why not?
Did you check my comparison table for simfs vs ploop vs ZFS volumes?
You should do it ASAP:
https://github.com/pavel-odintsov/OpenVZ_ZFS/blob/master/openvz_storage_backends.md
Still not interested?
For exa...