How will it work if --cpus is not specified in the CT config?
How will it work if --cpus is specified and is less than the number of physical cores?
How will it work if --cpus is specified and the CPU has hyper-threading, and
1) --cpus is less than the number of CPU cores,
2) or --cpus is less and odd(!) (example: --cpus 3, physical CPU cores: 4
+ HT)?
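For reference, a minimal sketch of the two knobs involved (CT 101 is a
placeholder; this shows only the syntax, not an answer to the scheduling
question):

vzctl set 101 --cpus 3 --save        # limit the CT to 3 logical CPUs (odd values are accepted)
vzctl set 101 --cpulimit 150 --save  # separate knob: CPU time in percent, 100% = one core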
Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users
Hello all,
A couple of years ago vzctl had a --noatime option, but now there is no such
option:
# vzctl set ${ve} --noatime yes --save
non-option ARGV-elements: --save
# man vzctl | grep noatime
#
What happened to it? I did not find anything about it on Google.
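In the meantime, a hedged workaround sketch: set noatime on the /vz mount on
the hardware node, assuming the atime flags propagate to the container
filesystem (worth verifying for simfs; the device name is a placeholder):

# /etc/fstab on the hardware node
/dev/sdb1  /vz  ext4  defaults,noatime  0  2
# apply without a reboot:
mount -o remount,noatime /vz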
--
Best Regards,
Nick Knutov
Hello all,
Is it possible to set a CPU limit in % for a specific user inside a CT?
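Not with vzctl itself as far as I know; a hedged userspace workaround is the
cpulimit tool, which throttles processes by percentage from inside the CT:

cpulimit -l 50 -p 1234     # cap PID 1234 at 50% of one core
cpulimit -l 50 -e php-cgi  # or match processes by executable name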
--
Best Regards,
Nick Knutov
...compiling and using it.
>
> Any other SSD caching software that works with openvz?
--
Best Regards,
Nick Knutov
> I knew about a few incidents with ___FULL___ data loss from customers of
> flashcache. Beware of it in production.
>
> If you want speed you can try ZFS with l2arc/zvol cache because it's a
> native solution.
--
Best Regards,
Nick Knutov
> ...(http://zfsonlinux.org/) is as stable but more performant than it is on the
> OpenSolaris forks... so you can build your own if you can spare the people to
> learn the best practices.
>
> I don't have a use for ZFS myself so I'm not really advocating it.
>
> TYL,
>
--
Best Regards,
Nick Knutov
> ...ext4 in individual zvols? I have done some testing with root and private
> directly on a zfs file system and so far everything seems to work just fine.
>
> What am I to expect down the road?
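A minimal sketch of that layout, in case it helps others reproduce it (pool
name tank and CT 101 are placeholders, not a recommendation):

zfs create -V 20G tank/ct101     # one zvol per container
mkfs.ext4 /dev/zvol/tank/ct101   # ext4 on top of the zvol
mount /dev/zvol/tank/ct101 /vz/private/101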
--
Best Regards,
Nick Knutov
>> > ...change, I don't see why the inode numbers should change.
> Do you have a really working zero-downtime vzmigrate on ZFS?
>
--
Best Regards,
Nick Knutov
I have an old server with regular disks and a new server with two SSDs of
smaller size. I have /vz on one disk and /vz2 on another.
I want to live migrate CTs from the old server to a specified partition on
the new server, but I can't find how to do it. Does anybody know?
--
Best Regards,
...&& run CT
--
Best Regards,
Nick Knutov
with mount
--bind, but this is also not a good way.
On 12.09.2014 5:33, Devon B. wrote:
> On 9/11/2014 7:00 PM, Nick Knutov wrote:
>> I have an old server with regular disks and a new server with two SSDs of
>> smaller size. I have /vz on one disk and /vz2 on another.
>>
>> I
the second SSD
> prior to migrating.
>
> mkdir /vz2/private/VEID
> ln -s /vz2/private/VEID /vz/private/VEID
>
> Then try the migration, does it work?
>
> On 9/11/2014 8:51 PM, Nick Knutov wrote:
>> I'm not good enough with such OpenVZ internals and hoped there
> On 9/11/2014 9:57 PM, Nick Knutov wrote:
>> I did exactly so.
>>
>> Migration to a symlink is working, and the CT runs OK afterwards. But the
>> private/root paths are rewritten to /vz after migration, and for simfs with
>> billions of small files running the CT from a symlink can be
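A hedged sketch of the cleanup I mean, repointing the CT at the real /vz2
path instead of leaving it behind the symlink (CT 101 is a placeholder):

vzctl stop 101
rm /vz/private/101                             # drop the symlink; the data is in /vz2
mkdir -p /vz2/root/101
sed -i 's|/vz/|/vz2/|g' /etc/vz/conf/101.conf  # fix VE_PRIVATE and VE_ROOT
vzctl start 101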
vzctl set ${ve} --diskinodes 10:10 --save
and there are about 20 GB of files inside this CT.
2.6.32-042stab093.5 and latest ploop.
--
Best Regards,
Nick Knutov
ized
> to as low as 40G, the minimum seems to be around 240G (values printed in
> the error message are in sectors which are 512 bytes each).
>
> Solution: please be reasonable when requesting diskinodes for ploop.
--
Best Regards,
Nick Knutov
> ...converting. So for 40 GiB, set diskinodes to 2621440
>
> Either that, or just remove the DISKINODES from CT config
>
>> On 10/24/2014 8:05 PM, Nick Knutov wrote:
>>>
>>> Thanks, now I understand why this occurred, but what is the easiest way
>>> to conver
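The 2621440 figure above follows from ext4's default bytes-per-inode ratio
(16384, see /etc/mke2fs.conf), which is what the image size works out to:

40 GiB = 40 * 1024^3 = 42949672960 bytes
42949672960 bytes / 16384 bytes per inode = 2621440 inodes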
over iSCSI...
...and there are still no speed tests.
On 12.11.2014 15:20, Pavel Odintsov wrote:
> Any questions/suggestions/performance test and other feedback are
> welcome here or on GitHub!
--
Best Regards,
Nick Knutov
> ...n't this an issue by design. Quotas may be a problem, though; good
> addition. I just added a remark about quotas to the comparison table.
>
> On Wed, Nov 12, 2014 at 9:56 PM, Nick Knutov wrote:
>> Well, good beginning, but...
>>
>> as we discussed earlier:
>>
>> in most cases o
Oh, I missed this.
On 13.11.2014 2:28, Devon B. wrote:
> I don't think you can just run ploop over ZFS. Ploop requires ext4 as
> the host filesystem according to bug 2277:
> https://bugzilla.openvz.org/show_bug.cgi?id=2277
--
Best Regards,
Nick Knutov
...te`. Is it possible?
(I know I can edit the source; I just want to check whether it is already
implemented, since I can't find it.)
--
Best Regards,
Nick Knutov
https://jira.sw.ru/browse/PSBM-34874
>
> 6. What was checked by the developer
>
> a) Two servers connected with a crossover. Measured HTB accuracy,
> got the following results:
> https://jira.sw.ru/browse/PSBM-18245?focusedCommentId=2525949&page=com.atlassian.jir
...Bronnikov wrote:
> we want to find people who still use simfs for OpenVZ containers.
> Do we have such users?
--
Best Regards,
Nick Knutov
Hello all,
what are the best/recommended mount options for ext4 on SSD disks for a
large number of ploop-only CTs?
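For comparison, a hedged example of one common setup (the device is a
placeholder, and this is not an official recommendation):

# /etc/fstab on the node
/dev/sda3  /vz  ext4  defaults,noatime  0  2
# run TRIM periodically (e.g. weekly from cron) instead of mounting with discard:
fstrim /vz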
--
Best Regards,
Nick Knutov
Is it possible to do live migration between physical disks inside one
physical node?
I suppose the answer is still no, so the question is: what can be done
instead?
--
Best Regards,
Nick Knutov
..."vzctl restore CTID" after you
changed the configuration file.
>
> On 09/08/2015 07:14 AM, Nick Knutov wrote:
>
> > Is it possible to do live migration between physical disks inside one
> > physical node?
>
> > I suppose the answer is
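A hedged sketch of that checkpoint/restore approach (CT 101 and the target
paths are placeholders):

vzctl chkpnt 101                     # suspend the CT to a dump file
mv /vz/private/101 /vz2/private/101  # move the private area to the other disk
mkdir -p /vz2/root/101
# edit VE_PRIVATE / VE_ROOT in /etc/vz/conf/101.conf to the new paths
vzctl restore 101                    # resume from the dump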
Cancelling...
so I have to log in via another ssh session and kill -9 it.
Kernel: 042stab108.8
Is it a bug, or am I doing something wrong?
--
Best Regards,
Nick Knutov
I know ipset is not virtualized, but I have a number of trusted CTs and I
want to use ipset inside them (and it's OK in my case to share all data
between the CTs and the node).
Is it possible to enable ipset for selected CTs?
--
Best Regards,
Nick Knutov
...will data=ordered (and still tune2fs -O ^has_journal)
be fine?
Has that bug fix already been built and pushed to the yum repository (the
ploop package, I suppose)?
On 07.10.2015 17:03, Dmitry Monakhov wrote:
> Sergey Bronnikov writes:
>
>> Dima, could you help?
>>
>> On 02:08 Wed 30 Sep , Nic
...data=ordered?
On 07.10.2015 21:05, Dmitry Monakhov wrote:
Nick Knutov writes:
yes, I'm using SSDs.
Partition was
tune2fs -O ^has_journal /dev/sdX
so I thought the journal was removed completely and the data= option is not
important at all.
WOW... This is hilarious. Indeed, even w/o journal ext4 show jo
Is it possible to recalculate the quota without stopping the VDS and doing
vzquota drop?
A case from real life:
vzmigrate (or vzmove, which I plan to release soon) with an exclude filter
for rsync, to skip hundreds of gigabytes of cache files.
--
Best Regards,
Nick Knutov
...can't migrate
Error: CPU capabilities check failed!
Error: Destination node CPU is not compatible
Error: Can't continue live migration
Should it be so? What can be done about this?
Thanks
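One hedged fallback: as far as I understand, the CPU capabilities check
belongs to the live (CPT) path, so an offline migration should still go
through, at the cost of downtime:

vzmigrate dst.example.com 101   # without --online: stop, copy, start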
--
Best Regards,
Nick Knutov
again.
On 2015-10-28 12:22 GMT+03:00, Nick Knutov <m...@knutov.com> wrote:
Hello all,
I have CT with sshfs mounted. When I tried to migrate this CT I got:
Starting live migration of CT ... to ...
OpenVZ is running...
Checking for CPT version compatibility
Checking
No. Even flashcache 2.x cannot be compiled against recent OpenVZ RHEL6
kernels.
On 13.11.2015 15:57, CoolCold wrote:
Bumping up - anyone still on flashcache & OpenVZ kernels? Tried to
compile flashcache 3.1.3 dkms against 2.6.32-042stab112.15, getting
errors:
--
Best Regards,
SSD acceleration, i.e. it is distributed and it offers file system corruption
prevention (background scrubbing).
--
Best Regards,
Nick Knutov
...storage in clusters of more than 7-9 nodes
and wishes to share his or her experience, that's more than welcome.
Thanks,
Corrado
On 16/11/2015, at 4:44 AM, Nick Knutov wrote:
Unfortunately, pstorage has two major disadvantages:
1) it's not free
2)
After kill -9 I have an empty folder at /vz5/private/2016 (not
2016.tmp!)
dmesg | tail
ploop19205: unknown partition table
What could be wrong?
--
Best Regards,
Nick Knutov
...Thank you.
--
Best regards,
Konstantin Khorenko,
Virtuozzo Linux Kernel Team
On 01/28/2016 02:42 PM, Nick Knutov wrote:
Hello,
One of the big reasons to prefer simfs over ploop is the disk space overhead
in ploop after using snapshots (for backups, for example).
It can be really huge: we have one CT which takes 1
really be much more efficient than it
is now without committing the write log, which is what happens when
you merge/delete the snapshot.
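For what it's worth, a hedged sketch of reclaiming the space once the
snapshot is gone, assuming a recent enough vzctl/ploop (CT 101 and the UUID
are placeholders):

vzctl snapshot-delete 101 --id <UUID>   # merge/drop the snapshot first
vzctl compact 101                       # then shrink the ploop image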
--
Best Regards,
Nick Knutov
OK, OVZ-6680 created.
On 04.02.2016 13:16, Konstantin Khorenko wrote:
Hi Nick,
I haven't found a Jira issue from you; have you filed it?
On 01/29/2016 05:04 AM, Nick Knutov wrote:
Yes, the question is about ploop, of course.
How do I get the metadata of a ploop image? `man what`?
If you are ok
I think I saw it in the wiki but am unable to find it now.
How do I create a ploop CT with vzctl create using a smaller ploop block size
than the default 1 MB? Can I change it in some config file?
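I could not find a vzctl knob for this either; a hedged manual route is
ploop init, whose -b option takes the cluster block size in 512-byte sectors
(so 256 KiB = 512 sectors; the path is a placeholder):

ploop init -s 10G -b 512 -t ext4 /vz/private/101/root.hdd/root.hds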
--
Best Regards,
Nick Knutov
Hello,
is it possible now to limit CPU per user inside a CT? I assume it should be
possible with cgroups, but I don't know exactly what keywords I should
google. A sketch follows below.
Kernel: latest OpenVZ 6.
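A hedged sketch of the cgroups side, assuming the cpu controller can be
mounted inside the CT on this kernel (user and group names are placeholders):

mkdir -p /cgroup/cpu
mount -t cgroup -o cpu none /cgroup/cpu
mkdir /cgroup/cpu/limited
echo 100000 > /cgroup/cpu/limited/cpu.cfs_period_us
echo 50000  > /cgroup/cpu/limited/cpu.cfs_quota_us   # 50% of one core
for p in $(pgrep -u someuser); do echo $p > /cgroup/cpu/limited/tasks; done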
--
Best Regards,
Nick Knutov
Second, the logo for OpenVZ projects is updated to be more aligned with the
Virtuozzo logo. The artwork, including buttons, is available in
the OpenVZ wiki [2].
...
Links
=
[1] https://lists.openvz.org/pipermail/users/2016-June/006927.html
[2] https://openvz.org/Artwork
...
--
Best Regards,
Hello all,
will PCI-e NVMe drives like the Intel P3600 and P3608 work with OpenVZ 6 if
the drive is not the boot drive?
Or should I forget about NVMe until Virtuozzo 7?
--
Best Regards,
Nick Knutov
As far as I understand, the Virtuozzo 7 kernel DOES NOT contain the latest
NVMe driver, and the RHEL 7 kernel has some speed problems with NVMe.
Are there any official recommendations or suggestions from the OpenVZ team?
--
Best Regards,
Nick Knutov
Is OpenVZ affected by Dirty COW?
What is the best way to fix it now?
--
Best Regards,
Nick Knutov
https://www.spinics.net/lists/stable/msg147964.html
On 21.10.2016 19:39, Vasily Averin wrote:
yes
2.6.22+ are affected
here you can find a SystemTap script for mitigation:
https://bugzilla.redhat.com/show_bug.cgi?id=1384344#c13
On 21.10.2016 19:22, Nick Knutov wrote:
Does OpenVZ affected by Dirty
Hello all,
`top` shows privvmpages as used memory on all the latest OpenVZ 6 kernels,
instead of oomguarpages.
Is it possible to fix this?
I suppose it happened after the COW bug was fixed.
PS: vswap is used, of course.
--
Best Regards,
Nick Knutov
Hello all,
Is it possible to use sa inside CTs?
When I do
# sa -im
inside a CT I get
sa: ERROR -- print_stats_nicely called with num_calls == 0
But all seems to be OK if run on the node.
--
Best Regards,
Nick Knutov
...in the logfiles.
AFAIK accton (= BSD process accounting) can't be run in an OpenVZ container.
Bye,
Thorsten
--
Best Regards,
Nick Knutov
Done: http://bugzilla.openvz.org/show_bug.cgi?id=1380
How can I find out when it is planned to be done? Will it be in the near
future?
On 09.11.2009 19:54, Kir Kolyshkin wrote:
Nick Knutov wrote:
Yes,
# accton on
Turning on process accounting, file set to the default
'/var/log/account/pacct'
Hello all,
are there any known problems with ext4 on the latest development RHEL6
OpenVZ kernel?
--
Best Regards,
Nick Knutov