You could effectively do this by using dedicated ZFS filesystems per jail
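A minimal sketch of that layout, assuming a pool named zroot and jail names that are placeholders:

```shell
# One dataset per jail, so each can be snapshotted, quota'd
# and delegated independently.
zfs create -o mountpoint=/usr/jails zroot/jails
zfs create zroot/jails/www
zfs create zroot/jails/db

# Per-jail snapshot/rollback then becomes trivial:
zfs snapshot zroot/jails/www@pre-upgrade
```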
On 09/06/2017 09:45, Willem Jan Withagen wrote:
Hi,
I'm writing/building a test environment for my Ceph cluster, and I'm
using jails for that
Now one of the things I'd be interested in is to pass a few raw disks to
Check out qjail; from your description I think it will do what you want.
On 22/02/2016 01:13, Aristedes Maniatis wrote:
I've been using FreeBSD jails (with ezjail) for many years and they work very
well. However I'm now reaching a critical mass (30+ jails) where I want to be
able to manage them
- Original Message -
From: "Patrick Dung"
Sent: Friday, October 25, 2013 4:53 PM
Subject: Re: [ADMIN] ZFS-FreeBSD + postgresql performance
I would also recommend to use 4K sector size using
gnop and zpool export/import.
This will have no effect; the ashift for a pool is set at creation time.
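For reference, the gnop recipe under discussion looks like the following; as the reply notes, it only matters at pool creation time, since ashift is fixed then (device names are examples):

```shell
# Create a 4K-sector gnop overlay and build the pool on it,
# so ZFS chooses ashift=12 when the pool is created.
gnop create -S 4096 /dev/da0
zpool create tank /dev/da0.nop

# Drop the overlay; the pool keeps the ashift it was created with.
zpool export tank
gnop destroy /dev/da0.nop
zpool import tank

# Verify:
zdb -C tank | grep ashift    # expect ashift: 12
```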
- Original Message -
From: "Bjoern A. Zeeb"
Sent: Tuesday, September 04, 2012 9:55 AM
Subject: Fixed Jail ID for ZFS -> need proper mgmt?
Hi,
I had been talking to someone about jail management and it turns out
people are using jail jid=42 to always have a fixed jail ID
- Original Message -
From: "Doug Barton"
So first question is, is there some sort of hard-coded limit somewhere?
If not, what is the largest number of jails that you've created
successfully/reliably on a system, and what are the specs for that system?
We happily run up ~80 single pr
ezjails are managed differently. I'm not familiar with it myself, but it
should all be there in the docs for ezjail
Regards
Steve
- Original Message -
From: Bender, Chris
To: Eirik Øverby ; Steven Hartland
Cc: freebsd-jail@freebsd.org
Sent: Tuesday, Janua
You should just need to put those jail_ lines without the export in
/etc/rc.conf, e.g.
jail_tools2_hostname="tools2"
jail_tools2_ip="172.19.4.41"
Along with jail_enable="YES" and then you should be good to just
run /etc/rc.d/jail start
We also tend to add the following which enables you to config
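For example, a sketch of the resulting /etc/rc.conf fragment, using the hostname and address from the lines above (jail_list and the rootdir path are assumptions):

```shell
# /etc/rc.conf
jail_enable="YES"
jail_list="tools2"
jail_tools2_hostname="tools2"
jail_tools2_ip="172.19.4.41"
jail_tools2_rootdir="/usr/jails/tools2"   # assumed path

# then start it with:
/etc/rc.d/jail start tools2
```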
Wanted to use mtr to diagnose an issue in a jail
but it seems it totally fails even with
security.jail.allow_raw_sockets: 1
Any ideas?
Regards
Steve
This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or e
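For reference, the raw-sockets knob mentioned above, sketched for both the legacy host-wide sysctl and the newer per-jail parameter (the jid is a placeholder):

```shell
# Legacy host-wide knob: permit raw sockets inside all jails.
sysctl security.jail.allow_raw_sockets=1

# On newer FreeBSD the permission is granted per jail instead:
jail -m jid=42 allow.raw_sockets=1
```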
- Original Message -
From: "Jamie Gritton"
In essence I think we can get the following flow, where 1# = process1
and 2# = process2:
1#1. prison1.pr_uref = 1 (single process jail)
1#2. prison_deref( prison1,...
1#3. prison1.pr_uref-- (prison1.pr_uref = 0)
1#4. prison1.mtx_unlock <-- this no
- Original Message -
From: "Andriy Gapon"
on 20/08/2011 23:24 Steven Hartland said the following:
- Original Message -
From: "Steven Hartland"
Looking through the code I believe I may have noticed a scenario which could
trigger the problem.
Give
- Original Message -
From: "Steven Hartland"
Something else you may be more interested in, Andriy:
I added in debugging options DDB & INVARIANTS to see if I can get more
useful info, and the panic results in a looping panic constantly scrolling up
the console. Not s
- Original Message -
From: "Andriy Gapon"
diff -u sys/kern/kern_jail.c.orig sys/kern/kern_jail.c
--- sys/kern/kern_jail.c.orig 2011-08-20 21:17:14.856618854 +0100
+++ sys/kern/kern_jail.c	2011-08-20 21:18:35.307201425 +0100
@@ -2455,7 +2455,8 @@
if (--tp
- Original Message -
From: "Steven Hartland"
Looking through the code I believe I may have noticed a scenario which could
trigger the problem.
Given the following code:-
static void
prison_deref(struct prison *pr, int flags)
{
struct prison *ppr, *tpr;
int vfslock
- Original Message -
From: "Andriy Gapon"
thanks for doing this! I'll reiterate my suspicion just in case - I think that
you should look for the cases where you stop a jail, but then re-attach and
resurrect the jail before it's completely dead.
Yeah, that's where I think it's happening
- Original Message -
From: "Roger Marquis"
Sent: Saturday, August 20, 2011 7:10 PM
Subject: Re: debugging frequent kernel panics on 8.2-RELEASE
Repeat this enough times and prison0.pr_uref reaches zero.
To reach zero even sooner, just kill enough non-jailed processes.
Inter
- Original Message -
From: "Andriy Gapon"
BTW, I suspect the following scenario, but I am not able to verify it either via
testing or in the code:
- last process in a dying jail exits
- pr_uref of the jail reaches zero
- pr_uref of prison0 gets decremented
- you attach to the jail and
- Original Message -
From: "Andriy Gapon"
Probably I have mistakenly assumed that the 'prison' in prison_deref() has
something to do with an actual jail, while it could have been just prison0 where
all non-jailed processes belong.
That makes sense as this particular panic was cause
- Original Message -
From: "Andriy Gapon"
That's interesting, are you using http as an example or is that something that's
been gleaned from the debugging of our output? I ask as there's only one process
running in each of our jails and that's a single java process.
It's from the debug d
- Original Message -
From: "Andriy Gapon"
Thanks to the debug that Steven provided and to the help that I received from
Kostik, I think that now I understand the basic mechanics of this panic, but,
unfortunately, not the details of its root cause.
It seems like everything starts with
- Original Message -
From: "Ian Downes"
Thanks, indeed jls -d does show the jail as in the process of dying. I
watched jls -d and (unscientifically) as soon as jls -d reported the jail was
completely dead I was able to umount and destroy the filesystem.
I hadn't expected it to take
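The check being described can be sketched as follows (jail name and dataset are placeholders):

```shell
# -d lists jails including those still dying:
jls -d

# Only once the jail no longer appears is it safe to unmount
# and destroy its filesystem:
umount /usr/jails/www/dev
zfs destroy -r zroot/jails/www
```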
- Original Message -
From: "Ian Downes"
$ jls
JID IP Address Hostname Path
What does jls -d say? i.e. is the jail really shut down, or is it still dying?
Regards
Steve
I would dearly like to see this make the 7.1 release; multiple IPs, in order
to support backend interfaces in jails, is something that we hit against
all the time.
Regards
Steve
- Original Message -
From: "Sami Halabi"
Sent: Wednesday, October 01, 2008 12:21
I shut down a jail on one of our 7.0-RELEASE boxes the other day, and
while doing some more maintenance on one of the other jails I noticed
it still listed in jls.
After doing some digging I found we have 60 sockets still open for
said jail.
tcp4 0 58500 X.X.X.X.80Y.Y.Y.Y.266
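Those lingering sockets can be listed per jail with sockstat; a sketch, where jid 5 stands in for the dead jail's ID:

```shell
# -j limits output to a specific jail ID, -4 to IPv4, -P to TCP:
sockstat -4 -j 5 -P tcp

# Or count connections still shown for the jail's address
# (X.X.X.X as in the netstat line above):
netstat -an -p tcp | grep 'X.X.X.X.80'
```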
- Original Message -
From: "Geoffroy DESVERNAY"
This is something we're really looking forward to tbh a great
feature :) One of the reasons for this is hosting jails, with
the addition of multi IP support we will be able to enable
jails to connect to "backdoor" secure services such as a
mysql server.
- Original Message -
From: "Bjoern