Re: Debian/kFreeBSD vs linux jail?

2013-04-05 Thread Christoph Egger
Hi!

Joshua Isom  writes:
> Considering Debian's ported the "standard Linux userland" to the
> FreeBSD kernel, I'm wondering if it's possible/practical to use Debian
> inside of a jail instead of a Linux CentOS jail, which has been
> documented.  I know some applications are Linux-specific, but are they
> really Linux-specific or GNU-specific?  I'm going to retry getting a
> printer driver working with CUPS that had issues with FreeBSD in the
> past, but I don't know if it's the FreeBSD userland or the FreeBSD
> kernel that caused the quirks.  Has anyone tried using Debian's kFreeBSD
> userland inside a jail?  Is it just pointless on a FreeBSD system?

If it is a free-software CUPS driver, chances are it is a GNU thing and
Debian GNU/kFreeBSD might work for you. For all the proprietary stuff
(say Flash, Acrobat, ...) Debian GNU/kFreeBSD is usually worse off than
either GNU/Linux or pure FreeBSD systems, because no commercial vendor
ever builds for this platform.

Christoph
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "freebsd-questions-unsubscr...@freebsd.org"


Re: Debian/kFreeBSD vs linux jail?

2013-04-05 Thread Eduardo Morras
On Thu, 04 Apr 2013 19:50:40 -0500
Joshua Isom  wrote:

> Considering Debian's ported the "standard Linux userland" to the FreeBSD
> kernel, I'm wondering if it's possible/practical to use Debian inside of
> a jail instead of a Linux CentOS jail, which has been documented.  I
> know some applications are Linux-specific, but are they really
> Linux-specific or GNU-specific?  I'm going to retry getting a printer
> driver working with CUPS that had issues with FreeBSD in the past, but I
> don't know if it's the FreeBSD userland or the FreeBSD kernel that caused
> the quirks.  Has anyone tried using Debian's kFreeBSD userland inside a
> jail?  Is it just pointless on a FreeBSD system?

A somewhat old tutorial (2011) on this topic:


http://blog.vx.sk/archives/22-Updated-Tutorial-Debian-GNUkFreeBSD-in-a-FreeBSD-jail.html
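
A rough sketch of the approach that tutorial takes (from memory, so treat
it as an outline rather than a verified recipe; the release name, paths and
addresses below are placeholders):

  # on the FreeBSD host, install debootstrap (sysutils/debootstrap)
  pkg install debootstrap
  # bootstrap a Debian GNU/kFreeBSD userland into the future jail root
  debootstrap --arch=kfreebsd-amd64 wheezy /jails/debian \
      http://ftp.debian.org/debian
  # then start it as an ordinary jail
  jail -c name=debian path=/jails/debian ip4.addr=192.0.2.10 \
      command=/bin/bash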


---   ---
Eduardo Morras 


Re: [ZFS] recover destroyed zpool - what are the available options?

2013-04-05 Thread Volodymyr Kostyrko

04.04.2013 19:26, Beeblebrox:

test them with `zdb -l device`. When the output looks correct, you've
guessed your slice!

LABEL 1

 version: 28
 name: 'bsdr'
 state: 2
 txg: 10
 pool_guid: 12018916494219117471
 hostid: 2193536600
 hostname: 'mfsbsd'
 top_guid: 17860002997423999070
 guid: 17860002997423999070
 vdev_children: 1
 vdev_tree:
 type: 'disk'
 id: 0
 guid: 17860002997423999070
 path: '/dev/ad6p2'
 phys_path: '/dev/ad6p2'
 whole_disk: 1
 metaslab_array: 30
 metaslab_shift: 31
 ashift: 9
 asize: 287855869952
 is_log: 0
 create_txg: 4

Do you mean that in this case 'asize 287855869952' is what I should look at?
But 287855869952 /1024 /1024 /2 => 137.260GB is far smaller than I recall
the geom part to be...


That math doesn't add up for me. But looking at ashift I can guess your disk
should be 287855869952/2**9 == 562218496 sectors. Is this one right?
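
For what it's worth, the asize in the label is in bytes, so the arithmetic
can be checked quickly in sh (numbers taken from the label above):

  # ashift=9 means 2^9 = 512-byte sectors
  echo $((287855869952 / 512))                 # => 562218496 sectors
  echo $((287855869952 / 1024 / 1024 / 1024))  # => 268 GiB

So the vdev is about 268 GiB - roughly twice the ~137 GB computed earlier,
which divided by an extra factor of two.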


Actually, if you can see all 4 labels correctly, you can try to proceed: ZFS
will guess the correct disk size anyway.


--
Sphinx of black quartz, judge my vow.


[ZFS] recover destroyed zpool - what are the available options?

2013-04-05 Thread Beeblebrox
> Actually, if you can see all 4 labels correctly, you can try to proceed:
> ZFS will guess the correct disk size anyway.

I should clarify:
# zdb -l /dev/ada0p2 => all 4 LABELS visible and correct (zpool name: bsdr)
# zdb -l /dev/ada0p1 => all 4 LABELS visible and correct (zpool name: asp)
# zdb -l /dev/ada0 => only LABEL #2 visible (this is an OLDER zpool with
GUID 5853256800575798014, also named bsdr; that pool was whole-disk-as-raw)
This is with the GPT table + partitions as I re-created them immediately
after the GPT delete. It looks like I have re-created the GPT partitions
correctly...

I don't understand what you mean by "you can try to proceed"?
# zpool import -D -f -R /bsdr -N -F -n -X bsdr 
cannot import 'bsdr': a pool with that name already exists
use the form 'zpool import <pool | id> <newpool>' to give it a new name
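
For reference, the "new name" form that the error message suggests looks
something like this (the new name here is a placeholder):

  zpool import -D -f -R /bsdr -N <id shown by zpool import -D> bsdr2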






Re: Regarding zfs send / receive

2013-04-05 Thread Joar Jegleim
Sounds like a good idea, I might look into that, thanks.
Terje: zpool.cache is only 860 bytes, I don't think that should cause any
problems (?)


-- 
--
Joar Jegleim
Homepage: http://cosmicb.no
Linkedin: http://no.linkedin.com/in/joarjegleim
fb: http://www.facebook.com/joar.jegleim
AKA: CosmicB @Freenode

--

On 5 April 2013 02:52, Waitman Gobble  wrote:

> Waitman Gobble
> San Jose California USA
>
> On Apr 4, 2013 2:07 PM, "Joar Jegleim"  wrote:
> >
> > Hi Terje !
> > sorry for the late reply, I've been checking my mail, forgetting that
> > all my mailing list mail is sorted into its own folders, skipping the
> > inbox :p
> >
> > the zfs sync setup is a huge advantage over rsync simply because an
> > incremental rsync of the volume takes ~12 hours, while the zfs
> > differential snapshots usually take less than a minute. Though it's only
> > ~1TB of data, it's more than 2 million jpegs which rsync has to stat ...
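
A minimal sketch of that send/receive pattern (dataset, snapshot and host
names here are hypothetical):

  # take a new snapshot and ship only the delta since the previous one
  zfs snapshot tank/photos@sync.new
  zfs send -i tank/photos@sync.prev tank/photos@sync.new | \
      ssh backuphost zfs receive -F tank/photos

Only changed blocks cross the wire, which is why no per-file stat of two
million jpegs is needed.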
> > I'm guessing my predecessor, who chose this setup over for instance
> > HAST, didn't feel confident enough regarding HAST in production (I'm
> > looking into that for a future solution).
> >
> > There's no legacy stuff on the receiving end, old pools are deleted for
> > every sync. I haven't got my script here, but google pointed me to
> > https://github.com/hoopty/zfs-sync/blob/master/zfs-sync which looks like
> > a script very similar to the one I'm using.
> > In fact, I'm gonna take a closer look at that script and see what
> > differs from my script (apart from it being much prettier :p )
> > I didn't know about zpool.cache, gonna check that tomorrow, thanks.
> >
> >
> >
> > --
> > --
> > Joar Jegleim
> > Homepage: http://cosmicb.no
> > Linkedin: http://no.linkedin.com/in/joarjegleim
> > fb: http://www.facebook.com/joar.jegleim
> > AKA: CosmicB @Freenode
> >
> > --
> >
> > On 2 April 2013 14:40, Terje Elde  wrote:
> >
> > > On 2. apr. 2013, at 13.44, Joar Jegleim wrote:
> > > > So my question(s) to the list would be:
> > > > In my setup have I taken the use case for zfs send / receive too far
> > > > (?) as in, it's not meant for this kind of syncing and this often, so
> > > > there's actually nothing 'wrong'.
> > >
> > > I'm not sure if you've taken it too far, but I'm not entirely sure if
> > > you're getting any advantage over using rsync or similar for this kind
> > > of thing.
> > >
> > > First two things that spring to mind:
> > >
> > > Do you have any legacy stuff on the receiving machine?  Things like
> > > physically removed old zpools that are still in zpool.cache seem to
> > > slow down various operations, including creation of new stuff (such as
> > > the snapshots you receive).
> > >
> > > Also, you don't mention whether you're deleting old snapshots on the
> > > receiving end?  If you're doing an incremental run every 15 minutes,
> > > that's something like 3000 snapshots per month, per filesystem.
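
A minimal pruning sketch for the receiving side (the dataset name and the
retention count of 48 are made up):

  # keep the 48 newest snapshots of the dataset, destroy the rest
  zfs list -H -t snapshot -o name -S creation -r -d 1 tank/photos | \
      tail -n +49 | xargs -I{} zfs destroy {}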
> > >
> > > Terje
> > >
> > >
> >
>
> hi,
> i have a similar situation. it's better to only rsync new stuff in this
> case, because you should know when somebody adds something new.
>
> for example, a user uploads 200 new images; these are marked 'to sync' and
> are transferred to the other servers. letting rsync figure out what's new
> just isn't practical.
>
> an idea, works for me. hope it helps.
>
> Waitman Gobble
> San Jose California


Re: [ZFS] recover destroyed zpool - what are the available options?

2013-04-05 Thread Volodymyr Kostyrko

05.04.2013 11:54, Beeblebrox:

Actually, if you can see all 4 labels correctly, you can try to proceed: ZFS
will guess the correct disk size anyway.

I should clarify:
# zdb -l /dev/ada0p2 => all 4 LABELS visible and correct (zpool name: bsdr)
# zdb -l /dev/ada0p1 => all 4 LABELS visible and correct (zpool name: asp)
# zdb -l /dev/ada0 => only LABEL #2 visible (this is an OLDER zpool with
GUID 5853256800575798014, also named bsdr; that pool was whole-disk-as-raw)
This is with the GPT table + partitions as I re-created them immediately
after the GPT delete. It looks like I have re-created the GPT partitions
correctly...

I don't understand what you mean by "you can try to proceed"?
# zpool import -D -f -R /bsdr -N -F -n -X bsdr
cannot import 'bsdr': a pool with that name already exists
use the form 'zpool import <pool | id> <newpool>' to give it a new name


Ok, let's check a few things:

zpool import

zpool import -D

From your previous mails I saw that pool bsdr is FAULTED but not
deleted. If the system lists bsdr under plain `zpool import`, you should
drop -D from the command.


--
Sphinx of black quartz, judge my vow.


[ZFS] recover destroyed zpool - what are the available options?

2013-04-05 Thread Beeblebrox
Thank you for your help Volodymyr,

1. ZPOOL LIST shows that the pool is listed:
NAME    SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH   ALTROOT
bsdr    -      -      -      -    -      FAULTED  -
tank0   49.8G  13.3G  36.5G  26%  1.00x  ONLINE   -

2. ZPOOL IMPORT => no pools available to import
3. zpool import -D -f -R /bsdr -N -F -n -X bsdr =>
   gives an error because of condition (#1)
4. ZPOOL IMPORT -D shows 2 bsdr pools:
A) config:  bsdr                  UNAVAIL  insufficient replicas
            5853256800575798014   UNAVAIL  cannot open
   (THIS IS NOT THE POOL I WANT - THIS ONE IS THE OLDER, WHOLE-DISK-RAW POOL)
B) config:  bsdr                  UNAVAIL  insufficient replicas
            17860002997423999070  UNAVAIL  cannot open
   (THIS SHOULD BE THE POOL I NEED, BUT SEE THE PROBLEM IN #5)
5. ZPOOL STATUS -V BSDR shows a different guid!!
config: bsdr                  UNAVAIL  0 0 0
        12606749387939346898  UNAVAIL  0 0 0  was /dev/ada0p2
   (THIS GUID DOES NOT MATCH THE GUID OF 4-B)
In my opinion it makes sense that the guids do not match, but that is why I
cannot import pool 4-B. I must either delete the bsdr pool that is shown as
"on-line", or import 4-B under another name, I think.

Thanks and Regards.







[ZFS] recover destroyed zpool - what are the available options?

2013-04-05 Thread Beeblebrox
I think I might have a better understanding of the situation.

'zpool status' and 'zdb -C' commands both show bsdr properties as:
pool_guid: 17852168552651762162
children[0]:  \  type: 'disk'  \  guid: 12606749387939346898

Whereas 'zpool import -D' and 'zdb -l' commands give the bsdr properties as:
pool_guid: 12018916494219117471
vdev_tree:  \  type: 'disk'  \  top_guid: 17860002997423999070

Since the LABEL info on the HDD is the more relevant data, I should be using
the output of zdb -l, and disregard the pool that shows itself as "already
imported - albeit faulted". I therefore plan to import the bsdr pool under a
new name and should run the import command as:

zpool import -D -f -R /bsdr -N -F -n -X 12018916494219117471 newname

Any objections? We must also keep in mind that the '-D' flag shows TWO
deleted bsdr pools, so I must use the unique ID.
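
For reference, what those flags do (per zpool(8); -X is only lightly
documented):

  # -D  consider destroyed pools       -f  force the import
  # -R  set an altroot for the pool    -N  import without mounting datasets
  # -F  rewind to the last good txg if the current state is unreadable
  # -n  with -F: dry run only, report whether the rewind would succeed
  # -X  with -F: allow extreme measures when searching for a valid txg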
Please let me know if I should go ahead and run above command.
Thanks and Regards.





[ZFS] recover destroyed zpool - what are the available options?

2013-04-05 Thread Beeblebrox
Sadly, the command I ran did nothing - no error message, no output, no
result:
# zpool import -D -f -R /bsdr -N -F -n -X 12018916494219117471 newname
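
One likely explanation (inferred from zpool(8) rather than from this
thread): combined with -F, the -n flag turns the recovery into a dry run,
so a silent exit may simply mean the rewind test passed without importing
anything. Dropping -n should attempt the real import:

  zpool import -D -f -R /bsdr -N -F -X 12018916494219117471 newname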


