I build code using static linking for deployment across a set of
machines. For me this has a lot of advantages - I know that the
code will run, no matter what the state of the ports is on the
machine, and if there is a need to upgrade a library then I do it
once on the build machine, rebuild the ex
Interesting reading the responses to this from over the weekend,
and I think that Stuart Barkley's comment below strikes the biggest
chord with me:
> Today, I probably wouldn't fight using dynamic linking. I do wish
> things would continue to provide static libraries unless there are
> specific r
Just saw that the TRIM support for UFS has been MFC'd. Excellent stuff.
I was wondering if there were any plans to do similar for ZFS at all ?
thanks,
-pete.
If you want failover using lagg then your best bet is to get lagg between
two ports on different switches. If you have a pair of switches which
will present as a single device then you can use LACP to do this, else
use simple failover. I do this for all our servers and it works very
nicely.
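For reference, a minimal rc.conf sketch of what I mean (interface names and the
address are just placeholders):

  ifconfig_bce0="up"
  ifconfig_bce1="up"
  cloned_interfaces="lagg0"
  # failover: bce0 active, bce1 takes over if the link drops
  ifconfig_lagg0="laggproto failover laggport bce0 laggport bce1 192.0.2.10/24"
  # with a stacked/paired switch, replace "failover" with "lacp"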
In you
> Why may it hurt ? How may it hurt ? Which sector is written to by
> this 'gpart' command ?
>
> As far as I understand, GPT writes some stuff at the beginning and
> the end of the harddisk.
Yup, this is true.
> How/why will the newfs overwrite those parts ?
Because you are also giving it the wh
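In other words, point newfs at the partition, not the raw disk - something
like this (ada0 is just an example):

  gpart create -s gpt ada0
  gpart add -t freebsd-ufs ada0
  newfs /dev/ada0p1     # safe - stays inside the partition
  # newfs /dev/ada0     # would trample the GPT metadata at both ends of the disk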
I haven't investigated far enough yet to see if this is the same problem, but
I am also seeing hangs on em0 when under heavy load. This is 8-STABLE from
the 17th at around 3pm.
em0@pci0:0:25:0: class=0x02 card=0x281e103c chip=0x10bd8086 rev=0x02 hdr=0x00
    vendor = 'Intel Corporatio
Just as an addendum to this - in my case ifconfig down/up does not fix
the problem. I need a reboot for the network to come back normally.
-pete.
I found a solution to my em0 problem - I dropped in a PCI card with two
Intel controllers on it (em1 and em2 obviously) and those work fine.
So, the question is, what is the difference between the plug-in card,
which is 82546EB, and the onboard 82566DM controller which makes the
latter stop workin
> If you ifconfig down/up the interface does it come back to life?
Nope - has no effect. See another separate post of mine about how
I added another em card, and that works fine.
> So, please, someone, somewhere, share a success story, where you're
> using FreeBSD, ZFS, and HAST. Let me know that it does work. I'm
> starting to lose faith in my abilities here. :(
I ran our main database for the old company using ZFS on top of HAST
without any problems at all. Had a sing
> It is not a hastd crash, but a kernel crash triggered by hastd process.
>
> I am not sure I got the same crash as you but apparently the race is possible
> in g_gate on device creation.
>
> I got the following crash starting many hast providers simultaneously:
This is very interesting to me - my
> I agree with that. I had problems with Flash on AMD64 so sometimes
Am impressed - I didn't realise it was possible at all under amd64! I
ended up using 'gnash' which doesn't really do the job to be honest,
but it's better than nothing. These days I find the best solution is
keeping a copy of Windows
> Yes, you may hit it only on hast devices creation. The workaround is to avoid
> using 'hastctl role primary all', start providers one by one instead.
Interesting to note that I just hit a lockup in hast (the discs froze
up - could not run hastctl or zpool import, and could not kill
them). I have
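For reference, "one by one" as suggested above would be something like this
(provider names are just examples):

  # instead of:  hastctl role primary all
  for p in disk0 disk1 disk2 disk3; do
          hastctl role primary $p
          sleep 1
  done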
> The other 5% of the time, the hastd crashes occurred either when
> importing the ZFS pool, or when running multiple parallel rsyncs to
> the pool. hastd was always shown as the last running process in the
> backtrace onscreen.
This is what I am seeing - did you manage to reproduce this with the
> This looks like a different problem. If you have this again please provide the
> output of 'procstat -kka'.
Will do...
-pete.
> Adding some swap would help a lot more.
So, I run a lot of systems without swap - basically my
thinking at the time I set them up went like this.
"I have 4 gig of memory, and 4 gig of swap. Surely running 8 gig of
memory and no swap will be just as good ?"
but, is that actually true ? Is real
> Having swap provides some cushion. Swap kind of smooths any bursts. (And it can
> also slow things down as a side effect)
This is why I got rid of it - my application is a lot of CGI scripts. The
overload condition is that we run out of memory - and we run *way* out
of memory, it's never
> My original idea was to set up blades so that they run HAST on pairs of
> disks, and run ZFS in number of mirror vdevs on top of HAST. The ZFS
> pool will exist only on the master HAST node. Let's call this setup1.
This is exactly how I run things. Personally I think
it is the best solution, a
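Concretely, the pool creation on the master node looks something like this
(provider names are just examples - each hast device is itself replicated to
the other node):

  zpool create tank \
      mirror /dev/hast/disk0 /dev/hast/disk1 \
      mirror /dev/hast/disk2 /dev/hast/disk3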
> Everything is detected correctly, everything comes up correctly. See
> a new option (reload) in the RC script for hast.
same here - have patched the master database machines, all came up fine,
everything running perfectly, have flip-flopped between the two machines
with no ill effects whatsoever,
I updated to STABLE yesterday to get the new hast patches - all seemed fine,
so I went round and upgraded all the machines. But since then have been
fighting with some odd network issues - to the point where I have rolled
back to an earlier kernel to fix them.
The main issue for me appears to be t
Following on from what I wrote about CARP, I've noticed another strange
thing happening with IPv6. Things which appear not to connect, and then
connect after a few seconds. To me it looks like ndp is having a
hard time mapping addresses to ether addresses. What I see is that
if a ping doesn't work an
> I haven't modified the ndp related code for quite a long time, but recently have seen
> some postings regarding incomplete ndp entries (still catching up on emails).
>
> Changes committed in the past year or so were related to locking and memory leak, not
> functional updates.
>
> I have se
> I assume both are stable/8? Are you sure on the Feb 25th kernel
> doing ok or could it be that you are just more lucky?
I've rolled everything back now to a kernel from April 1st - that
works OK. The one from April 11th does not. If I get some time I will
narrow it down.
> I would assume that
> One nice feature of ZFS I have discovered is with USB flash media. You
> are not typically supposed to write much to that media, but using UFS on
> USB sticks is awful. On contrary, when used with ZFS, the USB sticks
> behave much differently, because ZFS will group writes and not do silly
>
> Correct. The layering is not, in itself, the issue. The issue is
> that the loader or kernel or whatever reads the first sector of the
> disk, finds a GPT so it then looks for the backup GPT in the last
> physical sector of the disk and doesn't find it. At this point,
> gmirror is not loaded (
> Is this simple to do? When I setup my home ZFS server, I couldn't get
> it to boot from ZFS, so I configured 2 disks as 'boot' discs:
It's fairly simple - I generally don't boot from ZFS either; my
standard config has a 4 gig UFS boot partition, and then a large zpool
on the rest of the drive. usu
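The layout is roughly this (disk name and sizes are only illustrative):

  gpart create -s gpt ada0
  gpart add -t freebsd-boot -s 64k ada0
  gpart add -t freebsd-ufs -s 4g ada0       # the 4 gig UFS boot/root
  gpart add -t freebsd-zfs ada0             # the rest of the drive
  gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0
  zpool create tank ada0p3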
> While zfs on geli is less complex (in the sense that geli on zfs
> involves two layers of filesystems), I'm concerned as to whether
> encrypting the device will somehow affect zfs' ability to detect
> silent corruption, self-heal, or in any other way adversely affect
> zfs' functionality. In my
> iSCSI as in the target (server) function? net/istgt in ports seemed ok
> last time I tried it.
Plus the older (and simpler) net/iscsi-target works fine.
Quite some time ago I did spend a lot of time playing around with
the initiator. My experiences were that it works very nicely when
connecti
> Jeremy may not have seen PCI express x16 HBAs working on consumer
> boards, but I have, plenty of times.
Ditto. Indeed, I wasn't aware you could do anything to a PCIe slot
to prevent it working. Have seen motherboards which won't *boot* from
an HBA in a PCIe slot, but never one which couldn't use
Can anyone see any problem in doing this ? i.e. creating a vlan interface
which doesn't correspond to any physical interface, just as a place to hang IP
addresses. I am trying to work around a problem with carp and ndp when
there are multiple IPv6 addresses bound to it.
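What I have in mind is simply something like (2001:db8:: is just a
placeholder prefix):

  ifconfig vlan0 create
  ifconfig vlan0 inet6 2001:db8:1::1/64 up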
cheers,
-pete.
> Does it specifically have to be a vlan(4), or can you perhaps add another
> address to lo(4), or perhaps create a "lo1" in addition to the "lo0"?
It can be anything really - I was looking for a "generic" interface
I can configure with IP addresses. But adding real addresses to
loopback interface
> I uploaded a patch last night for this issue, it's sitting at
>
>http://people.freebsd.org/~qingli/in6.c.diff
I just tested this and it works fine for me too - was actually tearing
my hair out, as I asked how to do this on STABLE last week
and got the answer "clone lo1", which I tested and work
I upgraded my system to -stable on January 6th, and since then I
have noticed a very odd problem. I have a zpool with 4 drives in it,
and one of them is always 'OFFLINE' - if I put it online and it
starts resilvering then another one immediately goes offline.
It's the same two drives alternating
So, I was trying to create a disc with a sector size of 4096 bytes, and I
assumed that simply creating a zvol with that blocksize would do the trick.
But it appears that whatever the blocksize is on the zvol, diskinfo is
reporting the sector size as 512 bytes.
Is this the intended behaviour ? I don
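What I did, roughly (pool/volume names are just examples):

  zfs create -V 10g -o volblocksize=4k tank/testvol
  diskinfo -v /dev/zvol/tank/testvol
  # reports a 512 byte sectorsize despite the 4k volblocksize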
> AFAIK, there is no way to specify the sector size to use in a ZFS pool: it
> is completely automatic when you call "zpool create". Ideally it should
> query the disk about its sector size and use that, but I don't know if
> that has been implemented (and can't be bothered to dig through the sourc
> Try the following from the zfs(1M) man page:
>
> zfs create [-ps] [-b blocksize] [-o property=value] ... -V size volume
> [...]
> -b blocksize
>
> Equivalent to -o volblocksize=blocksize. If this option is
> specified in conjunction with -o volblocksize, t
> You can use the method described here to create a zvol with 4k sector size:
> http://lists.freebsd.org/pipermail/freebsd-fs/2010-December/010350.html
I saw that, but it describes setting up a zpool, not a zvol - or are
you saying that a zvol created on such a zpool will have 4k sectors ?
Unfortu
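For anyone else reading: as far as I can tell the trick in that thread is the
gnop one, which sets the pool's ashift rather than anything on a zvol - roughly:

  gnop create -S 4096 /dev/ada0      # fake a 4k-sector provider
  zpool create tank /dev/ada0.nop
  zpool export tank
  gnop destroy /dev/ada0.nop
  zpool import tank                  # the pool keeps the 4k alignment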
> Do you have a spare partition? Probably use the swap partition temporarily.
> Install the 64 bits stuff into it. Boot from it and than install the 64
> bits stuff over the (now unused) 32 bits stuff and reboot into that. If
> something fails you can always go back to a bootable system.
> NB:
> I wasn't aware you could do that. I was only aware that it was the
> other way around. That (my) misconception seems to also be relayed
> by others such as Miroslav who said:
Should this not be the recommended way of doing things even for MBR
disks ? I have a lot of machines booting from gmirr
> The problem with mirroring partitions is that you thrash the disk
> during the rebuild after replacing a failed disk. And the more
> partitions you have, the worse it gets.
yes, this is true - actually I have had this on older
machines, and have had to stop the rebuilds of each bit until
the ot
> Yes it does? Am I the only one person on the whole earth seeing the big
> difference in easy setup of mirroring two drives instead of many
> individual partitions?
Sorry, I wasn't suggesting that you should always mirror
the individual partitions - it's just that I happen to do that where
I am mixing Z
Am posting this to stable not really as a question, but more in case anyone
else hits the same problem. Last patch tuesday one of my virtual Windows
machines running under VirtualBox started crashing. By which I mean
that VirtualBox would quit. This had been running stably for a long
time, so it p
> Can you please file a PR? Having broken AIO and ZFS would be .. bad.
OK: http://www.freebsd.org/cgi/query-pr.cgi?pr=168298
I thought it was most likely a bug in VirtualBox to be honest - broken
AIO and ZFS would be bad, but also highly noticeable in other
configurations, and this only seems to
> I have seen similar behaviour, but I did not disable AIO to solve it.
> Instead, in the VirtualBox VM, I made sure that the storage controller was
> created with the "--hostiocache on" option. Without that, the virtual
> machines were unreliable on ZFS with the same behaviour you saw.
Interes
Meant to reply to this at the time, but have been away...
> Has anyone else run into problems when using IPv6 + CARP ?
I ran into some - aliases on a CARP interface did not seem
to work properly - but if you work around that then it appears
to work fine. We are using it in production with no proble
> Thanks for the feedback Pete, what are you running ?
>
> We're on 8-STABLE here.
Yup, same here - actually running a very recent STABLE now,
but for most of this year it's been on one from January. The
one running on the firewalls is from May 7th, and that works
beautifully.
> I've got some sp
> I have noticed this issue (CARP + IPv4 aliases) with older (pre 9.x)
> versions of FreeBSD.
Ah, just to be clear, the only problems I had with aliases were with IPv6 - it
always worked properly with IPv4. But I didn't try on anything pre 8.1!
-pete.
> - Lack of proper support for a decent hypervisor for virtualisation.
>We can't make a hypervisor out of freebsd, if there are no such
>virtualisations available like XEN, kvm or something similar, that
>just works out of the box.
What do you need that VirtualBox doesn't provide ? I
> Yes, virtualbox is not that bad. However, to get some really nice
> features, you need the non-free version. Also, we can use citrix's
> xenserver's management tool to manage non-citrix xen clusters, because
> the API is same. With that we get a management tool for our clusters,
> which is really
So, my work surprise for a Thursday morning is an urgent requirement to
see if we can run a set of FreeBSD machines under virtualised servers.
I have not done this before personally, but I notice from posts here
that it doesn't seem uncommon, and I see Xen-related commits flowing
past, so I am guessi
> It helps if you tell people what you are looking for.
Ah, sorry, just dashed that off before I went into a meeting, here's
a bit more info.
> - realtime moving of guests between host-servers?
> - do you really need separate OS'es or would jails serve your purposes?
> - Are you only going to run
> So to recap, vfs.zfs.zio.use_uma doesn't show up in sysctl output.
Errr, it does for me
$ sysctl -a | grep vfs.zfs.zio.use_uma
vfs.zfs.zio.use_uma: 1
That's from 8.1-PRERELEASE on June 2nd.
...but the whole question of sysctls is a bit of a red herring to me ? I'm
more interested in whether we
I don't know when this changed, but diff under 8-STABLE has started
treating UTF-8 text files as if they were binary. diff under 7-STABLE
doesn't do this, and the files in question are valid UTF-8 (and I
have my locale set to en_GB.UTF-8).
Deliberate ? Or should I file a PR ?
-pete.
> I'm running a recent 8.1-Pre (Friday July 2nd), but I've seen this in previous
> ones too, make buildworld -j will sometimes fail, or even panic.
> When it fails it's usually some 'internal compiler error' or
> panic: page fault. The failures I've seen on different hardware, all running
> amd64 v
> Mandatory? I'm googling, but can't find a document that declares it
> mandatory and only sendmail seems to do it.
> I think it is lame to use DNS info to rewrite e-mail addresses, but the
> person who made it 'mandatory' will have good reasons for it.
Rewriting may not be mandatory, but it is
I've been testing out hast internally here, and as a replacement for the
gmirror+ggate stack which we are currently using it seems excellent. I am
actually about to take the plunge and deploy this in production for our
main database server as it seems to operate fine in testing, but before I
do has
> Please see the freebsd-fs mailing list, which has quite a large number
> of problem reports/issues being posted to it on a regular basis (and
> patches are often provided).
Thanks, have signed up - I was signed up to 'geom' but not that one.
A large number of problem reports is not quite what I w
> Ok. But how stable (production ready) the FreeBSD-8-STABLE is? What is your
> opinion?
I am running 8-STABLE from 27th September on all our production
machines (from webservers to database servers to the company mail
server) and it is fine. I am going to update again over the next
few days, as
> Being the author of many problem reports I can say that most of them were not
> critical and for marginal cases (like some issues with hooks or a race that
> showed up when changing HAST role in loop -- you would never do this in
> production). And fixes were committed in several days after a rep
Well, I bit the bullet and moved to using hast - all went beautifully,
and I migrated the pool with no downtime. The one thing I do notice,
however, is that the synchronisation with hast is much slower
than the older ggate+gmirror combination. It's about half the
speed in fact.
When I originally set
> What speed do you expect? IIRC from my tests, I was able to saturate
> 1Gbit link with initial synchronization. Also note, that hast
> synchronize only differences, and not the entire thing after crash or
> power failure.
I should probably have put some numbers in the original email,
sorry! I am
> If you are 50ms RTT from the remote system, the default buffer size will
> limit you to about 21 Mbps. Formula is Window-size (in bits) / RTT (in
> sec.) The result is the absolute maximum possible bandwidth in
> bits/sec. Of course, you can replace window size with the bytes/sec and
> the result
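To put some numbers on that: assuming a 128 KiB buffer (my assumption - it
happens to match the ~21 Mbps figure) and a 50 ms RTT:

  128 KiB = 131072 bytes = 1,048,576 bits
  1,048,576 bits / 0.05 s ~= 21 Mbit/s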
> You could change the values and recompile hastd :-). It would be interesting
> to know about the results of your experiment (if you do).
I changed the buffer sizes to the same as I was using for ggate, but the speed
is still the same - 44 meg/second (about half of what the link can do)
interesti
> You can check if the queue size is an issue monitoring with netstat Recv-Q and
> Send-Q for hastd connections during the test. Running something like below:
>
> while sleep 1; do netstat -na |grep '\.8457.*ESTAB'; done
Interesting - I ran those and started a complete resilver (I do
this by chan
Actually, I just looked in dmesg on the secondary - it is full
of messages thus:
Oct 26 15:44:59 serpentine-passive hastd[10394]: [serp0] (secondary) Unable to
receive request header: RPC version wrong.
Oct 26 15:45:00 serpentine-passive hastd[782]: [serp0] (secondary) Worker
process exited ung
> In hast_proto_send() we send header and then data. Couldn't it be that
> remote_send and sync threads interfere and their packets are mixed? May be
> some synchronization is needed here?
Interesting - I haven't looked very closely at the code, but I didn't
realise that more than one thread was i
Just to report back on this - I just tried the patches from last week,
which fixed the sending of the keepalives in the different
thread, but my original issue (the synchronisation speed) remains
I'm afraid - so much for the theory that the corruption was causing
the speed decrease. It's obviously g
Trying to re-install a set of HP servers over the iLO, and every time
I tried to install the 8.1 CD I ended up with an error message
telling me that the CD looked more like an audio CD than a FreeBSD
distribution.
I puzzled over it a bit, and then remembered that I originally installed
these machin
> Basically if it feels like you are the first one to report the problem, then,
> unfortunately, the onus is on you...
Sure, I've been around here long enough to know this - was just trying to see
if anyone else knew about it, as it takes so long to boot over the network
that I really didn't want
> Apparently you've been born with a silver spoon or something :-)
...more likely an ability to irritate people until it gets fixed ;-)
> Because few thousand other people do not seem to be as lucky:
> http://people.freebsd.org/~edwin/gnats/gnats-openpercategorycummulative.html
ah, not good gra
So, I just gave the 8.1 CD a try - which didn't take nearly as long as
anticipated. End result is that it also fails to install. PR
filed as http://www.freebsd.org/cgi/query-pr.cgi?pr=152874
Am wondering if this could be due to the change in the USB stack
somehow though - I have a feeling that iLO pr
> This problem has been reported many times in the past and almost
> certainly has nothing to do with 8.x. Here's a thread about the matter
> where a user states the same as you but about 7.1:
>
> http://unix.derkeiler.com/Mailing-Lists/FreeBSD/questions/2008-10/msg00307.html
I found that - it ap
> This problem may be a reverse problem of this PR:
>
> http://www.freebsd.org/cgi/query-pr.cgi?pr=138789
>
> In other words, your virtual CD-ROM reads correct TOC from mounted
> image but block size is wrong? Maybe old umass(4) corrects this case
> heuristically but the new USB stack doesn't?
Mi
> If there's more than one CD device you'll get a prompt asking which to
> use.
...which is precisely what happens under 7.X
So, now that I have it installed, when I am logged into the machine
I have these devices in /dev:
crw-r-  1 root  operator  0,  89  7 Dec 02:39 /dev/acd0
crw-r-
> My problem was that the ILO CDROM presented it's drive as starting
> from block one (or somesuch nonsense). I ended up doing a netinstall
> because the boot would work (BIOS understood what was going on) but
> FreeBSD did not.
Interesting ... I've never seen that, but it's always good to know t
> With more cloud infrastructure providers using KVM than ever before, the
> importance of having FreeBSD performant as a guest on these
> infrastructures [1], [2], [3] is increasing. It seems that using Virtio
> drivers give a pretty significant performance boost [4], [5].
>
> There was a NetBSD d
Actually, it does look like virtio is more than just for
networking...
http://vbox.innotek.de/pipermail/vbox-dev/2009-November/002053.html
I have a very odd problem here - two interfaces bundled using lagg
in 'failover' mode, so one interface is active and the other not being
used. If the carrier drops on the active one I expect it to
fail over, but it doesn't.
...until I type 'ifconfig bce0' to look at the status of the interface
whic
> The bce driver is not properly generating link state events.
OK, that explains why it doesn't fail over - but why does looking at it
with ifconfig make a difference ? Surely that should be 'read only' ?
-pete.
Pyun YongHyeon <[EMAIL PROTECTED]> wrote:
> Try attached patch and check whether bce(4) correctly reports link
> state changes.
>
> After seeing 'link state changed to UP' message, unplug the cable
> and see whether it reports link DOWN. The message should be printed
> in a second. Also try replugg
> Since I added IPv6 to my network, and started really using it, I'm seeing
> some strange things happening.
>
> For instance, I'm on machine 2a01:678:1:443::443, and I do :
>
> $ traceroute6 -n 2a01:678:100:2::
> traceroute6 to 2a01:678:100:2:: (2a01:678:100:2::) from
> 2a01:678:1:443::443, 64 hop
> However, IMO lacp doesn't solve that problem. lacp is used for link
> aggregation, not failover.
It does both - if one of the links becomes unavailable then it will
stop using it. We use this for failover and it works fine, the only
caveat being that your LACP device at the far end needs to look
> As far as I can tell, not especially well :-(. It doesn't seem to detect
> much short of layer 1 failure. In particular, shutting down the switch
> port will not trigger a failover.
Are you using bce devices as your physical interfaces ? Take a look at
the thread from last week about ifconfig
> Hum, 2a01:678:1:443::443 is a /64, and 2a01:678:100:2:: is on a /48, both
> have the "same" gateway, that is, the same box, which has :
> inet6 2a01:678:1:443:: prefixlen 64
> inet6 2a01:678:100:: prefixlen 48
O.K., that should work. My best advice here is to do what I did - whic
> The network is pretty simple,
>
> gateway :
> em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
> options=b
> inet6 fe80::207:e9ff:fe0e:dead%em0 prefixlen 64 scopeid 0x1
> inet6 2a01:678:1:443:: prefixlen 64
> inet6 2a01:678:100:: prefixlen 48
Hmmm, are machine numbers of all zeroes le
> These are the tuning settings I use:
>
> vm.kmem_size="1536M"
> vm.kmem_size_max="1536M"
> vfs.zfs.arc_min="16M"
> vfs.zfs.arc_max="64M"
>
> The entire copying process took almost 2 hours. Not once did I
> experience kmem exhaustion. I can *guarantee* that I would have crashed
> the box numero
> The ARC uses kmem. "Should not use that much more memory" is a matter
> of opinion; if an additional 64MB given to ARC causes kmem exhaustion
Well, yes :-) Though I would think if I was running a system with 1.5 gig
of kernel memory where an extra 64 meg was the difference between life
and deat
> 1 megabit = 10^6 = 1,000,000 bits which is equal to 125,000 bytes.
you are assuming eight bits per byte - but this is a serial line so
you should use ten bits per byte instead.
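i.e. 1,000,000 bits/s divided by ten bits per byte (8 data bits plus start and
stop bits) gives 100,000 bytes/s, not 125,000.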
-pete.
> Yeah, ZFS offers a lot, which can create confusion, unfortunately. Do we
> limit physical space with quota or only logical (before compression)?
> Should we take space consumed by snapshots into account or not? etc.
On a related note, is there any way to make du tell me how big files
are in actu
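For reference: by default du reports the blocks actually allocated (i.e. after
compression); a new enough du also has an -A flag for the apparent size - e.g.:

  du -sh somefile       # space actually consumed on disk
  du -Ash somefile      # apparent (logical) size, if your du has -A
  ls -lh somefile       # logical size as well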
> That was a rule of thumb in the heyday of async serial lines, which used
> a start and stop bit per byte.
>
> However, ethernet at 100Mbit is 4B5B coded at a 125MHz rate. So the raw
Errr, 4B5B *is* 10 bits per byte surely?
> Even in the later days of modems this rule applied less and less,
>
> Did you do the following before running csup on the supfile with
> the RELENG_7 tag?
>
> rm -fr /usr/src/*
> rm -fr /var/db/sup/src-all
This is the second time I have seen this mentioned, but on none
of the machines that I csup on do I have a "/var/db/sup" at all.
Is this a hangover from cvsup
> Sorry for not responding back in Jan. I have a hard time recommending
> the 29320/39320 cards because of the long history they have with
> incompatibilities with certain U320 drives. I don't think that the
Out of interest, what cards would you recommend ? I have just
started running 4 drives o
I am trying out BETA2 on a machine here, installing onto a compact
flash card, but when I boot I get the error above. This is slightly
puzzling as the CF card is in an adapter which plugs directly into
the motherboard.
If I move the card so it dangles off the end of a cable, the warning
goes away an
> Try setting the following in /boot/loader.conf:
>
> hw.ata.ata_dma_check_80pin="0"
Now that looked promising, but unfortunately it doesn't help. Even
with this set I still get the same message. If this is supposed to
disable the check (as it appears) then I am even more puzzled.
-pete.
> > > hw.ata.ata_dma_check_80pin="0"
>
> Unfortunately this useful tunable is unavailable for 6.x.
Ahh, but this is 7.1-BETA2 on amd64 - it should be available there, yes ?
The manual page says it should work, and I have found the point in
the source code where it is supposed to interpret it. Ther
> Creating a filesystem is something that can only be done by root. I'm
> not sure what gave you the impression non-root users can do this...?
He probably thought that because it's possible in the current Solaris
implementation of ZFS:
http://blogs.sun.com/marks/entry/zfs_delegated_administration
> Yes, that's what I want to say.
> In other words, the commands "zfs allow" and "zfs unallow"
> I think it is not "Support chflags(2)" which is described at the bottom
> of http://wiki.freebsd.org/ZFS
Sorry, my unclear use of English! I didn't mean the last item, I meant
that it was near th
I have a couple of boxes here which make daily snapshots
of their filesystems. One just makes a snapshot at 7am, called '7am'
which it does by deleting the previous day's and making a new one called
'7am'. The other has snapshots called 'today', 'yesterday', '2daysago'
etc, up to a week. It does thi
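The rotation itself is just a handful of zfs commands from cron - roughly
(the dataset name is only an example):

  # the simple box:
  zfs destroy tank/data@7am
  zfs snapshot tank/data@7am

  # the rotating box:
  zfs destroy tank/data@6daysago
  zfs rename tank/data@5daysago tank/data@6daysago
  zfs rename tank/data@4daysago tank/data@5daysago
  zfs rename tank/data@3daysago tank/data@4daysago
  zfs rename tank/data@2daysago tank/data@3daysago
  zfs rename tank/data@yesterday tank/data@2daysago
  zfs rename tank/data@today tank/data@yesterday
  zfs snapshot tank/data@today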
> Is there a way for me to reproduce that?
I am not sure how to reproduce it, as I am unclear as to what
causes it. I have two machines making regular snapshots, one of which
ends up in this state, and one which doesn't. The only difference is
that the one which goes wrong is actually trying to ac
> It's not a file system on-disk structure fault, as far as I understand,
> because a reboot fixes it. I believe it's how you access the snapshots.
I was about to say "a reboot doesn't fix it for me" - as I swear I tried
that before, but I have discovered you are right, as I just rebooted the
server and