On 05/30/2010 04:22 PM, Richard Elling wrote:
> If you want to decouple the txg commit completely, then you might consider
> using a buffer of some sort. I use mbuffer for pipes, but that may be tricky
> to use in an rsync environment.
> -- richard
>
I initially thought this was I/O but now I
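A minimal sketch of the buffering Richard describes, assuming a zfs
send/receive pipe rather than rsync (pool, snapshot, and host names are
hypothetical; the buffer sizes are only illustrative):

  # mbuffer on each end absorbs bursts so the pipe keeps flowing
  # while the receiving side commits its txg
  zfs send tank/data@today | mbuffer -s 128k -m 1G | \
      ssh backuphost "mbuffer -s 128k -m 1G | zfs receive backup/data"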
On 5/29/10 12:54 AM -0700 Matt Connolly wrote:
> I'm running snv_134 on 64-bit x86 motherboard, with 2 SATA drives. The
> zpool "rpool" uses whole disk of each drive.
Can't be: ZFS can't boot from a whole-disk pool on x86 (and maybe not on
SPARC either). You have a single Solaris partition with the root pool on
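A minimal sketch of the distinction, device names hypothetical:

  # bootable on x86: root pool on a slice (SMI label) inside a Solaris partition
  zpool create rpool c0t0d0s0
  # not bootable: whole-disk vdev, which gets an EFI label
  zpool create data c0t1d0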
On May 30, 2010, at 3:04 PM, Sandon Van Ness wrote:
> Basically for a few seconds at a time I can get very nice speeds through
> rsync (saturating a 1 gig link) which is around 112-113 megabytes/sec
> which is about as good as I can expect after overhead. The problem is
> that every 5 seconds when
On 05/30/2010 03:10 PM, Mattias Pantzare wrote:
> On Sun, May 30, 2010 at 23:37, Sandon Van Ness wrote:
>> I just wanted to make sure this is normal and is expected. I fully
>> expected that as the file-system filled up I would see more disk space
>> being used than with ot
On Sun, May 30, 2010 at 23:37, Sandon Van Ness wrote:
> I just wanted to make sure this is normal and is expected. I fully
> expected that as the file-system filled up I would see more disk space
> being used than with other file-systems due to its features but what I
> didn't expect was to lose o
Basically for a few seconds at a time I can get very nice speeds through
rsync (saturating a 1 gig link) which is around 112-113 megabytes/sec
which is about as good as I can expect after overhead. The problem is
that every 5 seconds when data is actually written to disks (physically
looking at the
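For what it's worth, those 5-second write bursts line up with how often a
txg is synced to disk. On builds of that era the commonly cited knob was
the zfs_txg_timeout kernel variable (via mdb, per the Evil Tuning Guide);
whether changing it helps a bursty rsync load is an assumption, and the
setting reverts at reboot:

  # read the current value, in decimal
  echo "zfs_txg_timeout/D" | mdb -k
  # set it to 1 second on the live kernel
  echo "zfs_txg_timeout/W 0t1" | mdb -kw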
On 05/30/2010 02:51 PM, Brandon High wrote:
> On Sun, May 30, 2010 at 2:37 PM, Sandon Van Ness wrote:
>
>> ZFS:
>> r...@opensolaris: 11:22 AM :/data# df -k /data
>>
> 'zfs list' is more accurate than df, since it will also show space
> used by snapshots. eg:
> bh...@basestar:~$ df -h /expo
On Sun, May 30, 2010 at 2:37 PM, Sandon Van Ness wrote:
> ZFS:
> r...@opensolaris: 11:22 AM :/data# df -k /data
'zfs list' is more accurate than df, since it will also show space
used by snapshots. eg:
bh...@basestar:~$ df -h /export/home/bhigh
Filesystem size used avail capacity
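A short sketch of the comparison, assuming a build that knows the
'zfs list -o space' shorthand (dataset name hypothetical):

  # df reports only the filesystem's own view of the space
  df -h /export/home/bhigh
  # zfs list can break usage out into snapshots, descendants, etc.
  zfs list -o space rpool/export/home/bhigh
  # or enumerate the snapshots directly
  zfs list -t snapshot -r rpool/export/home/bhigh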
I just wanted to make sure this is normal and is expected. I fully
expected that as the file-system filled up I would see more disk space
being used than with other file-systems due to its features but what I
didn't expect was to lose out on ~500-600GB to be missing from the total
volume size right
On Sun, May 30, 2010 at 11:46 AM, Roy Sigurd Karlsbakk wrote:
> Is there a way to report zpool/zfs stats in a fixed scale, like KiB or even
> bytes?
Some (but not all) commands use -p.
-p
Use exact (parseable) numeric output.
-B
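Combined with -H (no headers, tab-separated output), that gives
script-friendly byte counts; a small example, pool name hypothetical:

  # exact values in bytes, one tab-separated line per property
  zfs get -Hp -o name,property,value used,available rpool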
--
Brandon High : bh...@freaks.com
Hi all
Using zpool/zfs list -H gives me a good overview of things, and is easy to
parse, except that the allocation and data sizes are reported in 'human
readable' form. For scripting, this is somewhat suboptimal.
Is there a way to report zpool/zfs stats in a fixed scale, like KiB or even
byte
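A minimal scripting sketch along those lines, assuming this build's zfs
get supports -H and -p:

  #!/bin/sh
  # print every dataset with its exact "used" bytes, tab-separated
  for ds in $(zfs list -H -o name); do
      used=$(zfs get -Hp -o value used "$ds")
      printf '%s\t%s\n' "$ds" "$used"
  done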
On 05/27/10 05:16 PM, Dennis Clarke wrote:
> I just tried this with a UFS based filesystem just for a lark.
> It never failed on UFS, regardless of the contents of /etc/dfs/dfstab.
> Guess I must now try this with a ZFS fs under that iso file.
Just tried it again with b134 *with* "share /mnt" i
Reinstalling GRUB helped.
What is the purpose of dump slice?
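For the record: the dump device is where the kernel writes a crash dump at
panic time, for savecore to collect on the next boot. A sketch of both
pieces, device name hypothetical:

  # reinstall GRUB on the x86 boot disk
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
  # show the configured dump device (often a zvol such as rpool/dump)
  dumpadm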
On Sun, May 30, 2010 at 9:05 PM, me wrote:
> Thanks! It is exactly what I was looking for.
>
>
> On Sat, May 29, 2010 at 12:44 AM, Cindy Swearingen <cindy.swearin...@oracle.com> wrote:
>
>> 2. Attaching a larger disk to the root pool an
Thanks! It is exactly what I was looking for.
On Sat, May 29, 2010 at 12:44 AM, Cindy Swearingen <cindy.swearin...@oracle.com> wrote:
> 2. Attaching a larger disk to the root pool and then detaching
> the smaller disk
>
> I like #2 best. See this section in the ZFS troubleshooting wiki:
>
> http://
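A sketch of option #2 with hypothetical device names; on x86 the new disk
also needs boot blocks before the old one is detached:

  zpool attach rpool c0t0d0s0 c0t1d0s0   # mirror onto the larger disk
  zpool status rpool                     # wait for the resilver to finish
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
  zpool detach rpool c0t0d0s0            # drop the smaller disk
  zpool set autoexpand=on rpool          # grow into the extra space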
I had an empty directory /export/home created in the root file system. It
was preventing the mount. Just deleted it and all is OK.
On Sun, May 30, 2010 at 5:40 PM, me wrote:
> I was trying to expand the space of rpool. I didn't get it done, but after removing
> one (not in use) disk from the VM configuration, the system doesn't start
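A sketch of that failure mode and the fix (paths as in the thread):

  # the mount fails because the mountpoint is not empty: a stray
  # /export/home directory sat underneath the unmounted /export
  zfs mount -a    # complains "directory is not empty"
  rmdir /export/home
  zfs mount -a    # now everything mounts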
I was trying to expand the space of rpool. I didn't get it done, but after removing
one (not in use) disk from the VM configuration, the system doesn't start (no X).
After a shell login I found out that home is not mounted:
zfs mount
rpool/ROOT/opensolaris /
Home can be mounted manually without any problem. What is wrong?
--
Dm
On Fri, May 28, 2010 at 10:05 AM, Gregory J. Benscoter wrote:
> I’m primarily concerned with the possibility of a bit flip. If this
> occurs will the stream be lost? Or will the file that the bit flip occurred
> in be the only degraded file? Lastly, how does the reliability of this plan
> compa
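On the bit-flip question: a corrupted send stream generally fails its
checksum at receive time and the whole receive aborts, so it is worth
keeping a digest next to any stream stored on disk or tape. A minimal
sketch, paths hypothetical:

  # capture the stream plus a checksum alongside it
  zfs send tank/fs@backup > /backup/fs.zsend
  digest -a sha256 /backup/fs.zsend > /backup/fs.zsend.sha256
  # later, verify before restoring
  digest -a sha256 /backup/fs.zsend | diff - /backup/fs.zsend.sha256 \
      && zfs receive tank/fs_restored < /backup/fs.zsend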