This is what I've done, but am still a bit stuck, as it doesn't quite work!
I scan the zpool list for the drive (I created backup1/data and backup2/data on
the two USB drives)
/usr/sbin/zpool import backup2
/usr/sbin/zfs snapshot -r rp...@20090715033358
/usr/sbin/zfs destroy rpool/s...@200907150
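Timestamped snapshot names in that style can be generated with date(1); a minimal sketch, assuming the pool is named rpool (the zfs call itself is shown commented out since it needs a live pool):

```shell
# Build a snapshot name stamped YYYYmmddHHMMSS, matching the
# 20090715033358 style above (pool name "rpool" is an example).
SNAP="rpool@$(date +%Y%m%d%H%M%S)"
echo "$SNAP"
# /usr/sbin/zfs snapshot -r "$SNAP"   # run this on a system with the pool
```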
Hello list,
Before we started changing to ZFS bootfs, we used DiskSuite mirrored ufs
boot.
Very often, if we needed to grow a cluster by another machine or two, we
would simply clone a running live server. Generally the procedure for
this would be:
1 detach the "2nd" HDD, metaclear, and d
I think a picture is emerging that if you have enough RAM, the
ARC is working very well. Which means that the ARC management
is suspect.
I propose the hypothesis that ARC misses are not prefetched. The
first time through, prefetching works. For the second pass, ARC
misses are not prefetched, so
James Lever wrote:
On 15/07/2009, at 7:18 AM, Orvar Korvar wrote:
With dedup, will it be possible somehow to identify files that are
identical but have different names? Then I can find and remove all
duplicates. I know that with dedup, removal is not really needed
because the duplicate will j
This system has 32 GB of RAM so I will probably need to increase the
data set size.
[r...@x tmp]#> ./zfs-cache-test.ksh nbupool
System Configuration: Sun Microsystems sun4v SPARC Enterprise T5220
System architecture: sparc
System release level: 5.10 Generic_141414-02
CPU ISA list: sparcv9+
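For reference, a back-of-the-envelope figure for how many files it takes for the data set to outgrow 32 GB of RAM, assuming the 8192000-byte per-file size the test script uses elsewhere in this thread:

```shell
# Files needed before the data set exceeds 32 GiB of RAM
ram_bytes=$((32 * 1024 * 1024 * 1024))
file_bytes=8192000                   # per-file size used by the test script
echo $((ram_bytes / file_bytes))     # ~4194, so roughly 4200 files or more
```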
On 15/07/2009, at 1:51 PM, Jean Dion wrote:
Do we know if this web article will be discussed at the conference in
Brisbane, Australia this week?
http://www.pcworld.com/article/168428/sun_tussles_with_deduplication_startup.html?tk=rss_news
I do not expect details but at least Sun's position on this
Bob Friesenhahn wrote:
On Wed, 15 Jul 2009, Scott Lawson wrote:
NAME     STATE   READ WRITE CKSUM
test1    ONLINE     0     0     0
  mirror ONLINE     0     0     0
Do we know if this web article will be discussed at the conference in
Brisbane, Australia this week?
http://www.pcworld.com/article/168428/sun_tussles_with_deduplication_startup.html?tk=rss_news
I do not expect details but at least Sun's position on this instead of leaving
people to rumors like publis
On Wed, 15 Jul 2009, Jorgen Lundman wrote:
You have some mighty pools there. Something I find quite interesting is
that those who have "mighty pools" generally obtain about the same data
rate regardless of their relative degree of excessive "might". This causes
me to believe that the Solaris
On Tue, 14 Jul 2009, Ross wrote:
Hi Bob,
My guess is something like it's single threaded, with each file dealt with in
order and requests being serviced by just one or two disks at a time. With
that being the case, an x4500 is essentially just running off 7200 rpm SATA
drives, which really
On Wed, 15 Jul 2009, Scott Lawson wrote:
NAME STATE READ WRITE CKSUM
test1 ONLINE 0 0 0
mirror ONLINE 0 0 0
c3t600A0B80005622
You have some mighty pools there. Something I find quite interesting is
that those who have "mighty pools" generally obtain about the same data
rate regardless of their relative degree of excessive "might". This
causes me to believe that the Solaris kernel is throttling the read rate
so th
On 15/07/2009, at 7:18 AM, Orvar Korvar wrote:
With dedup, will it be possible somehow to identify files that are
identical but have different names? Then I can find and remove all
duplicates. I know that with dedup, removal is not really needed
because the duplicate will just be a referenc
On Wed, 15 Jul 2009, Jorgen Lundman wrote:
Doing initial (unmount/mount) 'cpio -C 131072 -o > /dev/null'
48000256 blocks
real    3m1.58s
user    0m1.92s
sys     0m56.67s
Doing initial (unmount/mount) 'cpio -C 131072 -o > /dev/null'
48000256 blocks
real    3m5.51s
user    0m1.70s
sys     0m29.
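As a rough cross-check, the first run above works out to about 135 MB/s; cpio counts 512-byte blocks, and 3m1.58s is roughly 181 seconds (treat this as an approximation):

```shell
# Approximate read throughput for the first cpio pass
blocks=48000256                           # 512-byte blocks reported by cpio
secs=181                                  # real time, 3m1.58s rounded down
echo $((blocks * 512 / secs / 1000000))   # ~135 MB/s
```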
On 14-Jul-09, at 5:18 PM, Orvar Korvar wrote:
With dedup, will it be possible somehow to identify files that are
identical but have different names? Then I can find and remove all
duplicates. I know that with dedup, removal is not really needed
because the duplicate will just be a reference
I added a second Lun identical in size as a mirror and reran test.
Results are more in line with yours now.
./zfs-cache-test.ksh test1
System Configuration: Sun Microsystems sun4u Sun SPARC Enterprise
M3000 Server
System architecture: sparc
System release level: 5.10 Generic_139555-08
CPU IS
3 servers contained within.
Both x4500 and x4540 are setup the way Sun shipped to us. With minor
changes (nfsservers=1024 etc). I was a little disappointed that they
were identical in speed in round one, but the x4540 looked better in part
2, which I suspect is probably just the OS version?
x45
This may not be the only way, but it is what I have used:
# zpool import rpool newpool
# zpool destroy newpool
Regards
Rodney
Joseph L. Casale wrote:
I grabbed a spare disc to make a root mirror with and it happened
to have an old rpool from another installation on it. Anyway, it
also seemed to have an EFI label, s
Hi!
Do you think these issues will be seen on ZVOLs that are exported as
iSCSI targets?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hi!
I'm trying to replicate two thumpers with AVS, and I was wondering, is it
possible to put the bitmaps on ZVOLs on the rpool mirror? It seems to me a much
simpler solution than partitioning every disk with slice 0 as data and slice 1
as bitmap?
On Tue, 14 Jul 2009, Richard Elling wrote:
That is because file prefetch is dynamic. benr wrote a good blog on the
subject and includes a DTrace script to monitor DMU prefetches.
http://www.cuddletech.com/blog/pivot/entry.php?id=1040
Apparently not dynamic enough. The provided DTrace script
With dedup, will it be possible somehow to identify files that are identical
but have different names? Then I can find and remove all duplicates. I know that
with dedup, removal is not really needed because the duplicate will just be a
reference to an existing file. But nevertheless I want to kee
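Until ZFS dedup lands, content-identical files with different names can already be found in userland by grouping files on checksum and size; a rough sketch (this is not the proposed ZFS feature, and it assumes paths without embedded spaces):

```shell
# Hypothetical sketch: list files whose content is identical, regardless
# of name, by grouping on (checksum, size). Directory is an example.
# Caveat: the awk field split breaks on paths containing spaces.
DIR="${DIR:-.}"
find "$DIR" -type f -exec cksum {} + |
  sort -n |
  awk '{ key = $1 FS $2
         if (key == prev) { print last; print $3 }
         prev = key; last = $3 }' |
  sort -u
```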
Bob Friesenhahn wrote:
On Tue, 14 Jul 2009, Ross wrote:
My guess is something like it's single threaded, with each file dealt
with in order and requests being serviced by just one or two disks at
a time. With that being the case, an x4500 is essentially just
running off 7200 rpm SATA drives,
I grabbed a spare disc to make a root mirror with and it happened
to have an old rpool from another installation on it. Anyway, it
also seemed to have an EFI label, so I removed the partition, re-labeled
it to SMI and created a Solaris2 partition etc and made my mirror.
While importing a pool fr
Thanks Mark. I ran the script and found references in the output to 'aclmode'
and 'aclinherit'. I had in the back of my mind that I've had to mess on with
ZFS ACLs in the past, aside from using chmod with the usual numeric values.
That's given me something to go on. I'll post to cifs-discuss if
On Tue Jul 14, 2009 at 11:09:32AM -0500, Bob Friesenhahn wrote:
> On Tue, 14 Jul 2009, Jorgen Lundman wrote:
>
>> I have no idea. I downloaded the script from Bob without modifications and
>> ran it specifying only the name of our pool. Should I have changed
>> something to run the test?
>
> If y
On 14 Jul 2009, at 18:09, Bob Friesenhahn wrote:
On Tue, 14 Jul 2009, Jorgen Lundman wrote:
I have no idea. I downloaded the script from Bob without
modifications and ran it specifying only the name of our pool.
Should I have changed something to run the test?
If your system has quite a
On Tue, 14 Jul 2009, Ross wrote:
My guess is something like it's single threaded, with each file
dealt with in order and requests being serviced by just one or two
disks at a time. With that being the case, an x4500 is essentially
just running off 7200 rpm SATA drives, which really is nothing
Just FYI. I ran a slightly different version of the test. I used SSD
(for log & cache)! 3 x 32GB SSDs. 2 mirrored for log and one for
cache. The system is a 4150 with 12 GB of RAM. Here are the results:
$ pfexec ./zfs-cache-test.ksh sdpool
System Configuration:
System architecture: i386
Syste
Chris Murray wrote:
Hello,
Hopefully a quick and easy permissions problem here, but I'm stumped and
quickly reached the end of my Unix knowledge.
I have a ZFS filesystem called "fs/itunes" on pool "zp". In it, the "iTunes
music" folder contained a load of other folders - one for each artist.
The plot thickens ... I had a brainwave and tried accessing a 'missing' folder
with the following on Windows:
explorer "\\mammoth\itunes\iTunes music\Dubfire"
I can open files within it and can rename them too. So .. still looks like a
permissions problem to me, but in what way, I'm not quite s
Hi Bob,
My guess is something like it's single threaded, with each file dealt with in
order and requests being serviced by just one or two disks at a time. With
that being the case, an x4500 is essentially just running off 7200 rpm SATA
drives, which really is nothing special.
A quick summary
Hello,
Hopefully a quick and easy permissions problem here, but I'm stumped and
quickly reached the end of my Unix knowledge.
I have a ZFS filesystem called "fs/itunes" on pool "zp". In it, the "iTunes
music" folder contained a load of other folders - one for each artist.
During a resilver ope
On Tue, Jul 14, 2009 at 11:09:32AM -0500, Bob Friesenhahn wrote:
> On Tue, 14 Jul 2009, Jorgen Lundman wrote:
>
>> I have no idea. I downloaded the script from Bob without modifications
>> and ran it specifying only the name of our pool. Should I have changed
>> something to run the test?
>
> If
On Tue, 14 Jul 2009, Jorgen Lundman wrote:
I have no idea. I downloaded the script from Bob without modifications and
ran it specifying only the name of our pool. Should I have changed something
to run the test?
If your system has quite a lot of memory, the number of files should
be increase
Ross,
Please refresh your test script from the source. The current script
tells cpio to use 128k blocks and mentions the proper command in its
progress message. I have now updated it to display useful information
about the system being tested, and to dump the pool configuration.
It is real
Rather bizarrely, after that second failure I pulled the disk, cleared the
pool, re-inserted it and forced it online. This time, ZFS resilvered fine with
zero errors:
# zpool status
pool: rc-pool
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
For what it's worth, I just repeated that test. The timings are suspiciously
similar. This is very definitely a reproducible bug:
zfs unmount rc-pool/zfscachetest
zfs mount rc-pool/zfscachetest
Doing initial (unmount/mount) 'cpio -o > /dev/null'
48000247 blocks
real    4m45.69s
user    0m10.2
I also ran this on my future RAID/NAS. Intel Atom 330 (D945GCLF2) dual
core 1.6 GHz, on a single HDD pool. snv_114, 64-bit, 2 GB RAM.
bash-3.2$ ./zfs-cache-test.ksh zboot
zfs create zboot/zfscachetest
creating data file set (3000 files of 8192000 bytes) under
/zboot/zfscachetest ...
done!
zfs
I have just tried again confirming that I used attach and not add. It still
gives the same message ("new device must be a single disk") even though it is a
single disk.
I've tried reformatting it and wiping it a few times now too.
Hi, this drive doesn't have U3. Just to be sure I even found a windows
computer and tried it out, but nothing popped up. I also tried a U3 removal
utility but it didn't detect the drive as U3.
On Tue, Jul 14, 2009 at 08:54:36AM +0200, Ross wrote:
> Ok, build 117 does seem a lot better. The second run is slower,
> but not by such a huge margin.
Hm, I can't confirm this:
SunOS fred 5.11 snv_117 sun4u sparc SUNW,Sun-Fire-V440
The system has 16GB of Ram, pool is mirrored over two FUJITSU-M
Ah yes, my apologies! I haven't quite worked out why the OS X VNC server
can't handle keyboard mappings. I even have to copy/paste "@". As I
pasted the output into my mail over VNC, it would have destroyed the
(not very) "unusual" characters.
Ross wrote:
Aaah, nevermind, it looks like there's j
Aaah, nevermind, it looks like there's just a rogue 9 appeared in your output.
It was just a standard run of 3,000 files.
I have no idea. I downloaded the script from Bob without modifications
and ran it specifying only the name of our pool. Should I have changed
something to run the test?
We have two kinds of x4500/x4540: those with Sol 10 10/08, and 2 running
snv_117 for ZFS quotas. Worth trying on both?
Lun
Jorgen,
Am I right in thinking the numbers here don't quite work? 48M blocks is just
9,000 files, isn't it, not 93,000?
I'm asking because I had to repeat a test earlier - I edited the script with
vi, but when I ran it, it was still using the old parameters. I ignored it as
a one-off, but I'm