On Mon, Sep 29, 2008 at 9:28 PM, Richard Elling <[EMAIL PROTECTED]> wrote:
> Ahmed Kamal wrote:
>> Hi everyone,
>>
>> We're a small Linux shop (20 users). I am currently using a Linux
>> server to host our 2TBs of data. I am considering better options for
>> our data storage needs. I mostly need in
On Mon, Sep 29, 2008 at 09:28:53PM -0700, Richard Elling wrote:
>EMC does not, and cannot, provide end-to-end data validation. So how
>would you measure its data reliability? If you search the ZFS-discuss
>archives, you will find instances where people using high-end storage also
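For reference, the end-to-end validation ZFS provides can be exercised with a
scrub, which re-reads every block in the pool and verifies its checksum; the
pool name below is only an example:

    # walk the whole pool, verify checksums, then report any errors found
    zpool scrub tank
    zpool status -v tank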
Ahmed Kamal wrote:
> Hi everyone,
>
> We're a small Linux shop (20 users). I am currently using a Linux
> server to host our 2TBs of data. I am considering better options for
> our data storage needs. I mostly need instant snapshots and better
> data protection. I have been considering EMC NS20
hi,
i have an X4500 running Solaris 10 Update 5 (with all current patches). it has
a stripe-mirror ZFS pool over 44 disks with 2 hot spares. the system is
entirely idle, except that every 60 seconds, a 'zfs recv' is run. a couple of
days ago, while
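For context, a pool of that shape (striped two-way mirrors plus hot spares)
and the receive side of such a replication loop might look roughly like the
sketch below; pool, dataset, and device names are invented and only a few of
the 44 disks are shown:

    # striped 2-way mirrors with hot spares (illustrative layout only)
    zpool create tank \
        mirror c0t0d0 c1t0d0 \
        mirror c0t1d0 c1t1d0 \
        spare c5t6d0 c5t7d0

    # what runs every 60 seconds: receive an incremental stream from the primary
    ssh primary zfs send -i tank/fs@prev tank/fs@now | zfs recv -F tank/fs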
Hi everyone,
We're a small Linux shop (20 users). I am currently using a Linux server to
host our 2TBs of data. I am considering better options for our data storage
needs. I mostly need instant snapshots and better data protection. I have
been considering EMC NS20 filers and ZFS-based solutions. F
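For what it's worth, the "instant snapshots" requirement is a one-liner in
ZFS; the pool and filesystem names below are only examples:

    # create, list, and roll back to a snapshot
    zfs snapshot tank/home@before-change
    zfs list -t snapshot
    zfs rollback tank/home@before-change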
Do you have dedicated iSCSI ports from your server to your NetApp?
iSCSI requires a dedicated network, not a shared network or even a VLAN. Backups
cause large I/O that fills your network quickly, like any SAN today.
Backups are extremely demanding on hardware (CPU, memory, I/O ports, disks, etc.).
Miles Nordin wrote:
>> "jcm" == James C McPherson <[EMAIL PROTECTED]> writes:
>
>jcm> Can I assume that my "2008-07-26 post" was in fact two
>jcm> messages that were sent to you and cc'd to zfs-discuss:
>jcm>
> http://mail.opensolaris.org/pipermail/zfs-discuss/2008-July/049605.html
Ross,
No need to apologize...
Many of us work hard to make sure good ZFS information is available so a
big thanks for bringing this wiki page to our attention.
Playing with UFS on ZFS is one thing, but even inexperienced admins need
to know that this kind of configuration will provide poor performance
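For readers who land on that wiki page, the well-performing alternative is
simply a native ZFS filesystem rather than UFS layered on a zvol; the pool and
dataset names below are examples:

    # a plain ZFS dataset: no newfs, no vfstab entry, snapshots for free
    zfs create -o mountpoint=/export/data tank/data
    zfs set compression=on tank/data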
Oh, OK. So /dev/rdsk is never going to work then. Mind if I pick your brain a
little more, then, while I try to understand this properly?
The man pages for the nvram card state that /dev/rdsk will normally be the
preferred way to access these devices, since /dev/dsk is cached by the kernel,
whi
On Tue, 30 Sep 2008, Ian Collins wrote:
Mark J Musante wrote:
On Sat, 27 Sep 2008, Marcin Woźniak wrote:
After a successful upgrade from snv_95 to snv_98 (UFS boot -> ZFS
boot) and luactivating the new ZFS BE, I am not able to ludelete the old
UFS BE. The problem, I think, is that zfs boot is /rpo
Mark J Musante wrote:
> On Sat, 27 Sep 2008, Marcin Woźniak wrote:
>
>> After a successful upgrade from snv_95 to snv_98 (UFS boot -> ZFS
>> boot) and luactivating the new ZFS BE, I am not able to ludelete the old
>> UFS BE. The problem, I think, is that zfs boot is /rpool/boot/grub.
>
> This is due to
/dev/rdsk/* devices are character-based (raw) devices, not block-based. In general,
character-based devices have to be accessed serially (and don't do buffering),
whereas block devices buffer and allow random access to the data. If you
use:
ls -lL /dev/*dsk/c3d1p0
you should see that the /dev/ds
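The quickest tell is the first character of the mode bits reported by ls; the
output below is only indicative (major/minor numbers and dates are made up):

    # block device node: leading 'b', buffered, random access
    $ ls -lL /dev/dsk/c3d1p0
    brw-r-----   1 root  sys  102, 16 Sep 30 10:12 /dev/dsk/c3d1p0

    # character (raw) device node: leading 'c', unbuffered, serial access
    $ ls -lL /dev/rdsk/c3d1p0
    crw-r-----   1 root  sys  102, 16 Sep 30 10:12 /dev/rdsk/c3d1p0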
> "jcm" == James C McPherson <[EMAIL PROTECTED]> writes:
jcm> Can I assume that my "2008-07-26 post" was in fact two
jcm> messages that were sent to you and cc'd to zfs-discuss:
jcm> http://mail.opensolaris.org/pipermail/zfs-discuss/2008-July/049605.html
jcm> and
jcm> http://mai
I have to come back and face the shame; this was a total newbie mistake on my
part.
I followed the ZFS shortcuts for noobs guide off bigadmin;
http://wikis.sun.com/display/BigAdmin/ZFS+Shortcuts+for+Noobs
What that had me doing was creating a UFS filesystem on top of a ZFS volume, so
I was usi
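For anyone curious what that shortcut actually set up, it amounts to the first
two commands below (a UFS filesystem inside a ZFS volume); the last two show
the straightforward native alternative. Names and sizes are hypothetical:

    # what the guide produced: UFS layered on a zvol
    zfs create -V 100g tank/vol1
    newfs /dev/zvol/rdsk/tank/vol1

    # the simpler, better-performing setup: a native ZFS filesystem
    zfs destroy tank/vol1
    zfs create tank/data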
Volker A. Brandt wrote:
>>> So they only work on and off. I never bothered to find out what the
>>> problem was (in fact, I hadn't even tried the ramdiskadm cmd in that
>>> version of Solaris before this email thread showed up).
>>>
>> AIUI, the memory assigned to a ramdisk must be contiguous.
>>
> > So they only work on and off. I never bothered to find out what the
> > problem was (in fact, I hadn't even tried the ramdiskadm cmd in that
> > version of Solaris before this email thread showed up).
> >
>
> AIUI, the memory assigned to a ramdisk must be contiguous.
> This makes some sense in
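For reference, the command under discussion is ramdiskadm(1M); a minimal
example follows (the name and size are arbitrary), and an allocation like this
is where the contiguous-memory requirement can bite:

    # create a 512 MB ramdisk, list existing ramdisks, then remove it
    ramdiskadm -a rd1 512m
    ramdiskadm
    ramdiskadm -d rd1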
Hey folks,
Can anybody help me out with this? I've finally gotten my hands on a Micro
Memory nvram card, but I'm struggling to get it working with ZFS. The drivers
appeared to install fine, and it works with ZFS if I use the /dev/dsk device,
but whenever I try to use rdsk I get the error:
#
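For context, the usual role for a card like this is a separate intent-log
(slog) device; a hedged sketch using the /dev/dsk path that reportedly works,
with the pool and device names as placeholders:

    # add the nvram card as a dedicated log device
    zpool add tank log c3d1p0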
Volker A. Brandt wrote:
>>> [most people don't seem to know Solaris has ramdisk devices]
>>>
>> That is because only a select few are able to unravel the enigma wrapped in
>> a clue that is solaris :)
>>
>
> Hmmm... very enigmatic, your remark. :-)
>
>
> However, in this case I suspect
On Sat, 27 Sep 2008, Marcin Woźniak wrote:
After a successful upgrade from snv_95 to snv_98 (UFS boot -> ZFS boot)
and luactivating the new ZFS BE, I am not able to ludelete the old UFS BE.
The problem, I think, is that zfs boot is /rpool/boot/grub.
This is due to a bug in the /usr/lib/lu/lulib sc
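Until the lulib issue is addressed, it's worth confirming how Live Upgrade
sees both environments before retrying; the BE name below is only an example:

    # list boot environments and their completion/activation state
    lustatus

    # then retry removing the old UFS boot environment
    ludelete ufsBE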
> Note this from vmstat(1M):
>
> Without options, vmstat displays a one-line summary of the
> virtual memory activity since the system was booted.
Oops, you're correct. I was only trying to demonstrate that there
was ample free memory and ramdiskadm just didn't work. Usually I do
tha
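For the record, interval sampling avoids that since-boot summary; for example:

    # five samples at five-second intervals; the first line is still
    # the since-boot summary and can be ignored
    vmstat 5 5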
On Mon, Sep 29, 2008 at 2:12 AM, Volker A. Brandt <[EMAIL PROTECTED]> wrote:
>  kthr      memory              page              disk        faults     cpu
>  r b w   swap    free   re mf pi po fr de sr  lf lf lf s0   in  sy  cs us sy id
>  0 0 0 33849968 2223440  2 14  1  0  0  0  0   0 21  0 21  813   1
Richard,
thanks a lot for that answer. It can be argued back and forth which is right, but
it helps to know the reason behind the problem. Again, thanks a lot...
//Mike
Hi,
it was actually shared both as a dataset and an NFS share.
we had zonedata/prodlogs set up as a dataset and then
we had zonedata/tmp mounted as a NFS filesystem within the zone.
//Mike
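For anyone reproducing this, the two arrangements differ like so; the zone and
NFS server names are hypothetical, the dataset names are from the message:

    # delegate one dataset directly to the zone
    zonecfg -z prodzone "add dataset; set name=zonedata/prodlogs; end"

    # the other filesystem was NFS-mounted from inside the zone
    mount -F nfs nfsserver:/zonedata/tmp /zonedata/tmp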
Adam Leventhal wrote:
> For a root device it doesn't matter that much. You're not going to be
> writing to the device at a high data rate, so write/erase cycles don't
> factor much (SLC can sustain roughly a factor of 10 more than MLC). With MLC
> you'll get 2-4x the capacity for the same price, but agai
Miles Nordin wrote:
> Ralf, aren't you missing this obstinence-error:
>
> sc> the following errors must be manually repaired:
> sc> /dev/dsk/c0t2d0s0 is part of active ZFS pool export_content.
>
> and he used the -f flag.
No, I saw it. My understanding has been that the drive was unavai
I had exactly the same problem and have not been able to find a resolution yet.
Marcin Woźniak wrote:
After a successful upgrade from snv_95 to snv_98 (UFS boot -> ZFS boot)
and luactivating the new ZFS BE, I am not able to ludelete the old UFS BE.
The problem, I think, is that zfs boot is /rpool/boo
> > [most people don't seem to know Solaris has ramdisk devices]
>
> That is because only a select few are able to unravel the enigma wrapped in a
> clue that is solaris :)
Hmmm... very enigmatic, your remark. :-)
However, in this case I suspect it is because ramdisks don't really
work well on