On 6/8/2010 6:33 PM, Bob Friesenhahn wrote:
On Tue, 8 Jun 2010, Miles Nordin wrote:
"re" == Richard Elling writes:
re> Please don't confuse Ethernet with IP.
okay, but I'm not. seriously, if you'll look into it.
Did you misread where I said FC can exert back-pressure? I was
contrasting with Ethernet.
On Jun 8, 2010, at 20:17, Moazam Raja wrote:
One of the major concerns I have is what happens when the primary
storage server fails. Will the secondary take over automatically
(using some sort of heartbeat mechanism)? Once the secondary node
takes over, can it fail-back to the primary node
On Tue, 8 Jun 2010, Miles Nordin wrote:
"re" == Richard Elling writes:
re> Please don't confuse Ethernet with IP.
okay, but I'm not. seriously, if you'll look into it.
Did you misread where I said FC can exert back-pressure? I was
contrasting with Ethernet.
You're really confused, though
On 6-Jun-10, at 7:11 AM, Thomas Maier-Komor wrote:
On 06.06.2010 08:06, devsk wrote:
I had an unclean shutdown because of a hang and suddenly my pool is
degraded (I realized something is wrong when python dumped core a
couple of times).
This is before I ran scrub:
pool: mypool
state: DEGRADED
In my case, snapshot creation time and atime don't matter. I think rsync can
preserve mtime and ctime, though. I'll have to double check that.
I'd love to enable dedup. Trying to stay on "stable" releases of OpenSolaris
for whatever that's worth, and I can't seem to find a link to download 20
On Tue, Jun 8, 2010 at 4:29 PM, BJ Quinn wrote:
> Ugh, yeah, I've learned by now that you always want at least that one
> snapshot in common to keep the continuity in the dataset. Wouldn't I be able
> to recreate effectively the same thing by rsync'ing over each snapshot one by
> one? It may
Hi all, I'm trying to accomplish server to server storage replication
in synchronous mode where each server is a Solaris/OpenSolaris machine
with its own local storage.
For Linux, I've been able to achieve what I want with DRBD but I'm
hoping I can find a similar solution on Solaris so that I can
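A rough sketch of the Solaris-native route: Availability Suite (AVS/SNDR) can
replicate a block device synchronously, much like DRBD. The host names
nodeA/nodeB and the data/bitmap slices below are placeholders, not anything
from this thread:
# Enable a synchronous remote-mirror set: primary host, data device, bitmap
# device, then the same three for the secondary, followed by transport and mode
sndradm -e nodeA /dev/rdsk/c1t0d0s0 /dev/rdsk/c1t0d0s1 \
        nodeB /dev/rdsk/c1t0d0s0 /dev/rdsk/c1t0d0s1 ip sync
# Show the state of the configured replication sets
sndradm -P
ZFS would then sit on top of the replicated device on the primary, with the
secondary importing the pool only after a failover.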
Hi Brandon,
Thanks for providing update on this.
We at KQInfotech initially started an independent port of ZFS to Linux.
When we posted our progress on the port last year, we learned about the
work on the LLNL port. Since then we have been working to re-base our
changes on top of Brian's
Ugh, yeah, I've learned by now that you always want at least that one snapshot
in common to keep the continuity in the dataset. Wouldn't I be able to
recreate effectively the same thing by rsync'ing over each snapshot one by one?
It may take a while, and I'd have to use the --inplace and --no-
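A sketch of that loop, assuming the old copy of the data is mounted and its
.zfs/snapshot directory is visible; the pool and dataset names are examples
only:
#!/bin/sh
SRC=/oldpool/data/.zfs/snapshot   # snapshot directory of the old copy
DST=newpool/data                  # dataset on the new server
# Walk the snapshots in name order, sync each one into the new dataset,
# then snapshot the new dataset under the same name
for snap in `ls $SRC | sort`; do
    # --inplace updates changed blocks in place instead of writing whole
    # temporary files, so each new snapshot stores only the deltas
    rsync -a --inplace --no-whole-file "$SRC/$snap/" "/$DST/"
    zfs snapshot "$DST@$snap"
done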
Not exactly sure how to do what you're recommending -- are you suggesting I go
ahead with using rsync to bring in each snapshot, but to bring it into a
clone of the old set of snapshots? Is there another way to bring my recent
stuff in to the clone?
If so, then as for the storage savings, I
I have seen this too
I'm guessing you have SATA disks which are on an iSCSI target.
I'm also guessing you have used something like
iscsitadm create target --type raw -b /dev/dsk/c4t0d0 c4t0d0
i.e. you are not using the zfs shareiscsi property on a zfs volume but creating
the target from the device
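For contrast, the zvol route mentioned here looks roughly like this on releases
that still have the legacy shareiscsi property (the size and names are examples
only):
# Create a ZFS volume and let ZFS manage an iSCSI target for it
zfs create -V 100G tank/iscsivol
zfs set shareiscsi=on tank/iscsivol
# The auto-created target should now appear here
iscsitadm list target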
On Tue, Jun 8, 2010 at 12:52 PM, BJ Quinn wrote:
> Is there any way to merge them back together? I really need the history data
> going back as far as possible, and I'd like to be able to access it from the
> same place. I mean, worst case scenario, I could rsync the contents of each
> snaps
Brandon High wrote:
On Tue, Jun 8, 2010 at 11:27 AM, Joe Auty
wrote:
things. I've also read this on a VMWare forum,
although I don't know if
this is correct? This is in context to me questioning why I don't seem to
have these same load average problems running Virtualbox.
Brandon High wrote:
On Tue, Jun 8, 2010 at 10:33 AM, besson3c wrote:
On heavy reads or writes (writes seem to be more problematic) my load averages on my VM host shoot up and overall performance is bogged down. I suspect that I do need a mirrored SLOG, but I'm wondering what the best way is
Cindy Swearingen wrote:
Hi Joe,
The REMOVED status generally means that a device was physically removed
from the system.
If necessary, physically reconnect c0t7d0 or if connected, check
cabling, power, and so on.
If the device is physically connected, see what cfgadm says
Cindy Swearingen wrote:
Joe,
Yes, the device should resilver when it's back online.
You can use the fmdump -eV command to discover when this device was
removed and other hardware-related events to help determine when this
device was removed.
I would recommend exporting (not importing) the pool before physically changing
You might bring over all of your old data and snaps, then clone that into a new
volume. Bring your recent stuff into the clone. Since the clone only updates
blocks that are different than the underlying snap, you may see a significant
storage savings.
Two clones could even be made - one for you
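Roughly, assuming the old snapshots can still be sent from somewhere; every
name below is an example, not taken from the thread:
# 1. Receive the old dataset with all of its snapshots into the new pool
zfs send -R oldpool/data@nightly20090715 | zfs recv newpool/old_data
# 2. Clone the newest snapshot; the clone shares its blocks with that snapshot
zfs clone newpool/old_data@nightly20090715 newpool/data
# 3. Copy the recent live data into the clone; only blocks that differ from
#    the snapshot consume new space
rsync -a --inplace /path/to/live/data/ /newpool/data/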
On Tue, Jun 8, 2010 at 12:04 PM, Joe Auty wrote:
>
> Cool, so maybe this guy was going off of earlier information? Was there
> a time when there was no way to enable cache flushing in Virtualbox?
>
The default is to ignore cache flushes, so he was correct for the default
setting. The IgnoreFlush
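For reference, the setting in question is per-disk extradata; the controller
name (piix3ide here, ahci for SATA) and LUN number depend on how the virtual
disk is attached, and "MyVM" is a placeholder:
# Tell VirtualBox to honour the guest's cache-flush requests (0 = do not ignore)
VBoxManage setextradata "MyVM" \
    "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush" 0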
Is there any way to merge them back together? I really need the history data
going back as far as possible, and I'd like to be able to access it from the
same place. I mean, worst case scenario, I could rsync the contents of each
snapshot to the new filesystem and take a snapshot for each one
> "re" == Richard Elling writes:
re> Please don't confuse Ethernet with IP.
okay, but I'm not. seriously, if you'll look into it.
Did you misread where I said FC can exert back-pressure? I was
contrasting with Ethernet.
Ethernet output queues are either FIFO or RED, and are large com
Joerg Schilling wrote:
> This video is not interesting, it is wrong.
> Danese Cooper claims incorrect things, and her claims have already been
> shown to be wrong by Simon Phipps.
>
> http://www.opensolaris.org/jive/message.jspa?messageID=55013#55008
>
> Hope this helps.
>
> Jörg
I see it's a pretty
According to this report, I/O to this device caused a probe failure on
May 31 because the device isn't available.
I was curious if this device had any previous issues over a longer
period of time.
Failing or faulted drives can also kill your pool's performance.
Thanks,
Cindy
On 06/08/10 11:39
Hillel Lubman wrote:
> A very interesting video from DebConf, which addresses CDDL and GPL
> incompatibility issues, and some original reasoning behind CDDL usage:
>
http://caesar.acc.umu.se/pub/debian-meetings/2006/debconf6/theora-small/2006-05-14/tower/OpenSolaris_Java_and_Debian-Simon_Phipps__Alvaro_Lopez_Ortega.ogg
On Tue, Jun 8, 2010 at 11:27 AM, Joe Auty wrote:
> things. I've also read this on a VMWare forum, although I don't know if
> this is correct? This is in context to me questioning why I don't seem to have
> these same load average problems running Virtualbox:
>
> The problem with the comparison Virtu
Hi,
yesterday I changed the /etc/system file and ran:
zdb -e -bcsvL tank1
It produced no output and never returned to a prompt (the process hangs).
Running the following gives the same result:
zdb -eC tank1
Regards
Ron
Hello all,
We have 2 Solaris 10u8 boxes in a small cluster (active/passive)
serving up a ZFS-formatted shared SAS tray as an NFS share. We are going to be
adding a few SSDs into our disk pool and have determined that we need a
SATA/SAS Interposer AAMUX card. Currently the storage tray
On Tue, Jun 8, 2010 at 10:51 AM, BJ Quinn wrote:
> 3. Take a snapshot on the new server and call it the same thing as the
> snapshot that I copied the data from (i.e. datap...@nightly20090715)
It won't work, because the two snapshots are different. It doesn't
matter if they have the same name, the
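In other words, an incremental stream is anchored to the snapshot's identity
(its GUID), not its name. With a genuinely common snapshot in place it would
look something like this (dataset and host names are examples):
# Send only the changes between the common snapshot and the latest one
zfs send -i datapool@nightly20090715 datapool@nightly20100608 | \
    ssh newserver zfs recv -F datapool/data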
On Tue, Jun 8, 2010 at 10:33 AM, besson3c wrote:
> On heavy reads or writes (writes seem to be more problematic) my load
> averages on my VM host shoot up and overall performance is bogged down. I
> suspect that I do need a mirrored SLOG, but I'm wondering what the best way is
The load that you
I have a series of daily snapshots against a set of data spanning several
months, but then the server crashed. In a hurry, we set up a new server and
just copied over the live data and didn't bother with the snapshots (since zfs
send/recv was too slow and would have taken hours and hours to
Joe,
Yes, the device should resilver when it's back online.
You can use the fmdump -eV command to discover when this device was
removed and other hardware-related events to help determine when this
device was removed.
I would recommend exporting (not importing) the pool before physically
changing
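For example (exact event class names vary by platform and fault type):
# One-line summary per logged error event, with timestamps
fmdump -e
# Full detail, including device paths, to pinpoint when c0t7d0 went away
fmdump -eV | less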
It would be helpful if you posted more information about your
configuration.
Numbers *are* useful too, but minimally, describing your setup, use case,
the hardware and other such facts would provide people a place to start.
There are much brighter stars on this list than myself, but if you are
A very interesting video from DebConf, which addresses CDDL and GPL
incompatibility issues, and some original reasoning behind CDDL usage:
http://caesar.acc.umu.se/pub/debian-meetings/2006/debconf6/theora-small/2006-05-14/tower/OpenSolaris_Java_and_Debian-Simon_Phipps__Alvaro_Lopez_Ortega.ogg
Hi Joe,
The REMOVED status generally means that a device was physically removed
from the system.
If necessary, physically reconnect c0t7d0 or if connected, check
cabling, power, and so on.
If the device is physically connected, see what cfgadm says about this
device. For example, a device that
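Something along these lines; the attachment-point name for c0t7d0 will differ
depending on the controller, and the pool name nm is taken from later in this
thread:
# List all attachment points and their occupant/condition state
cfgadm -al
# If the disk shows up unconfigured, configure it and bring it back online
cfgadm -c configure c0::dsk/c0t7d0
zpool online nm c0t7d0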
I just found this project: http://github.com/behlendorf/zfs
Does this mean we will be able to use ZFS as a Linux kernel module in the near future? :)
Looking forward to it!
Richard Elling wrote:
On Jun 7, 2010, at 4:50 PM, besson3c wrote:
Hello,
I have a drive that was a part of the pool showing up as "removed". I made no changes to the machine, and there are no errors being displayed, which is rather weird:
# zpool status nm
pool: nm
state: DEGRADED