So I dropped an Intel SSD in our test x4500 last week and have been playing
with it a bit.
Performance-wise, it's great. A source code repository that took 18 minutes
to check out into NFS-mounted ZFS space took only 3 minutes after adding
the SSD as a slog (the performance was almost as good as
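Adding the SSD as a slog is a one-liner; a minimal sketch, assuming the pool is
named tank and the SSD showed up as c2t0d0 (both names hypothetical):

$ zpool add tank log c2t0d0    # attach the SSD as a separate intent-log device
$ zpool status tank            # the device should now be listed under "logs"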
So I was looking into the boot flash feature of the newer x4540, and
evidently it is simply a CompactFlash slot, with all of the disadvantages
and limitations of that type of media. The Sun deployment guide recommends
minimizing writes to a CF boot device, in particular by NFS-mounting /var
from a
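If it helps, the vfstab entry for an NFS-mounted /var would look roughly like
this (the server name and export path below are made up):

# hypothetical /etc/vfstab line
fileserver:/export/var/thishost  -  /var  nfs  -  yes  rw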
My research into recovering from a pool whose slog goes MIA while the pool
is off-line resulted in two possible methods, one requiring prior
preparation and the other a copy of the zpool.cache including data for the
failed pool.
The first method is to simply dump a copy of the slog device right after it is
added to the pool.
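A rough sketch of that first method, with hypothetical device and file names; the
idea is that the saved image can later be written onto a same-sized replacement
device so the pool again sees a log vdev with the label it expects:

$ dd if=/dev/rdsk/c2t0d0s0 of=/var/tmp/slog-copy.img bs=1024k   # right after adding the slog
$ dd if=/var/tmp/slog-copy.img of=/dev/rdsk/c3t0d0s0 bs=1024k   # later, onto a replacement device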
>> bjones 13727 13722 0 14:21:20 ? 0:00 /sbin/zfs recv -vFd
>> pdxfilu02
>>
>> And so on, one for each file system.
>>
>> On the receiving end, 'zfs list' shows one filesystem attempting to
>> receive a snapshot.
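For context, the sending side of these replications is presumably a pipeline of
roughly this shape (the snapshot name, source dataset, and receiving host below
are guesses, not taken from the thread):

$ zfs snapshot -r sourcepool/data@20090605-00:30:00
$ zfs send -R sourcepool/data@20090605-00:30:00 | \
    ssh receiver /sbin/zfs recv -vFd pdxfilu02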
Tim Haley wrote:
Brent Jones wrote:
On the sending side, I CAN kill the ZFS send process, but the remote
side leaves its processes going, and I CANNOT kill -9 them. I also
cannot reboot the receiving system, at init 6, the system will just
hang trying to unmount the file systems.
I have to physically cut power to the system.
I cannot stop it:
$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
pdxfilu02/data/fs01/%20090605-00:30:00 1.74G 27.2T 208G
/pdxfilu02/data/fs01/%20090605-00:30:00
On the sending side, I CAN kill the ZFS send process, but the remote
side leaves its processes going, and I CANNOT kill -9 them.
Brent Jones wrote:
On Fri, Jun 5, 2009 at 3:25 PM, Ian Collins wrote:
Brent Jones wrote:
On the sending side, I CAN kill the ZFS send process, but the remote
side leaves its processes going, and I CANNOT kill -9 them. I also
cannot reboot the receiving system, at init 6, the system will just
hang trying to unmount the file systems.
On Fri, Jun 5, 2009 at 3:25 PM, Ian Collins wrote:
> Brent Jones wrote:
>>
>> On the sending side, I CAN kill the ZFS send process, but the remote
>> side leaves its processes going, and I CANNOT kill -9 them. I also
>> cannot reboot the receiving system, at init 6, the system will just
>> hang trying to unmount the file systems.
Brent Jones wrote:
On the sending side, I CAN kill the ZFS send process, but the remote
side leaves its processes going, and I CANNOT kill -9 them. I also
cannot reboot the receiving system, at init 6, the system will just
hang trying to unmount the file systems.
I have to physically cut power to the system.
On Fri, Jun 5, 2009 at 2:49 PM, Rick Romero wrote:
> On Fri, 2009-06-05 at 14:45 -0700, Brent Jones wrote:
>> On Fri, Jun 5, 2009 at 2:28 PM, Mike La Spina
>> wrote:
>> > Hi,
>> >
>> > I have replications between hosts and they are working fine with zfs
>> > send/recv's after upgrading to Indiana snv_111b (2009.06).
On Fri, Jun 5, 2009 at 2:28 PM, Mike La Spina wrote:
> Hi,
>
> I have replications between hosts and they are working fine with zfs
> send/recv's after upgrading to Indiana snv_111b (2009.06).
>
> Have you run the commands manually to see if any messages/prompts are occurring?
>
> It sounds like it's
Hi Frank,
This bug was filed with bugster, but I see that the opensolaris bug
database is currently unavailable. I sent a note about this problem.
When a root cause is determined for 6844090, then we'll see whether
this particular issue is a ZFS problem or a format/fdisk problem.
In any case, im
I cannot stop it:
$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
pdxfilu02/data/fs01/%20090605-00:30:00 1.74G 27.2T 208G
/pdxfilu02/data/fs01/%20090605-00:30:00
On the sending side, I CAN kill the ZFS send process, but the remote
side leaves its processes going, and I CANNOT kill -9 them.
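For what it's worth, the stuck receives can at least be enumerated and targeted
like this, though as described above a process blocked in the kernel will ignore
even SIGKILL:

$ pgrep -fl 'zfs recv'     # list the receive processes and their arguments
$ pkill -9 -f 'zfs recv'   # attempt to kill them; no effect if they are stuck in-kernel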
Hello Mark, Darren,
Thank you guys for suggesting "zpool history", upon which we stumbled before
receiving your comments. Nonetheless, the history results are posted above.
Still no luck trying to dig out the dataset data, so far.
As I understand it, there are no (recent) backups, which is a poor practice.
"zpool history" has shed a little light. Lots actually.
The sub-dataset in question was indeed created, and around the time ludelete was
run there are entries along the lines of "zfs destroy -r pond/zones/zonename".
There are no precise details (names, mountpoints) about the destroyed datasets -
an
Tobias Exner wrote:
Hi list,
I'm thinking about putting the MS Exchange storage on a ZFS volume via
iSCSI...
I guess that's not a problem, but the more interesting question is whether
it's possible to get more performance using the L2ARC function with a
FusionIO flash card...
In general, yes. But
Jim Klimov wrote:
1) Is it possible to find out (with zdb or any other means) whether a specific ZFS
dataset has ever existed on an importable, valid pool?
'zpool history -il' should tell you that, plus it should tell you who
deleted them and when.
I don't know how to go about recovering a deleted dataset.
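A concrete sketch of that, using the pool name from the history excerpts above:

$ zpool history -il pond                  # -i adds internally logged events, -l adds user/host/zone
$ zpool history -il pond | grep destroy   # narrow it down to the destroy entries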
Hi Jim,
See if 'zpool history' gives you what you're looking for.
Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
I was asked by a coworker about recovering destroyed datasets on ZFS - whether
it is possible at all. As a related question: if a filesystem dataset was
recursively destroyed along with all its snapshots, is there some means to at
least find some pointers showing whether it existed at all?
I remem
I've been dealing with this at an unusually high frequency these days.
It's even dodgier on SPARC. My recipe has been to run format -e and
first try to label as SMI. Solaris PCs sometimes complain that the disk
needs fdisk partitioning and I always delete *all* partitions, exit
fdisk, enter f
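For what it's worth, the non-interactive part of that recipe looks roughly like
this (the disk name is hypothetical; label and fdisk are commands inside format's
menu):

$ format -e c1t0d0
format> fdisk    # on x86: inspect, and if needed delete, the fdisk partitions
format> label    # with -e, format offers a choice between an SMI and an EFI label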
Hi,
I'm pretty sure it will work.
You should be able to easily test it - create an L2ARC on some device or a
file and see if it is being used.
--
Robert Milkowski
http://milek.blogspot.com
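A minimal sketch of that test, with hypothetical pool and device names:

$ zpool add tank cache c3t0d0            # attach the flash device as an L2ARC (cache) vdev
$ zpool iostat -v tank 5                 # reads served by the cache device show up here
$ kstat -m zfs -n arcstats | grep l2_    # l2_hits, l2_size, etc. confirm it is being used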
On Fri, 5 Jun 2009, Tobias Exner wrote:
Hi list,
I'm thinking about putting the MS Exchange storage
Hi list,
I'm thinking about putting the MS Exchange storage on a ZFS volume via
iSCSI...
I guess that's not a problem, but the more interesting question is whether
it's possible to get more performance using the L2ARC function with a
FusionIO flash card...
I understood the idea of the l2arc cache
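For reference, exporting a zvol for Exchange over iSCSI is roughly this (names
and size are assumptions; the shareiscsi property is the older iscsitgt path,
while COMSTAR setups use sbdadm create-lu on /dev/zvol/rdsk/... plus itadm
instead):

$ zfs create -V 200G tank/exchange       # a zvol to back the Exchange LUN
$ zfs set shareiscsi=on tank/exchange    # legacy iscsitgt sharing of the zvol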