Darren J Moffat wrote:
> [EMAIL PROTECTED] wrote:
>
>>> As others have noted, the COW nature of ZFS means that there is a
>>> good chance that on a mostly-empty pool, previous data is still intact
>>> long after you might think it is gone. A utility to rec
A Darren Dunham wrote:
>
> If the most recent uberblock appears valid, but doesn't have useful
> data, I don't think there's any way currently to see what the tree of an
> older uberblock looks like. It would be nice to see if that data
> appears valid and try to create a view that would be
> read
Hi Robert, et al.,
I have blogged about a method I used to recover a removed file from a
zfs file system
at http://mbruning.blogspot.com.
Be forewarned, it is very long...
All comments are welcome.
max
Robert Milkowski wrote:
> Hello max,
>
> Sunday, August 17, 2008, 1:02:05 PM, you wrote:
>
> m
Hi Cyril,
Cyril ROUHASSIA wrote:
>
> Dear all,
> please find below a test that I have run:
>
> #zdb -v unxtmpzfs3    <-- uberblock for unxtmpzfs3 pool
> Uberblock
>
> magic = 00bab10c
> version = 4
> txg = 86983
> guid_sum = 98604897931072281
Hi,
Victor Latushkin wrote:
> Hi Ben,
>
> Ben Rockwood wrote:
>
>> Is there some hidden way to coax zdb into not just displaying data
>> based on a given DVA but rather to dump it in raw usable form?
>>
>> I've got a pool with large amounts of corruption. Several
>> directories are toast and I
Victor Latushkin wrote:
> [EMAIL PROTECTED] wrote:
>> Hi,
>> Victor Latushkin wrote:
>>> Hi Ben,
>>>
>>> Ben Rockwood wrote:
>>>
>>>> Is there some hidden way to coax zdb into not just displaying data
>>>> based on a gi
Hi Derek,
Derek Cicero wrote:
> Victor Latushkin wrote:
>> [EMAIL PROTECTED] wrote:
>>> Hi,
>>> Victor Latushkin wrote:
>>>>
>>> I have decided to file an RFE so that zdb with the -R option will
>>> allow one to decompress data before dumpi
Hi Blake,
Blake Irvin wrote:
> I'm having a very similar issue. Just updated to 10 u6 and upgraded my
> zpools. They are fine (all 3-way mirrors), but I've lost the machine around
> 12:30am two nights in a row.
>
>
> What I'd really like is a way to force a core dump when the machine hangs
> li
Hi Blake,
Blake Irvin wrote:
> Thanks - however, the machine hangs and doesn't even accept console input
> when this occurs. I can't get into the kernel debugger in these cases.
>
Are you directly on the console, or is the console on a serial port? If
you are
running over X windows, the inp
Hi Blake,
Blake Irvin wrote:
> I am directly on the console. cde-login is disabled, so I'm dealing
> with direct entry.
>
>>Are you directly on the console, or is the console on
>> a serial port? If you are
>> running over X windows, the input might still get in,
>> but X may not be displ
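For reference, the usual options for forcing a dump out of a hang like this
(recalled from memory, so double-check the details for your release and
platform):

- If the keyboard abort sequence still gets through, boot with the kernel
  debugger loaded (add -k to the boot flags), break in from the console
  (F1-A on an x86 keyboard, Stop-A or a Break on SPARC/serial), and force a
  panic plus crash dump with:

      $<systemdump

- If the box is too far gone to accept console input, enable the deadman
  timer by adding "set snooping=1" to /etc/system, so it panics (and dumps)
  on its own when the clock stops advancing during a hard hang.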
Hi All,
I have modified mdb so that I can examine data structures on disk using
::print.
This works fine for disks containing ufs file systems. It also works
for zfs file systems, but...
I use the DVA block number from the uberblock_t to print what is at the
block on disk. The problem I am hav
ehow compressed or encrypted.
Thanks for the response. I was beginning to think the only people that
read this mailing list are admins...
(Sorry guys, getting zfs configured properly is much more important than
what I'm doing here, but
this is more interesting to me).
max
t for file data, but for all meta data
as well when a pool is created? Or do I need to figure out how to hack
in the lzjb_decompress() function in
my modified mdb? (Also, I figured out that zdb is already doing the
left shift by 9 before dumping DVA values,
for anyone following this...).
thanks
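As a sketch of that conversion: the byte offset on the device is the DVA
offset shifted left by 9, plus the 4MB taken up by the two front vdev labels
and the boot block, per the on-disk format document. The constant names below
are illustrative, not taken from the ZFS source:

#include <stdint.h>

#define DVA_OFFSET_SHIFT  9            /* DVA offsets count 512-byte sectors */
#define LABEL_BOOT_SIZE   0x400000ULL  /* 2 x 256K front labels + 3.5M boot block */

/* Convert a DVA offset (as stored in a blkptr_t) to a byte offset on the
 * underlying vdev. */
static uint64_t
dva_to_byte_offset(uint64_t dva_offset)
{
        return ((dva_offset << DVA_OFFSET_SHIFT) + LABEL_BOOT_SIZE);
}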
Roch - PAE wrote:
> [EMAIL PROTECTED] writes:
> > Jim Mauro wrote:
> > >
> > > Hey Max - Check out the on-disk specification document at
> > > http://opensolaris.org/os/community/zfs/docs/.
> > >
> > > Page 32 illustration shows the root
Hi Roch,
Roch - PAE wrote:
> [EMAIL PROTECTED] writes:
> > Roch - PAE wrote:
> > > [EMAIL PROTECTED] writes:
> > > > Jim Mauro wrote:
> > > > >
> > > > > Hey Max - Check out the on-disk specification document at
> > >
Hi Bill,
can you guess? wrote:
>> We will be using Cyrus to store mail on 2540 arrays.
>>
>> We have chosen to build 5-disk RAID-5 LUNs in 2
>> arrays which are both connected to same host, and
>> mirror and stripe the LUNs. So a ZFS RAID-10 set
>> composed of 4 LUNs. Multi-pathing also in use fo
Hi Spencer,
spencer wrote:
> On Solaris 10 u3 (11/06) I can execute the following:
>
> bash-3.00# mdb -k
> Loading modules: [ unix krtld genunix specfs dtrace ufs sd pcipsy ip sctp
> usba nca md zfs random ipc nfs crypto cpc fctl fcip logindmux ptm sppp ]
>
>> arc::print
>>
> {
> anon
Hi All,
I have modified zdb to do decompression in zdb_read_block. Syntax is:
# zdb -R poolname:devid:blkno:psize:d,compression_type,lsize
Where compression_type can be lzjb or any other compression type that zdb
uses, and lsize is the logical (decompressed) size. I have used this with a
modified
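For illustration, an invocation of the syntax above might look like this (the
pool name, device id, block number, and sizes are placeholders, not values
from a real pool):

  # zdb -R mypool:0:400000:200:d,lzjb,400

i.e. read the 0x200-byte physical block at that location, run it through lzjb
decompression, and dump the resulting 0x400 bytes of logical data.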
Hi Marcus,
Marcus Sundman wrote:
> Are path-names text or raw data in zfs? I.e., is it possible to know
> what the name of a file/dir/whatever is, or do I have to make more or
> less wild guesses what encoding is used where?
>
> - Marcus
>
I'm not sure what you are asking here. When a zfs file
Mark J Musante wrote:
> On Fri, 7 Mar 2008, Paul Raines wrote:
>
>
>> zfs create -o quota=131G -o reserv=131G -o recsize=8K zpool1/itgroup_001
>>
>> and this is still running now. truss on the process shows nothing. I
>> don't know how to debug it beyond that. I thought I would ask for any
>>
Hi Richard,
Richard Elling wrote:
> Occasionally the topic arises about what to do when a file is
> corrupted. ZFS will tell you about it, but what then? Usually
> the conversation then degenerates into how some people can
> tolerate broken mp3 files or whatever.
>
> Well, the other day I found a
Hi,
I am (hoping) to present a paper at osdevcon in Prague in June. I have
a draft of the paper and
am looking for a couple of people to review it. I am interested to know
the following:
1. Is it understandable?
2. Is it technically correct?
3. Any comments/suggestions to make it better?
The p
Hi,
ZFS can use block sizes up to 128k. If the data is compressed, then it
will be larger when decompressed than the block stored on disk.
So, can the decompressed data be larger than 128k? If so, does this
also hold for metadata? In other words,
can I have a 128k block on the disk with, for instance, indirect block
Hi Mario,
Mario Goebbels wrote:
>> ZFS can use block sizes up to 128k. If the data is compressed, then
>> this size will be larger when decompressed.
>>
>
> ZFS allows you to use variable blocksizes (sized a power of 2 from 512
> to 128k), and as far as I know, a compressed block is put into
Hi Simon,
Simon Breden wrote:
> Thanks a lot Richard. To give a bit more info, I've copied my
> /var/adm/messages from booting up the machine:
>
> And @picker: I guess the 35 requests are stacked up waiting for the hanging
> request to be serviced?
>
> The question I have is where do I go from n
can get from ps.
I am curious if the cp is stuck on a specific file, or is just very
slow, or is hung in the kernel.
Also, can you kill the cp when it hangs?
thanks,
max
>
> 2008/5/1 [EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>
> <[EMAIL PROTECTED] <mailto:[EMAIL
Hi Simon,
Simon Breden wrote:
> Hi Max,
>
> I re-ran the cp command and when it hung I ran 'ps -el', looked up the cp
> command, got its PID and then ran:
>
> # truss -p PID_of_cp
>
> and it output nothing at all -- i.e. it hung too -- just showing a flashing
> cursor.
>
> The system is stil
Hi Simon,
Simon Breden wrote:
>
> Thanks for your advice Max, and here is my reply to your suggestion:
>
>
> # mdb -k
> Loading modules: [ unix genunix specfs dtrace cpu.generic
> cpu_ms.AuthenticAMD.15 uppc pcplusmp scsi_vhci ufs ip hook neti sctp arp usba
> s1394 nca lofs zfs random md sppp smb
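For reference, one way to see where the hung cp is blocked in the kernel from
that mdb -k session is to pipe its proc structure through the thread walker
(the 1234 below is a placeholder for the actual cp PID from ps; ::pgrep,
::pid2proc, ::walk and ::findstack are standard mdb dcmds/walkers):

  > ::pgrep cp
  > 0t1234::pid2proc | ::walk thread | ::findstack -v

The second command prints the kernel stack of each thread in the process,
which should show what cp is waiting on.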
Simon Breden wrote:
> set sata:sata_max_queue_depth = 0x1
>
> =
>
> Anyway, after adding the line above into /etc/system, I rebooted and then
> re-tried the copy with truss:
>
> truss cp -r testdir z4
>
> It seems to hang on random files -- so it's not a
Hi Benjamin,
Benjamin Brumaire wrote:
> I'm trying to decode lzjb-compressed blocks and I'm having a hard time with
> big/little endian issues. I'm on x86 working with build 77.
>
> #zdb - ztest
> ...
> rootbp = [L0 DMU objset] 400L/200P DVA[0]=<0:e0c98e00:200>
> ...
>
> ## zdb -R ztest:c0d1
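As background for the endianness question, here is a sketch of the LZJB
decompression loop, modeled from memory on the OpenSolaris lzjb.c (check the
actual source before relying on it). The two-byte copy token is assembled
from src[0] and src[1] explicitly, so the compressed stream itself is
byte-order independent; endianness only matters when interpreting multi-byte
fields inside the decompressed metadata, which is written in the byte order
of the host that wrote it (the block pointer records which).

#include <sys/types.h>
#include <sys/param.h>          /* NBBY */

#define MATCH_BITS      6
#define MATCH_MIN       3
#define OFFSET_MASK     ((1 << (16 - MATCH_BITS)) - 1)

/* Simplified: the real function also takes the source length and a level arg. */
int
lzjb_decompress(void *s_start, void *d_start, size_t d_len)
{
        uchar_t *src = s_start;
        uchar_t *dst = d_start;
        uchar_t *d_end = (uchar_t *)d_start + d_len;
        uchar_t *cpy;
        uchar_t copymap = 0;
        int copymask = 1 << (NBBY - 1);

        while (dst < d_end) {
                if ((copymask <<= 1) == (1 << NBBY)) {
                        copymask = 1;
                        copymap = *src++;       /* next 8 literal/copy flags */
                }
                if (copymap & copymask) {
                        /* copy item: 6-bit match length, 10-bit back-reference */
                        int mlen = (src[0] >> (NBBY - MATCH_BITS)) + MATCH_MIN;
                        int offset = ((src[0] << NBBY) | src[1]) & OFFSET_MASK;
                        src += 2;
                        if ((cpy = dst - offset) < (uchar_t *)d_start)
                                return (-1);    /* corrupt back-reference */
                        while (--mlen >= 0 && dst < d_end)
                                *dst++ = *cpy++;
                } else {
                        *dst++ = *src++;        /* literal byte */
                }
        }
        return (0);
}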
Hi Simon,
Simon Breden wrote:
> The plot thickens. I replaced 'cp' with 'rsync' and it worked -- I ran it a
> few times and so far it hasn't hung.
>
> So on the face of it, it appears that 'cp' is doing something that causes my
> system to hang if the files are read from and written to the same p
Hi Simon,
Simon Breden wrote:
> Thanks Max, and the fact that rsync stresses the system less would help
> explain why rsync works, and cp hangs. The directory was around 11GB in size.
>
> If Sun engineers are interested in this problem then I'm happy to run
> whatever commands they give me -- aft
With Nevada build 98 I am seeing a slow zpool import of the pool which
holds my user and archive data on my laptop.
I first noticed it during boot, when Solaris reports mounting zfs
filesystems (1/9) and then works for 1-2 minutes before going on. I hear
the disk working but have
I have no snapshots in this zpool.
On 09/22/08 16:09, Sanjeev wrote:
> Detlef,
>
> I presume you have about 9 filesystems. How many snapshots do you have ?
>
> Thanks and regards,
> Sanjeev.
>
> On Mon, Sep 22, 2008 at 03:59:34PM +0200, Detlef [EMAIL PROTECTED] wrote:
Does anyone have a customer
using IBM Tivoli Storage Manager (TSM) with ZFS? I see that IBM has a
client for Solaris 10, but does it work with ZFS?
--
Dan Christensen
System Engineer
Sun Microsystems, Inc.
Des Moines, IA 50266 US
877-263-2204