On Sun, Mar 17, 2013 at 1:01 PM, Andrew Werchowiecki <
andrew.werchowie...@xpanse.com.au> wrote:
> I understand that p0 refers to the whole disk... in the logs I pasted in
> I'm not attempting to mount p0. I'm trying to work out why I'm getting an
> error attempting to mount p2, after p1 has succe
On Thu, Dec 6, 2012 at 5:11 AM, Morris Hooten wrote:
> Is there a documented way or suggestion on how to migrate data from VXFS to
> ZFS?
Not zfs-specific, but this should work for solaris:
http://docs.oracle.com/cd/E23824_01/html/E24456/filesystem-3.html#filesystem-15
For illumos-based distros,
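The generic approach in that doc boils down to creating the target dataset and copying the data over, roughly like this (pool name, dataset, and mount points below are just placeholders, untested):

# create the target zfs dataset; "tank" and the mountpoints are placeholders
zfs create -o mountpoint=/export/data tank/data
# copy everything from the mounted vxfs filesystem, preserving directories and mtimes
cd /mnt/vxfs && find . | cpio -pdmu /export/data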
On Mon, Dec 3, 2012 at 4:14 AM, Heiko L. wrote:
> Hello,
>
> How to rename a zpool offline (with zdb)?
You don't.
You simply export the pool, and import it (zpool import). Something like
# zpool import old_pool_name_or_ID new_pool_name
>
> I use OpenSolaris in a VM.
> Pool rpool is too small.
> S
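Spelled out a bit, for a data pool the rename is roughly (pool names are placeholders; a root pool would have to be imported from a live cd or another install):

# export the pool under its old name
zpool export oldpool
# list importable pools to get the name/ID, then import under the new name
zpool import
zpool import oldpool newpool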
On Tue, Nov 27, 2012 at 5:13 AM, Eugen Leitl wrote:
> Now there are multiple configurations for this.
> Some using Linux (root fs on a RAID10, /home on
> RAID 1) or zfs. Now zfs on Linux probably wouldn't
> do hybrid zfs pools (would it?)
Sure it does. You can even use the whole disk as zfs, with
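For example, adding SSD cache/log devices works the same way as on Solaris; a rough sketch, with placeholder pool and device names:

# add an SSD (or partition) as L2ARC read cache
zpool add tank cache /dev/disk/by-id/ata-SOME_SSD-part1
# add another SSD/partition as a separate ZIL log device
zpool add tank log /dev/disk/by-id/ata-SOME_SSD-part2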
On Wed, Nov 21, 2012 at 12:07 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
wrote:
> Why are you parititoning, then creating zpool,
The common case is that they use the disk for something
else as well (e.g. the OS), not only for zfs.
> and then creating zvol?
Because it ena
On Wed, Nov 14, 2012 at 1:35 AM, Brian Wilson
wrote:
> So it depends on your setup. In your case if it's at all painful to grow the
> LUNs, what I'd probably do is allocate new 4TB LUNs - and replace your 2TB
> LUNs with them one at a time with zpool replace, and wait for the resilver to
> fini
On Sat, Oct 27, 2012 at 9:16 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Fajar A. Nugraha
>>
>> So my
>> suggestion is actually just pr
On Sat, Oct 27, 2012 at 4:08 AM, Morris Hooten wrote:
> I'm creating a zpool that is 25TB in size.
>
> What are the recommendations in regards to LUN sizes?
>
> For example:
>
> Should I have 4 x 6.25 TB LUNS to add to the zpool or 20 x 1.25TB LUNs to
> add to the pool?
>
> Or does it depend on th
On Wed, Oct 3, 2012 at 5:43 PM, Jim Klimov wrote:
> 2012-10-03 14:40, Ray Arachelian wrote:
>
>> On 10/03/2012 05:54 AM, Jim Klimov wrote:
>>>
>>> Hello all,
>>>
>>>It was often asked and discussed on the list about "how to
>>> change rpool HDDs from AHCI to IDE mode" and back, with the
>>> mo
On Sat, Sep 29, 2012 at 3:09 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
wrote:
> I am confused, because I would have expected a 1-to-1 mapping, if you create
> an iscsi target on some system, you would have to specify which LUN it
> connects to. But that is not the case...
Nope
On Sun, Sep 16, 2012 at 7:43 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
wrote:
> There's another lesson to be learned here.
>
> As mentioned by Matthew, you can tweak your reservation (or refreservation)
> on the zvol, but you do so at your own risk, possibly putting yourself in
On Thu, Aug 30, 2012 at 11:15 PM, Nomen Nescio wrote:
>> Plus, if you look around a bit, you'll find some tutorials to back up
>> the entire OS using zfs send-receive. So even if for some reason the
>> OS becomes unbootable (e.g. blocks of some critical file are corrupted,
>> which would cause pan
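The general idea in those tutorials is roughly this (pool, snapshot, and destination names are placeholders, untested):

# recursively snapshot the root pool
zfs snapshot -r rpool@backup
# send the full replication stream to a file (or pipe it to "zfs receive" on another box)
zfs send -R rpool@backup | gzip > /backup/rpool.zfs.gz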
On Thu, Aug 30, 2012 at 9:08 PM, Nomen Nescio wrote:
> In this specific use case I would rather have a system that's still bootable
> and runs as best it can
That's what would happen if the corruption happens on part of the disk
(e.g. bad sector).
> than an unbootable system that has detected an
On Tue, Jul 10, 2012 at 4:40 PM, Jordi Espasa Clofent
wrote:
> On 2012-07-10 11:34, Fajar A. Nugraha wrote:
>
>> compression = possibly less data to write (depending on the data) =
>> possibly faster writes
>>
>> Some data is not compressible (e.g. mpeg4 movies), so
On Tue, Jul 10, 2012 at 4:25 PM, Jordi Espasa Clofent
wrote:
> Hi all,
>
> By default I'm using ZFS for all the zones:
>
> admjoresp@cyd-caszonesrv-15:~$ zfs list
> NAME USED AVAIL REFER MOUNTPOINT
> opt 4.77G 45.9G 285M /opt
> opt/zones
On Tue, Jul 3, 2012 at 11:08 AM, Ian Collins wrote:
I'm assuming the pool is hosed?
>>>
>>> Before making that assumption, I'd try something simple first:
>>> - reading from the imported iscsi disk (e.g. with dd) to make sure
>>> it's not iscsi-related problem
>>> - import the disk in another
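For the dd check, something as simple as this is enough (device path is a placeholder):

# read the first ~1GB off the imported iscsi disk; errors here point to iscsi/transport, not zfs
dd if=/dev/rdsk/c2t0d0p0 of=/dev/null bs=1024k count=1000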
On Sun, Jul 1, 2012 at 4:18 AM, Ian Collins wrote:
> On 06/30/12 03:01 AM, Richard Elling wrote:
>>
>> Hi Ian,
>> Chapter 7 of the DTrace book has some examples of how to look at iSCSI
>> target
>> and initiator behaviour.
>
>
> Thanks Richard, I'll have a look.
>
> I'm assuming the pool is hosed
On Mon, Jun 18, 2012 at 2:19 PM, Koopmann, Jan-Peter
wrote:
> Hi Carson,
>
>
> I have 2 Sans Digital TR8X JBOD enclosures, and they work very well.
> They also make a 4-bay TR4X.
>
> http://www.sansdigital.com/towerraid/tr4xb.html
> http://www.sansdigital.com/towerraid/tr8xb.html
>
>
> looks nice!
On Wed, Apr 18, 2012 at 6:43 PM, Jim Klimov wrote:
> Hmmm, how come they have encryption and we don't?
Because the author didn't really try it :)
If he had, he would've known that encryption doesn't work (unless you
encrypt the underlying storage with luks, which doesn't count). And
that Ubuntu p
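The luks workaround mentioned above is roughly (device and names are placeholders, untested):

# encrypt the underlying block device with luks, then build the pool on top of the mapping
cryptsetup luksFormat /dev/sdb
cryptsetup luksOpen /dev/sdb cryptdisk
zpool create tank /dev/mapper/cryptdisk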
On Mon, Mar 26, 2012 at 12:19 PM, Richard Elling
wrote:
> Apologies to the ZFSers, this thread really belongs elsewhere.
Some of the info in it is informative for other zfs users as well though :)
> Here is the output, I changed to "tick-5sec" and "trunc(@, 5)".
>
> No.2 and No.3 are what I care
On Mon, Mar 26, 2012 at 2:13 AM, Aubrey Li wrote:
>> The problem is, every zfs vnode access needs the **same zfs root**
>> lock. When the number of
>> httpd processes and the corresponding kernel threads becomes large,
>> this root lock contention
>> becomes horrible. This situation does not occur
On Thu, Mar 8, 2012 at 10:28 AM, Bob Doolittle wrote:
> On 3/7/2012 9:04 PM, Fajar A. Nugraha wrote:
>>>
>>> Why can't I
>>> just give the old pool name to the raidz pool when I create it?
>>
>> Because you can't have two pools with the same name.
On Thu, Mar 8, 2012 at 5:48 AM, Bob Doolittle wrote:
> Wait, I'm not following the last few steps you suggest. Comments inline:
>
>
> On 03/07/12 17:03, Fajar A. Nugraha wrote:
>>
>> - use the one new disk to create a temporary pool
>> - copy the data ("
On Thu, Mar 8, 2012 at 4:38 AM, Bob Doolittle wrote:
> Hi,
>
> I had a single-disk zpool (export) and was given two new disks for expanded
> storage. All three disks are identically sized, no slices/partitions. My
> goal is to create a raidz1 configuration of the three disks, containing the
> data
On Fri, Jan 6, 2012 at 12:32 PM, Jesus Cea wrote:
> So, my questions:
>
> a) Is this workflow reasonable and would work?. Is the procedure
> documented anywhere?. Suggestions?. Pitfalls?
try
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Complete_Solaris_ZFS_Root_Pool_R
On Wed, Jan 4, 2012 at 1:36 PM, Eric D. Mudama
wrote:
> On Tue, Jan 3 at 8:03, Gary Driggs wrote:
>>
>> I can't comment on their 4U servers but HP's 1&2U included SAS
>> controllers rarely allow JBOD discovery of drives. So I'd recommend an
>> LSI card and an external storage chassis like those
On Fri, Dec 30, 2011 at 1:31 PM, Ray Van Dolson wrote:
> Is there a non-disruptive way to undeduplicate everything and expunge
> the DDT?
AFAIK, no
> zfs send/recv and then back perhaps (we have the extra
> space)?
That should work, but it's disruptive :D
Others might provide better answer th
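Roughly, the send/recv route would be (dataset names are placeholders; it needs enough free space and some downtime for the final switch):

# stop deduping new writes
zfs set dedup=off tank/data
# copy the data into a fresh, dedup-free dataset
zfs snapshot tank/data@undedup
zfs send tank/data@undedup | zfs receive tank/data_new
# then destroy the old dataset and "zfs rename" the new one into place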
On Tue, Dec 20, 2011 at 9:51 AM, Frank Cusack wrote:
> If you don't detach the smaller drive, the pool size won't increase. Even
> if the remaining smaller drive fails, that doesn't mean you have to detach
> it. So yes, the pool size might increase, but it won't be "unexpectedly".
> It will be b
On Mon, Dec 19, 2011 at 12:40 AM, Jan-Aage Frydenbø-Bruvoll
wrote:
> Hi,
>
> On Sun, Dec 18, 2011 at 16:41, Fajar A. Nugraha wrote:
>> Is the pool over 80% full? Do you have dedup enabled (even if it was
>> turned off later, see "zpool history")?
>
> The
On Sun, Dec 18, 2011 at 10:46 PM, Jan-Aage Frydenbø-Bruvoll
wrote:
> The affected pool does indeed have a mix of straight disks and
> mirrored disks (due to running out of vdevs on the controller),
> however it has to be added that the performance of the affected pool
> was excellent until around
On Sun, Dec 18, 2011 at 6:52 PM, Pawel Jakub Dawidek wrote:
> BTW. Can you, Cindy, or someone else reveal why one cannot boot from
> RAIDZ on Solaris? Is this because Solaris is using GRUB and RAIDZ code
> would have to be licensed under GPL as the rest of the boot code?
>
> I'm asking, because I
On Sat, Dec 17, 2011 at 6:48 AM, Edmund White wrote:
> If you can budget 4U of rackspace, the DL370 G6
> is a good option that can accommodate 14LFF or 24 SFF disks (or a
> combination). I've built onto DL180 G6 systems as well. If you do the
> DL180 G6, you'll need a 12-bay LFF model. I'd recomme
On Wed, Nov 30, 2011 at 2:35 PM, Frank Cusack wrote:
>> The second one works on both real hardare and VM, BUT with a
>> prerequisite that you have to export-import rpool first on that
>> particular system. Unless you already have solaris installed, this
>> usually means you need to boot with a live
hardware, and if necessary create live usb there (MUCH
faster than on a VM). If you mean (2), then it won't work unless you
boot with live cd/usb first.
Oh and for reference, instead of usbcopy, I prefer using this method:
http://blogs.oracle.com/jim/entry/how_to_create_a_usb
--
Fajar
On Tue, Nov 22, 2011 at 7:32 PM, Jim Klimov wrote:
>> Or maybe not. I guess this was findroot() in sol10 but in sol11 this
>> seems to have gone away.
>
> I haven't used sol11 yet, so I can't say for certain.
> But it is possible that the default boot (without findroot)
> would use the bootfs pro
On Tue, Nov 22, 2011 at 12:53 PM, Frank Cusack wrote:
> On Mon, Nov 21, 2011 at 9:31 PM, Fajar A. Nugraha wrote:
>>
>> On Tue, Nov 22, 2011 at 12:19 PM, Frank Cusack wrote:
>> >
>> > If we ignore the vbox aspect of it, and assume real hardware with real
>>
On Tue, Nov 22, 2011 at 12:19 PM, Frank Cusack wrote:
> On Mon, Nov 21, 2011 at 9:04 PM, Fajar A. Nugraha wrote:
>>
>> So basically the question is if you install solaris on one machine,
>> can you move the disk (in this case the usb stick) to another machine
>> and bo
On Tue, Nov 22, 2011 at 11:26 AM, Frank Cusack wrote:
> I have a Sun machine running Solaris 10, and a Vbox instance running Solaris
> 11 11/11. The vbox machine has a virtual disk pointing to /dev/disk1
> (rawdisk), seen in sol11 as c0t2.
>
> If I create a zpool on the Sun s10 machine, on a USB
On Sat, Nov 12, 2011 at 9:25 AM, Edward Ned Harvey
wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Linder, Doug
>>
>> All technical reasons aside, I can tell you one huge reason I love ZFS, and it's
>> one that is clearly being co
On Fri, Nov 11, 2011 at 2:52 PM, darkblue wrote:
>>> I recommend buying either the oracle hardware or the nexenta on whatever
>>> they recommend for hardware.
>>>
>>> Definitely DO NOT run the free version of solaris without updates and
>>> expect it to be reliable.
>>
>> That's a bit strong. Yes
On Thu, Nov 10, 2011 at 6:54 AM, Fred Liu wrote:
>
... so when will zfs-related improvements make it to solaris-derivatives :D ?
--
FAN
On Sat, Oct 22, 2011 at 11:36 AM, Paul Kraus wrote:
> Recently someone posted to this list about that _exact_ situation: they loaded
> an OS to a pair of drives while a pair of different drives containing an OS
> were still attached. The zpool on the first pair ended up not being able to
> be importe
2011/10/20 Jim Klimov :
> 2011-10-19 17:54, Fajar A. Nugraha wrote:
>>
>> On Wed, Oct 19, 2011 at 7:52 PM, Jim Klimov wrote:
>>>
>>> Well, just for the sake of completeness: most of our systems are using
>>> zfs-auto-snap service, includi
On Thu, Oct 20, 2011 at 4:33 PM, Albert Shih wrote:
>> > Any advice about the RAM I need on the server (actually one MD1200 so
>> > 12x2TB disks)
>>
>> The more the better :)
>
> Well, my employer is not so rich.
>
> It's the first time I'm going to use ZFS on FreeBSD in production (I use it on my
> lapt
On Thu, Oct 20, 2011 at 7:56 AM, Dave Pooser wrote:
> On 10/19/11 9:14 AM, "Albert Shih" wrote:
>
>>When we buy a MD1200 we need a RAID PERC H800 card on the server
>
> No, you need a card that includes 2 external x4 SFF8088 SAS connectors.
> I'd recommend an LSI SAS 9200-8e HBA flashed with the
On Wed, Oct 19, 2011 at 9:14 PM, Albert Shih wrote:
> Hi
>
> Sorry for cross-posting. I don't know which mailing-list I should post this
> message.
>
> I would like to use FreeBSD with ZFS on some Dell server with some
> MD1200 (classique DAS).
>
> When we buy a MD1200 we need a RAID PERC H800
On Wed, Oct 19, 2011 at 7:52 PM, Jim Klimov wrote:
> 2011-10-13 13:27, Darren J Moffat wrote:
>>
>> On 10/13/11 09:27, Fajar A. Nugraha wrote:
>>>
>>> On Tue, Oct 11, 2011 at 5:26 PM, Darren J Moffat
>>> wrote:
>>>>
>>>> H
On Tue, Oct 18, 2011 at 7:18 PM, Edward Ned Harvey
wrote:
> I recently put my first btrfs system into production. Here are the
> similarities/differences I noticed different between btrfs and zfs:
>
> Differences:
> * Obviously, one is meant for linux and the other solaris (etc)
> * In btrfs, the
On Tue, Oct 18, 2011 at 8:38 PM, Gregory Shaw wrote:
> I came to the conclusion that btrfs isn't ready for prime time. I'll
> re-evaluate as development continues and the missing portions are provided.
For someone with an @oracle.com email address, you could probably arrive
at that conclusion fast
On Tue, Oct 11, 2011 at 5:26 PM, Darren J Moffat
wrote:
> Have you looked at the time-slider functionality that is already in Solaris
> ?
Hi Darren. Is it available for Solaris 10? I just installed Solaris 10
u10 and couldn't find it.
>
> There is a GUI for configuration of the snapshots
the sc
On Sat, Oct 1, 2011 at 8:01 PM, Edward Ned Harvey
wrote:
>> On Sat, Oct 1, 2011 at 5:06 AM, Edward Ned Harvey
>> wrote:
>> > Have you looked at Sun Unified Storage, AKA the 7000 series?
>>
>> Thanks, that would be my fallback plan (along with nexentastor and
> netapp).
>
> So you're basically loo
On Sat, Oct 1, 2011 at 5:06 AM, Edward Ned Harvey
wrote:
> Have you looked at Sun Unified Storage, AKA the 7000 series?
> http://www.oracle.com/us/products/servers-storage/storage/unified-storage/in
> dex.html
>
> I've never used them myself, personally, because I prefer to have the
> control of b
On Fri, Sep 30, 2011 at 8:20 PM, Michael Sullivan
wrote:
> Maybe I'm missing something here, but Amanda has a whole bunch of bells and
> whistles, and scans the filesystem to determine what should be backed up.
> Way overkill for this task I think.
I was using Amanda as an example because it h
On Fri, Sep 30, 2011 at 7:22 PM, Edward Ned Harvey
wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Fajar A. Nugraha
>>
>> Does anyone know a good commercial zfs-based storage replication
>> software
Hi,
Does anyone know a good commercial zfs-based storage replication
software that runs on Solaris (i.e. not an appliance, not another OS
based on solaris)?
Kinda like Amanda, but for replication (not backup).
Thanks,
Fajar
On Wed, Sep 28, 2011 at 8:21 AM, Edward Ned Harvey
wrote:
> When a vdev resilvers, it will read each slab of data, in essentially time
> order, which is approximately random disk order, in order to reconstruct the
> data that must be written on the resilvering device. This creates two
> problems,
2011/9/22 Ian Collins
>
>> The OS is installed and working, and rpool is mirrored on the two disks.
>>
>> The question is: I want to create some ZFS file systems for sharing them via
>> CIFS. But given my limited configuration:
>>
>> * Am I forced to create the new filesystems directly on rpool?
On Tue, Sep 13, 2011 at 4:37 PM, Fajar A. Nugraha wrote:
>> here is what i have been doing i take snapshots of the 5 file systems, i zfs
>> send these into a directory, gzip the files and then tar them onto tape.
>> this takes a considerable amount of time.
>> my questi
On Tue, Sep 13, 2011 at 3:48 PM, cephas maposah wrote:
> hello team
> i have an issue with my ZFS system, i have 5 file systems and i need to take
> a daily backup of these onto tape. how best do you think i should do these?
> the smallest filesystem is about 50GB
It depends.
You can backup the
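For example, one option is to stream each filesystem, compressed, straight to tape instead of staging it in a directory (dataset and tape device names are placeholders, untested):

# snapshot one filesystem and send it through gzip to the no-rewind tape device
zfs snapshot tank/fs1@daily
zfs send tank/fs1@daily | gzip | dd of=/dev/rmt/0n bs=1024k
# repeat for the other filesystems, then rewind/eject the tape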
On Fri, Aug 12, 2011 at 3:05 PM, Vikash Gupta wrote:
> I use the df command and it's not showing the zfs file system in the list.
>
> zfs mount -a does not return any error.
First of all, please check whether you're posting to the right place.
zfs-discuss@opensolaris.org, as the name implies, mostly r
On Wed, Aug 10, 2011 at 2:56 PM, Lanky Doodle wrote:
> Can you elaborate on the dd command LaoTsao? Is the 's' you refer to a
> parameter of the command or the slice of a disk - none of my 'data' disks
> have been 'configured' yet. I wanted to ID them before adding them to pools.
For starters,
On Wed, Aug 3, 2011 at 1:10 PM, Fajar A. Nugraha wrote:
>> After my install completes on the smaller mirror, how do I access the 500G
>> mirror where I saved my data? If I simply create a tank mirror using those
>> drives will it recognize there's data there and make it ac
On Wed, Aug 3, 2011 at 7:02 AM, Nomen Nescio wrote:
> I installed a Solaris 10 development box on a 500G root mirror and later I
> received some smaller drives. I learned from this list its better to have
> the root mirror on the smaller small drives and then create another mirror
> on the origina
On Wed, Aug 3, 2011 at 8:38 AM, Anonymous Remailer (austria)
wrote:
>
> Hi Roy, things got a lot worse since my first email. I don't know what
> happened but I can't import the old pool at all. It shows no errors but when
> I import it I get a kernel panic from assertion failed: zvol_get_stats(os,
On Fri, Jul 29, 2011 at 4:57 PM, Hans Rosenfeld wrote:
> On Fri, Jul 29, 2011 at 01:04:49AM -0400, Daniel Carosone wrote:
>> .. evidently doesn't work. GRUB reboots the machine moments after
>> loading stage2, and doesn't recognise the fstype when examining the
>> disk loaded from an alternate sou
On Tue, Jul 26, 2011 at 1:33 PM, Bernd W. Hennig
wrote:
> G'Day,
>
> - zfs pool with 4 disks (from Clariion A)
> - must migrate to Clariion B (so I created 4 disks with the same size,
> available for the zfs)
>
> The zfs pool has no mirrors, my idea was to add the new 4 disks from
> the Clariion B
On Tue, Jul 26, 2011 at 3:28 PM, wrote:
>
>
>>Bullshit. I just got an OCZ Vertex 3, and the first fill was 450-500MB/s.
>>Second and subsequent fills are at half that speed. I'm quite confident
>>that it's due to the flash erase cycle that's needed, and if stuff can
>>be TRIM:ed (and thus flash erase
On Wed, Jul 20, 2011 at 1:46 AM, Roy Sigurd Karlsbakk
wrote:
> Could you try to just boot up fbsd or linux on the box to see if zfs (native
> or fuse-based, respectively) can see the drives?
Yup, that seems like the best idea.
Assuming that all those drives are the original drives with ra
On Tue, Jul 19, 2011 at 4:29 PM, Brett wrote:
> Ok, I went with windows and virtualbox solution. I could see all 5 of my
> raid-z disks in windows. I encapsulated them as entire disks in vmdk files
> and subsequently offlined them to windows.
>
> I then installed a sol11exp vbox instance, attach
On Mon, Jul 18, 2011 at 3:28 PM, Tiernan OToole wrote:
> Ok, so, taking 2 300Gb disks, and 2 500Gb disks, and creating an 800Gb
> mirrored striped thing is sounding like a bad idea... what about just
> creating a pool of all disks, without using mirrors? I seen something called
> "copies", which i
On Tue, Jul 12, 2011 at 6:18 PM, Jim Klimov wrote:
> 2011-07-12 9:06, Brandon High wrote:
>>
>> On Mon, Jul 11, 2011 at 7:03 AM, Eric Sproul wrote:
>>>
>>> Interesting-- what is the suspected impact of not having TRIM support?
>>
>> There shouldn't be much, since zfs isn't changing data in place.
On Sun, Jul 10, 2011 at 10:10 PM, Gary Mills wrote:
> The `lofiadm' man page describes how to export a file as a block
> device and then use `mkfs -F pcfs' to create a FAT filesystem on it.
>
> Can't I do the same thing by first creating a zvol and then creating
> a FAT filesystem on it?
seems no
On Tue, Jul 5, 2011 at 8:03 PM, Edward Ned Harvey
wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Orvar Korvar
>>
>> Here is my problem:
>> I have an 1.5TB disk with OpenSolaris (b134, b151a) using non AHCI.
>> I then changed to AHC
On Mon, Jul 4, 2011 at 5:45 PM, Fajar A. Nugraha wrote:
> - "Used", as reported by "df", will match "Used", as reported by "zfs
> list".
Sorry, it should be
"Used", as reported by "df", will match "Refer", as reported b
On Mon, Jul 4, 2011 at 5:19 PM, Orvar Korvar
wrote:
> The problem is more clearly stated here. Look, 700GB is gone (the correct
> number is 620GB)!
Somehow you remind me of the story "the boy who cried wolf" (Look,
look! The wolf ate my disk space) :P
>
> First I do "zfs list" onto TempStorage/
On Fri, Jun 24, 2011 at 7:44 AM, David W. Smith wrote:
>> Generally, the log devices are listed after the pool devices.
>> Did this pool have log devices at one time? Are they missing?
>
> Yes the pool does have logs. I'll include a zpool status -v below
> from when I'm booted in solaris 10 U9.
On Thu, Jun 23, 2011 at 9:28 AM, David W. Smith wrote:
> When I tried out Solaris 11, I just exported the pool prior to the install of
> Solaris 11. I was lucky in that I had mirrored the boot drive, so after I had
> installed Solaris 11 I still had the other disk in the mirror with Solaris 10
>
On Tue, Jun 14, 2011 at 7:15 PM, Jim Klimov wrote:
> Hello,
>
> A college friend of mine is using Debian Linux on his desktop,
> and wondered if he could tap into ZFS goodness without adding
> another server in his small quiet apartment or changing the
> desktop OS. According to his research, the
On Wed, Jun 1, 2011 at 7:06 AM, Bill Sommerfeld wrote:
> On 05/31/11 09:01, Anonymous wrote:
>> Hi. I have a development system on Intel commodity hardware with a 500G ZFS
>> root mirror. I have another 500G drive same as the other two. Is there any
>> way to use this disk to good advantage in thi
On Tue, May 31, 2011 at 5:47 PM, Jim Klimov wrote:
> However it seems that there may be some extra data beside the zfs
> pool in the actual volume (I'd at least expect an MBR or GPT, and
> maybe some iSCSI service data as an overhead). One way or another,
> the "dcpool" can not be found in the phy
On Thu, May 12, 2011 at 8:31 PM, Arjun YK wrote:
> Thanks everyone. Your inputs helped me a lot.
> The 'rpool/ROOT' mountpoint is set to 'legacy' as I don't see any reason to
> mount it. But I am not certain if that can cause any issue in the future, or
> if that's the right thing to do. Any suggestion
On Fri, Apr 8, 2011 at 2:37 PM, Stephan Budach wrote:
> You can rename a zpool at import time by simply issuing:
>
> zpool import
Yes, I know :)
The last question from Arjun was can we "choose any name for root
pool, instead of 'rpool', during the OS install" :D
--
Fajar
On Fri, Apr 8, 2011 at 2:24 PM, Arjun YK wrote:
> Hi,
>
> Let me add another query.
> I would assume it would be perfectly ok to choose any name for root
> pool, instead of 'rpool', during the OS install. Please suggest
> otherwise.
Have you tried it?
Last time I tried, the pool name was predetermi
On Fri, Apr 8, 2011 at 2:10 PM, Arjun YK wrote:
> Hello,
>
> I have a situation where a host, which is booted off its 'rpool', needs
> to temporarily import the 'rpool' of another host, edit some files in
> it, and export the pool back retaining its original name 'rpool'. Can
> this be done ?
>
> H
On Mon, Apr 4, 2011 at 6:48 PM, For@ll wrote:
>> When I test with openindiana b148, simply setting zpool set
>> autoexpand=on is enough (I tested with Xen, and openindiana reboot is
>> required). Again, you might need to set both "autoexpand=on" and
>> resize partition slice.
>>
>> As a first ste
On Mon, Apr 4, 2011 at 7:58 PM, Roy Sigurd Karlsbakk wrote:
>> IIRC if you pass a DISK to "zpool create", it would create
>> partition/slice on it, either with SMI (the default for rpool) or EFI
>> (the default for other pool). When the disk size changes (like when
>> you change LUN size on storag
On Mon, Apr 4, 2011 at 4:49 PM, For@ll wrote:
>>> What can I do that zpool show new value?
>>
>> zpool set autoexpand=on TEST
>> zpool set autoexpand=off TEST
>> -- richard
>
> I tried your suggestion, but no effect.
Did you modify the partition table?
IIRC if you pass a DISK to "zpool create",
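For completeness, the usual sequence after growing a LUN is something like this (pool name TEST as above, disk name is a placeholder):

# let the pool pick up extra space automatically when the LUN grows
zpool set autoexpand=on TEST
# if the disk has an SMI/EFI label, resize the partition/slice first (e.g. with format),
# then tell zfs to expand onto the new space
zpool online -e TEST c0t0d0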
On Mon, Apr 4, 2011 at 4:16 AM, Daxter wrote:
> My goal is to optimally have two 1TB drives inside of a rather small computer
> of mine, running Solaris, which can sync with and be a backup of my somewhat
> portable 2TB drive. Up to this point I have been using the 2TB drive without
> any redun
On Wed, Mar 23, 2011 at 7:33 AM, Jeff Bacon wrote:
>> I've also started conversations with Pogo about offering an OpenIndiana
>> based workstation, which might be another option if you prefer more of
> Sometimes I'm left wondering if anyone uses the non-Oracle versions for
> anything but file s
On Sun, Mar 20, 2011 at 4:05 AM, Pawel Jakub Dawidek wrote:
> On Fri, Mar 18, 2011 at 06:22:01PM -0700, Garrett D'Amore wrote:
>> Newer versions of FreeBSD have newer ZFS code.
>
> Yes, we are at v28 at this point (the lastest open-source version).
>
>> That said, ZFS on FreeBSD is kind of a 2nd c
On Wed, Feb 16, 2011 at 8:53 PM, Jeff liu wrote:
> Hello All,
>
> I'd like to know if there is a utility like `Filefrag' shipped with
> e2fsprogs on linux, which is used to fetch the extent mapping info of a
> file(especially a sparse file) located on ZFS?
Something like zdb - maybe?
http
On Tue, Feb 15, 2011 at 5:47 AM, Mark Creamer wrote:
> Hi I wanted to get some expert advice on this. I have an ordinary hardware
> SAN from Promise Tech that presents the LUNs via iSCSI. I would like to use
> that if possible with my VMware environment where I run several Solaris /
> OpenSolaris
On Sun, Feb 13, 2011 at 7:40 PM, Pasi Kärkkäinen wrote:
> On Sat, Feb 12, 2011 at 08:54:26PM +0100, Roy Sigurd Karlsbakk wrote:
>> > I see that Pinguy OS, an uber-Ubuntu o/s, includes native ZFS support.
>> > Any pointers to more info on this?
>>
>> There is some work in progress from http://zfso
On Mon, Jan 31, 2011 at 3:47 AM, Peter Jeremy
wrote:
> On 2011-Jan-28 21:37:50 +0800, Edward Ned Harvey
> wrote:
>>2- When you want to restore, it's all or nothing. If a single bit is
>>corrupt in the data stream, the whole stream is lost.
>>
>>Regarding point #2, I contend that zfs send is bet
On Thu, Jan 6, 2011 at 11:36 PM, Garrett D'Amore wrote:
> On 01/ 6/11 05:28 AM, Edward Ned Harvey wrote:
>> See my point? Next time I buy a server, I do not have confidence to
>> simply expect solaris on dell to work reliably. The same goes for solaris
>> derivatives, and all non-sun hardware.
On Mon, Jul 19, 2010 at 11:06 PM, Richard Jahnel wrote:
> I've tried ssh blowfish and scp arcfour. Both are CPU limited long before the
> 10g link is.
>
> I've also tried mbuffer, but I get broken pipe errors part way through the
> transfer.
>
> I'm open to ideas for faster ways to either zfs
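For reference, the usual mbuffer setup is roughly like this (hosts, port, and dataset names are placeholders; buffer sizes are just examples):

# on the receiving host: listen on a TCP port and feed the stream to zfs receive
mbuffer -s 128k -m 1G -I 9090 | zfs receive tank/backup
# on the sending host: push the stream through mbuffer instead of ssh
zfs send tank/data@snap | mbuffer -s 128k -m 1G -O receiver:9090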
On Sun, Jul 4, 2010 at 12:22 AM, Garrett D'Amore wrote:
> I am sorry you feel that way. I will look at your issue as soon as I am
> able, but I should say that it is almost certain that whatever the problem
> is, it probably is inherited from OpenSolaris and the build of NCP you were
> testing
On Thu, Mar 25, 2010 at 10:31 AM, Carson Gaspar wrote:
> Fajar A. Nugraha wrote:
>>> You will do best if you configure the raid controller to JBOD.
>>
>> Problem: HP's storage controller doesn't support that mode.
>
> It does, ish. It forces you to crea
On Thu, Mar 25, 2010 at 1:02 AM, Edward Ned Harvey
wrote:
> I think the point is to say: ZFS software raid is both faster and more
> reliable than your hardware raid. Surprising though it may be for a
> newcomer, I have statistics to back that up,
Can you share it?
> You will do best if you co
On Fri, Mar 19, 2010 at 12:38 PM, Rob wrote:
> Can a ZFS send stream become corrupt when piped between two hosts across a
> WAN link using 'ssh'?
Unless the end computers are bad (memory problems, etc.), the
answer should be no. ssh has its own error detection method, and the
zfs send strea
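For reference, the usual pipe is (host and dataset names are placeholders):

# ssh checks integrity on the wire, and zfs receive verifies the stream's own checksums on top
zfs send tank/data@snap | ssh backuphost zfs receive -F backup/data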