>> Thanks for the help!
>>
>> -Greg
>>
>>
>>
>> On Wed, Aug 10, 2011 at 9:16 AM, phil.har...@gmail.com
>> wrote:
>> > I would generally agree that dd is not a great benchmarking tool,
>> > but you
>> > could
so note that an initial run that creates files may be
> quicker because it just allocates blocks, whereas subsequent rewrites
> require copy-on-write.
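A minimal sketch of that effect. The path below is a stand-in (`/tmp`) so the commands run anywhere; on a file that actually lives on a ZFS dataset, the second, rewriting run is the one that exercises copy-on-write:

```shell
# First run: creates the file, so ZFS merely allocates fresh blocks.
dd if=/dev/zero of=/tmp/dd_bench.dat bs=1M count=16 2>/dev/null
# Second run: rewrites the same range in place (conv=notrunc keeps the
# file); on ZFS every rewritten block is copy-on-write, so new blocks
# are allocated and old ones freed -- often slower than the first run.
dd if=/dev/zero of=/tmp/dd_bench.dat bs=1M count=16 conv=notrunc 2>/dev/null
ls -l /tmp/dd_bench.dat
```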
>
> - Reply message -
> From: "Peter Tribble"
> To: "Gregory Durham"
> Cc:
> Subject: [zfs
Hello,
We just purchased two of the sc847e26-rjbod1 units to be used in a
storage environment running Solaris 11 express.
We are using Hitachi HUA723020ALA640 6 Gb/s drives with an LSI SAS
9200-8e HBA. We are not using failover/redundancy, meaning that one
port of the HBA goes to the primary front
Hey Ed,
Thanks for the comment. I have been thinking along the same lines; I
am going to continue trying to use Bacula, but we will see. Out of
curiosity, what version of NetBackup are you using? I would love to
feel pretty well covered, haha.
Thanks a lot!
Greg
On Wed, Mar 10, 2010 a
Hello all,
I need to backup some zpools to tape. I currently have two servers,
for the purpose of this conversation we will call them server1 and
server2 respectively. Server1, has several zpools which are replicated
to a single zpool on server2 through a zfs send/recv script. This part
works perfe
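A minimal sketch of the kind of step such a send/recv script performs. The pool and dataset names (tank/data, backup/data) and the ssh-reachable host name server2 are assumptions for illustration, not details from this thread; an incremental run would add `-i` with the previous snapshot:

```shell
# Snapshot the source dataset (all names here are placeholders).
SNAP="tank/data@rep-$(date +%Y%m%d%H%M)"
zfs snapshot "$SNAP"
# Full send to the failover box; -F lets the receiving dataset be
# rolled back to match the incoming stream.
zfs send "$SNAP" | ssh server2 zfs receive -F backup/data
```

This requires a live ZFS pool on both ends, so it is a sketch of the shape of the operation rather than something to paste verbatim.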
top of other pools can cause the
> system to deadlock or panic.
>
> This kind of configuration is just not supported or recommended
> at this time.
>
> Thanks,
>
> Cindy
>
>
>
>
> On 03/05/10 17:38, Gregory Durham wrote:
>>
>> Great...will using lofiad
ase/view_bug.do?bug_id=6929751
>
> Currently, building a pool from files is not fully supported.
>
> Thanks,
>
> Cindy
>
> On 03/05/10 16:15, Gregory Durham wrote:
>
>> Hello all,
>> I am using Opensolaris 2009.06 snv_129
>> I have a quick questi
Hello all,
I am using Opensolaris 2009.06 snv_129
I have a quick question. I have created a zpool on a sparse file, for
instance:
zpool create -m /media/stage stage c10d0s0
mkdir /media/stage/disks
mkfile -n 500g /media/stage/disks/disk.img
zpool create zfsStage /media/stage/disks/disk.img
I want to be able to t
2010 at 12:01:36PM -0800, Gregory Durham wrote:
> > Hello All,
> > I read through the attached threads and found a solution by a poster and
> > decided to try it.
>
> That may have been mine - good to know it helped, or at least started to.
>
> > The solution w
zfs send to two
destinations simultaneously? Or am I stuck? Any pointers would be great!
I am using OpenSolaris snv_129 and the disks are SATA WD 1 TB 7200 RPM disks.
Thanks All!
Greg
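One answer to the question above (a single `zfs send` feeding two destinations) can be sketched with tee plus bash process substitution. The sketch below uses printf and plain files as stand-ins so it runs anywhere; in the real case the producer would be `zfs send tank/data@snap` and each consumer an `ssh hostN zfs receive ...` (those names are assumptions, not something settled in this thread):

```shell
#!/usr/bin/env bash
# Duplicate one stream to two destinations with tee + process substitution.
# printf stands in for `zfs send`; plain files stand in for `zfs receive`.
printf 'snapshot-stream' \
  | tee >(cat > /tmp/dest1.bin) \
  > /tmp/dest2.bin
sleep 1  # give the process substitution a moment to finish writing
cmp -s /tmp/dest1.bin /tmp/dest2.bin && echo "destinations match"
```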
On Mon, Jan 25, 2010 at 3:41 PM, Gregory Durham wrote:
> Well I guess I am glad I am not the only one. Thanks
Well I guess I am glad I am not the only one. Thanks for the heads up!
On Mon, Jan 25, 2010 at 3:39 PM, David Magda wrote:
> On Jan 25, 2010, at 18:28, Gregory Durham wrote:
>
> One option I have seen is zfs send zfs_s...@1 > /some_dir/some_file_name.
>> Then I can back thi
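A sketch of that send-to-a-file approach, with placeholder pool, snapshot, and path names (none of them are from this thread). The resulting stream file is what would then go to tape, and it can be restored later with `zfs receive`:

```shell
# Serialize a snapshot to an ordinary file (names are placeholders).
zfs send tank/data@backup1 > /backup/tank-data-backup1.zfs
# Later, restore the stream into a (possibly different) dataset.
zfs receive -F tank/restored < /backup/tank-data-backup1.zfs
```

Like any `zfs send` example, this needs a live pool, so treat it as the shape of the idea rather than exact commands.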
Hello all,
I have quite a bit of data transferring between two machines via snapshot
send and receive, and this has been working flawlessly. I am now wanting to
back up the data from the failover to tape. I was planning on using Bacula as
I have a bit of experience with it. I am now trying to figure out th
Thank you so much Fajar,
You have been incredibly helpful! I will do as you said; I am just glad I
have not been going down the wrong path!
Thanks,
Greg
On Thu, Jan 14, 2010 at 4:45 PM, Fajar A. Nugraha wrote:
> On Fri, Jan 15, 2010 at 12:33 AM, Gregory Durham
> wrote:
> >
snapshotting on the ESXi server?
Thanks for all the helpful information!
Greg
On Wed, Jan 13, 2010 at 9:12 PM, Gregory Durham wrote:
> Haha, Yeah that's tomorrow, I have a test vm I will be testing on. I shall
> report back! Thank you all!
>
>
> On Wed, Jan 13, 2010 at 8:26 PM, Faj
Haha, Yeah that's tomorrow, I have a test vm I will be testing on. I shall
report back! Thank you all!
On Wed, Jan 13, 2010 at 8:26 PM, Fajar A. Nugraha wrote:
> On Thu, Jan 14, 2010 at 6:40 AM, Gregory Durham
> wrote:
> > Arnaud,
> > The virtual machines coming up as
Arnaud,
The virtual machines coming up as if they were on is the least of my
worries; my biggest worry is keeping the filesystems of the VMs alive, i.e.
not corrupt. I have all of my virtual machines set up with raw LUNs in
physical compatibility mode. This has increased performance, but sadly at the
Tim,
iSCSI was a design decision at the time. Performance was key, and I wanted
to utilize being able to hand a LUN on the SAN to ESXi and use it as a raw
disk in physical compatibility mode...however, what this has done is that I
can no longer take snapshots on the ESXi server and must rely on ZFS