> On Wed, Aug 25, 2010 at 12:29 PM, Dr. Martin Mundschenk wrote:
> > Well, I wonder what the components are to build a stable system without an enterprise solution: eSATA, USB, FireWire, Fibre Channel?
>
> If it's possible to get a card to fit into a MacMini, eSATA would be a lot better.
> I'm not sure I didn't have dedup enabled. I might have.
> As it happens, the system rebooted and is now in single-user mode.
> I'm trying another import. Most services are not running, which should free RAM.
>
> If it crashes again, I'll try the live CD while I see about more RAM.
Success.
> Tom,
>
> If you freshly installed the root pool, then those devices should be okay, so that wasn't a good test. The other pools should remain unaffected by the install and, I hope, by the power failure.
Yes. I was able to import them and have since exported them.
>
> We've seen adding RAM help others in the past.
>
I've thought of that. I think the motherboard can only go to 4GB though.
That's why I exported the other zpools - to free up RAM.
The "rule" is 1GB of RAM per TB, right? I have about 4.5 TB with 3 GB RAM, so I'm a bit short of that rule.
> Thanks,
>
My power supply failed. After I replaced it, I had issues staying up after
doing zpool import -f.
I reinstalled OpenSolaris 134 on my rpool and still had issues.
I have 5 pools:
rpool - 1*37GB
data - RAIDZ, 4*500GB
data1 - RAID1 2*750GB
data2 - RAID1 2*750GB
data3 - RAID1 2*2TB - WD20EARS
The s
I'm running ClearCase on a Solaris 10u4 system, with views & vobs.
I lock the vobs, snapshot /var/adm/rational, vobs, and views, then unlock the vobs.
We've been able to copy the snapshot to another server & restore.
I believe ClearCase on ZFS is also supported by Rational. We would not have done it otherwise.
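The lock/snapshot/unlock cycle above can be sketched roughly like this. This is a sketch only: the vob tag, dataset names, and the cleartool path are all hypothetical and vary by site.

```shell
#!/bin/sh
# Sketch of a consistent ClearCase backup via ZFS snapshots.
# Assumes (hypothetically) that vobs and views live on ZFS datasets
# tank/vobs and tank/views, and a vob tagged /vobs/myproj.
CT=/usr/atria/bin/cleartool        # typical ClearCase install path
SNAP="backup-$(date +%Y%m%d)"

$CT lock vob:/vobs/myproj          # quiesce writers to the vob
zfs snapshot tank/vobs@"$SNAP"     # point-in-time copies are near-instant
zfs snapshot tank/views@"$SNAP"
$CT unlock vob:/vobs/myproj        # writers resume almost immediately

# Later, replicate the snapshot to another server:
# zfs send tank/vobs@"$SNAP" | ssh backuphost zfs recv backup/vobs
```

Because the vob is locked only for the instant the snapshots are taken, the outage window stays small.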
I'm running OpenSolaris 10/08 snv_101b with the auto snapshot packages.
I'm getting this error:
/usr/lib/time-slider-cleanup -y
Traceback (most recent call last):
  File "/usr/lib/time-slider-cleanup", line 10, in
    main(abspath(__file__))
  File "/usr/lib/../share/time-slider/lib/time_slider/
What, no VirtualBox image?
This VMware image won't run on VMware Workstation 5.5 either :-(
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
I've found that SFU NFS is pretty poor in general. I set up Samba on the host system. Let the client stay native & have the server adapt.
>
> time gdd if=/dev/zero bs=1048576 count=10240 of=/data/video/x
>
> real 0m13.503s
> user 0m0.016s
> sys  0m8.981s
As someone pointed out, this is a compressed file system :-)
I'll have to get a copy of Bonnie++ or some such to get more accurate numbers.
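The compression effect is easy to demonstrate without ZFS at all. A quick sketch, using gzip as a stand-in for the dataset's on-disk compression: a stream of zeros shrinks to almost nothing, so a /dev/zero benchmark on a compressed filesystem is measuring memory bandwidth, not disk throughput.

```shell
# Zeros compress to almost nothing; random data doesn't compress at all.
# gzip here stands in for ZFS's transparent compression.
dd if=/dev/zero bs=1048576 count=16 2>/dev/null | gzip -c | wc -c
dd if=/dev/urandom bs=1048576 count=16 2>/dev/null | gzip -c | wc -c
```

The first count is a few kilobytes; the second is slightly larger than the 16MB input, which is why incompressible data (Bonnie++, or dd from /dev/urandom) gives honest numbers.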
> On Fri, Jun 6, 2008 at 16:23, Tom Buskey <[EMAIL PROTECTED]> wrote:
> > I have an AMD 939 MB w/ Nvidia on the motherboard and 4 500GB SATA II drives in a RAIDZ.
> ...
> > I get 550 MB/s
> I doubt this number a lot. That's almost 200 MB/s per disk (550/(N-1) = 183 MB/s).
> PCI or PCI-X: yes, you might see *SOME* loss in speed from a PCI interface, but let's be honest, there aren't a whole lot of users on this list that have the infrastructure to use greater than 100MB/sec who are asking this sort of question. A PCI bus should have no issues pushing that kind of traffic.
> (2) You want a 64-bit CPU. So that probably rules
> out your P4 machines,
> unless they were extremely late-model P4s with the
> EM64T features.
> Given that file-serving alone is relatively low-CPU,
> you can get away
> with practically any 64-bit capable CPU made in the
> last 4 years.
A
> Justin,
>
> Thanks for the reply
>
> In the environment I currently work in, the "powers that be" are almost completely anti-Unix. Installing the NFS client on all machines would take a real good sales pitch. Nonetheless I am still
I'm pro Unix & I'm against putting NFS on all the PCs.
I've always done a DiskSuite mirror of the boot disk. It's been easy to do after the install in Solaris. With Linux I had to do it during the install.
OpenSolaris 2008.05 didn't give me an option.
How do I add my 2nd drive to the boot zpool to make it a mirror?
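The usual answer to the question above is `zpool attach` plus reinstalling the boot blocks. A sketch only: the device names are hypothetical, and the second disk needs an SMI label with a slice 0 at least as large as the existing root slice.

```shell
# Sketch: convert rpool to a two-way mirror on OpenSolaris (x86).
# c4t0d0s0 is the existing root slice, c4t1d0s0 the hypothetical new disk.
zpool attach rpool c4t0d0s0 c4t1d0s0   # resilver starts automatically
zpool status rpool                     # watch resilver progress

# Make the second disk bootable too (SPARC uses installboot instead):
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t1d0s0
```

Until the resilver completes, the pool is still only protected by the original disk.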
> > On May 18, 2008, at 14:01, Mario Goebbels wrote:
> > ZFS on Linux on humper would actually be very interesting to many of them. I think that's good for Sun. Of course, ZFS on Linux on
>
> Umm, how many Linux shops buy support and/or HW from Sun?
>
> If it's a Linux sho
Are you using the Supermicro in Solaris or OpenSolaris? Which version? 64-bit or 32-bit?
I'm asking because I recently went through a number of SCSI cards that are in
the HCL as supported, but do not have 64 bit drivers. So they only work in 32
bit mode.
Where do you get an 8 port SATA card that works with Solaris for around $100?
I never said I was a typical consumer. After all, I bought a $1600 DSLR.
If you look around photo forums, you'll see an interest in the digital workflow, which includes long-term storage and archiving. A chunk of these users will opt for an external RAID box (10%? 20%?). I suspect ZFS will change
> Getting back to 'consumer' use for a moment, though,
> given that something like 90% of consumers entrust
> their PC data to the tender mercies of Windows, and a
> large percentage of those neither back up their data,
> nor use RAID to guard against media failures, nor
> protect it effectively fr
If you have disks to experiment on & corrupt (and you will!) try this:
System A mounts the SAN [b]disk[/b] and formats it w/ UFS
System A umounts [b]disk[/b]
System B mounts [b]disk[/b]
B runs [i]touch x[/i] on [b]disk[/b]
System A mounts [b]disk[/b]
System A and B umount [b]disk[/b]
System B runs [i]fsck[/i] on [b]disk[/b]
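The steps above can be sketched as commands. The LUN path is hypothetical, and each line is tagged with the host it runs on; the point is that UFS is not a cluster filesystem, so host A's cached metadata goes stale the moment host B writes.

```shell
# Sketch of the shared-SAN corruption experiment.
# /dev/dsk/c5t0d0s2 is a hypothetical LUN visible to both hosts.
newfs /dev/rdsk/c5t0d0s2           # host A: make a UFS filesystem
mount /dev/dsk/c5t0d0s2 /mnt       # host A
umount /mnt                        # host A
mount /dev/dsk/c5t0d0s2 /mnt       # host B
touch /mnt/x                       # host B: write behind A's back
mount /dev/dsk/c5t0d0s2 /mnt       # host A: mounts with stale cached metadata
umount /mnt                        # host A, then host B
fsck /dev/rdsk/c5t0d0s2            # host B: expect inconsistencies
```

Only cluster-aware filesystems (or strict one-host-at-a-time discipline) make a shared LUN safe; concurrent UFS mounts are what this experiment is designed to break.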
> On Wed, May 23, 2007 at 08:03:41AM -0700, Tom Buskey
> wrote:
> >
> > Solaris is 64-bit with support for 32-bit. I've been running 64-bit Solaris since Solaris 7, as I imagine most Solaris users have. I don't think any other major 64-bit OS h
> Sorry about that, the specific processor in question is the Pentium D 930, which supports 64-bit computing through Extended Memory 64 Technology. It was my initial reaction to say I'd go with 32-bit computing because my general experience with 64-bit is Windows, Linux, and some FreeBSD
I did this on Solaris 10u3, going from 4 120GB to 4 500GB drives. Replace, resilver; repeat until all drives are replaced.
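That replace-and-resilver loop looks roughly like this. A sketch only: the pool name and device names are hypothetical, and the extra capacity only becomes visible after the last member has been replaced (on some releases an export/import was needed to see it).

```shell
# Sketch: grow a 4-disk raidz by swapping each member for a larger disk.
# tank and the cXtYd0 device names are hypothetical.
for i in 0 1 2 3; do
    zpool replace tank c2t${i}d0 c3t${i}d0   # old 120GB -> new 500GB disk
    # wait for this resilver to finish before touching the next disk
    while zpool status tank | grep -q 'resilver in progress'; do
        sleep 60
    done
done
zpool list tank   # capacity grows once the final resilver completes
```

Replacing one disk at a time preserves redundancy throughout; pulling two members of a single-parity raidz at once would lose the pool.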
On 5/14/07, Alec Muffett <[EMAIL PROTECTED]> wrote:
Hi All,
My mate Chris posed me the following; rather than flail about with
engineering friends trying to get a "definitive-de-
I've been using long SATA cables routed out through the case to a home built
chassis with its own power supply for a year now. Not even eSATA. That part
works well.
Substitute USB/FireWire/SCSI/USB thumb drives for this setup; it's really the same problem.
Ok, now you want to deal with a ZFS zpoo
> No 'home user' needs shrink.
> Every professional datacenter needs shrink.
I can think of a scenario. I have an n-disk RAID that I built with n newly purchased disks that are m GB. One dies. I buy a replacement disk, also m GB, but when I put it in, it's really (m - x) GB. I need to shrink
Sorry, that's dd from /dev/zero to /dev/null
I think there's an issue with my SATA card
On 2/7/07, Bart Smaalders <[EMAIL PROTECTED]> wrote:
> Tom Buskey wrote:
> > As a followup, the system I'm trying to use this on is a dual PII 400 with 512MB. Real low budget.
>
> Hmm... that's lower than I would have expected. Something is likely wrong. These machines do have very limited memory
[i]
I got an Addonics eSata card. Sata 3.0. PCI *or* PCI-X. Works right off the bat
w/ 10u3. No firmware update needed. It was $130. But I don't pull out my hair
and I can use it if I upgrade my server for pci-x
[/i]
And I'm finding the throughput isn't there. < 2MB/s in ZFS RAIDZ and worse
wi
That's good to know.
It's a new Addonics 4 port card. Specifically:
ADS3GX4R5-ERAID5/JBOD 4-port ext. SATA II PCI-X
prtconf -v output:
pci1095,7124, instance #0
    Driver properties:
        name='sata' type=int items=1 dev=none
        ...
        name='compatible' type
As a followup, the system I'm trying to use this on is a dual PII 400 with
512MB. Real low budget.
2 500 GB drives with 2 120 GB in a RAIDZ. The idea is that I can get 2 more
500 GB drives later to get full capacity. I tested going from a 20GB to a
120GB and that worked well.
I'm finding th
>However, I don't think OpenSolaris/Solaris support these unless the
>Addonics eSATA PCI-X adapter supports them. I have not figured that
>one out yet. All I know is I want ZFS.
I'm not using the multiplier, but I am using the 4 port Addonics eSATA PCI-X
card in a PCI slot,
btw - eSATA == SATA w
I've been using the Syba 4-port card on Linux and it works well. I bricked another one trying to downgrade the BIOS so it was just disks, no RAID. Ah, $20 gone.
So I got an Addonics eSATA card. SATA 3.0, PCI *or* PCI-X. Works right off the bat w/ 10u3. No firmware update needed. It was $130.
[i]I think the original poster was thinking that non-enterprise users
would be most interested in only having to *purchase* one drive at a time.
Enterprise users aren't likely to balk at purchasing 6-10 drives at a
time, so for them adding an additional *new* RaidZ to stripe across is
easier.
[/i]
[i]Enterprise feature questions), but it's possible now to expand a pool
containing raidz devs-- and this is the more likely case with
enterprise users:
# ls -lh /var/tmp/fakedisk/
total 1229568
-rw------T 1 root root 100M Jan 9 20:22 disk1
-rw------T 1 root root 100M Jan 9 20:22 disk2
-rw------T
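The file-backed demo above can be reproduced end to end on any host with ZFS installed. A rough sketch; the pool name and file paths are hypothetical:

```shell
# Sketch: expand a pool by striping a second raidz vdev across it,
# using file-backed "disks". tank and the paths are hypothetical.
mkdir -p /var/tmp/fakedisk
mkfile 100m /var/tmp/fakedisk/disk1 /var/tmp/fakedisk/disk2 /var/tmp/fakedisk/disk3
zpool create tank raidz /var/tmp/fakedisk/disk1 \
    /var/tmp/fakedisk/disk2 /var/tmp/fakedisk/disk3

# Later: add three more "disks" as a second raidz vdev.
mkfile 100m /var/tmp/fakedisk/disk4 /var/tmp/fakedisk/disk5 /var/tmp/fakedisk/disk6
zpool add tank raidz /var/tmp/fakedisk/disk4 \
    /var/tmp/fakedisk/disk5 /var/tmp/fakedisk/disk6
zpool list tank   # capacity roughly doubles; writes stripe across both vdevs
```

Note this adds a whole new raidz vdev alongside the first; it does not widen the existing raidz, which is exactly the distinction the thread is debating.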
[i]* Maximizing the use of different disk sizes[/i]
[i]If such capabilities exist, you could start with a single disk vdev and grow
it to consume a large disk farm with any number of parity drives, all while the
system is fully available.[/i]
Now you're just teasing me ;-)
I want to setup a ZFS server with RAID-Z. Right now I have 3 disks. In 6
months, I want to add a 4th drive and still have everything under RAID-Z
without a backup/wipe/restore scenario. Is this possible?
I've used NetApps in the past (1996 even!) and they do it. I think they're
using RAID4.