James McPherson wrote:
On 10/12/06, Steve Goldberg <[EMAIL PROTECTED]> wrote:
Where is the ZFS configuration (zpools, mountpoints, filesystems,
etc) data stored within Solaris? Is there something akin to vfstab
or perhaps a database?
Have a look at the contents of /etc/zfs for an in-filesystem artefact of zfs.
John Sonnenschein wrote:
I *just* figured out this problem, looking for a potential solution
(or at the very least some validation that I'm not crazy).
Okay, so here's the deal. I've been using this terrible horrible
no-good very bad hackup of a couple partitions spread across 3 drives
as a zpool.
On 10/11/06, John Sonnenschein <[EMAIL PROTECTED]> wrote:
As it turns out now, something about the drive is causing the machine to hang
on POST. It boots fine if the drive isn't connected, and if I hot plug the
drive after the machine boots, it works fine, but the computer simply will not
boot.
Hi Darren,
Comments inline
Darren Dunham wrote:
ZFS creates a unique FSID for every filesystem (called an object set in
ZFS terminology).
The unique id is saved (ondisk) as part of dsl_dataset_phys_t in
ds_fsid_guid.
And this id is a random number generated when the FS is created.
This i
On Oct 12, 2006, at 12:23 AM, Frank Cusack wrote:
On October 11, 2006 11:14:59 PM -0400 Dale Ghent
<[EMAIL PROTECTED]> wrote:
Today, in 2006 - much different story. I even had Linux AND Solaris
problems with my machine's MCP51 chipset when it first came out. Both
forcedeth and nge croaked on i
On October 11, 2006 11:14:59 PM -0400 Dale Ghent <[EMAIL PROTECTED]>
wrote:
Today, in 2006 - much different story. I even had Linux AND Solaris
problems with my machine's MCP51 chipset when it first came out. Both
forcedeth and nge croaked on it. Welcome to the bleeding edge. You're
unfortunately
I *just* figured out this problem, looking for a potential solution (or at the
very least some validation that I'm not crazy).
Okay, so here's the deal. I've been using this terrible horrible no-good very
bad hackup of a couple partitions spread across 3 drives as a zpool.
I got sick of having
On 10/11/06, Dale Ghent <[EMAIL PROTECTED]> wrote:
On Oct 11, 2006, at 7:36 PM, David Dyer-Bennet wrote:
> I've been running Linux since kernel 0.99pl13, I think it was, and
> have had amazingly little trouble. Whereas I'm now sitting on $2k of
> hardware that won't do what I wanted it to do un
On Oct 11, 2006, at 7:36 PM, David Dyer-Bennet wrote:
I've been running Linux since kernel 0.99pl13, I think it was, and
have had amazingly little trouble. Whereas I'm now sitting on $2k of
hardware that won't do what I wanted it to do under Solaris, so it's a
bit of a hot-button issue for me r
Well, that's probably because both Windows and Linux were designed with the Intel/x86/cheap crap market in mind. A more valid comparison would be OS X, since it is also designed to run on a somewhat specific set of hardware.
Solaris will get there, but the open aspect of Solaris on Intel is still fair
On 10/12/06, Steve Goldberg <[EMAIL PROTECTED]> wrote:
Where is the ZFS configuration (zpools, mountpoints, filesystems,
etc) data stored within Solaris? Is there something akin to vfstab
or perhaps a database?
Have a look at the contents of /etc/zfs for an in-filesystem artefact
of zfs. Apar
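For what it's worth, here's roughly where to look (the pool and filesystem
names below are made up):

# ls -l /etc/zfs/zpool.cache
# zdb -C
# zfs get -r mountpoint,quota tank

zpool.cache is only a cache of which pools to open at boot; the authoritative
configuration (vdevs, properties, mountpoints) lives in the pool itself, in the
vdev labels and the pool's own metadata, which is why there is no vfstab-style
text file to edit. zdb -C should dump that cached configuration if you want to
see it.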
Hi All,
Where is the ZFS configuration (zpools, mountpoints, filesystems, etc) data
stored within Solaris? Is there something akin to vfstab or perhaps a database?
Thanks,
Steve
Artem Kachitchkine wrote:
# fstyp c3t0d0s0
zfs
s0? How is this disk labeled? From what I saw, when you put an EFI label
on a USB disk, the "whole disk" device is going to be d0 (without
slice). What do these commands print:
# fstyp /dev/dsk/c3t0d0
unknown_fstyp (no matches)
# fdisk -W -
On 10/11/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> The more I learn about Solaris hardware support, the more I see it as
> a minefield.
I've found this to be true for almost all open source platforms where
you're trying to use something that hasn't been explicitly used and
tested by t
# fstyp c3t0d0s0
zfs
s0? How is this disk labeled? From what I saw, when you put an EFI label on a USB
disk, the "whole disk" device is going to be d0 (without slice). What do these
commands print:
# fstyp /dev/dsk/c3t0d0
# fdisk -W /dev/rdsk/c3t0d0
# fdisk -W /dev/rdsk/c3t0d0p0
-Artem.
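If the disk really does carry an EFI label, a couple of other checks that
might narrow it down (device names as in this thread):

# ls -l /dev/dsk/c3t0d0*
# prtvtoc /dev/rdsk/c3t0d0

With a whole-disk EFI label ZFS normally puts its data in slice 0, so prtvtoc
should show a large s0 plus the small reserved slice if the label survived the
unplug/replug.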
David Dyer-Bennet wrote:
On 10/11/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
There are tools around that can tell you if hardware is supported by
Solaris.
One such tool can be found at:
http://www.sun.com/bigadmin/hcl/hcts/install_check.html
Beware of this tool. It reports "Y" for bo
On 10/11/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
There are tools around that can tell you if hardware is supported by
Solaris.
One such tool can be found at:
http://www.sun.com/bigadmin/hcl/hcts/install_check.html
Beware of this tool. It reports "Y" for both 32-bit and 64-bit on the
I'm replacing the stock HD in my VAIO notebook with 2 100GB 7200 RPM Hitachis --
yes, it can hold 2 HDs. ;) I was thinking about doing some sort of striping
setup to get even more performance, but I am hardly a storage expert, so I'm
not sure if it is better to set them up to do software RAID or t
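If you do go the ZFS route, the two obvious layouts look roughly like this
(device names are made up - point them at whatever slices you leave free after
installing the OS):

# zpool create tank c1d0s7 c2d0s7          (dynamic stripe: all the space, no redundancy)
# zpool create tank mirror c1d0s7 c2d0s7   (mirror: half the space, survives losing a disk)

On two laptop drives the stripe will feel faster, but a single drive failure
takes the whole pool with it.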
> On Mon, Oct 09, 2006 at 11:08:14PM -0700, Matthew Ahrens wrote:
> You may also want to try 'fmdump -eV' to get an idea of what those
> faults were.
I am not sure how to interpret the results; maybe you can help me. It looks
like the following, with many more similar pages following:
% fmdu
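A few commands that usually help narrow fmdump output down (no sample output
here since it is machine-specific):

# fmdump -e        (one line per error event, with timestamp and class)
# fmdump -eV       (the full detail, as above)
# fmdump           (the fault log: what the diagnosis engines concluded)
# fmadm faulty     (resources currently marked faulty)

The class string in each record - ereport.io.* versus ereport.fs.zfs.*, for
example - is usually the quickest clue to whether the disk, the transport, or
ZFS itself is complaining.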
Dick Davies wrote:
On 11/10/06, Peter van Gemert <[EMAIL PROTECTED]> wrote:
Hi There,
You might want to check the HCL at http://www.sun.com/bigadmin/hcl to
find out which hardware is supported by Solaris 10.
Greetings,
Peter
I tried that myself - there really isn't very much on there.
I
On Wed, Oct 11, 2006 at 08:24:13PM +0200, Joerg Schilling wrote:
> Before we start defining the first official functionality for this Sun
> feature, we should define a mapping for Mac OS, FreeBSD and Linux. It may
> make sense to define a sub directory for the attribute directory for keepi
Nicolas Williams <[EMAIL PROTECTED]> wrote:
> On Mon, Oct 09, 2006 at 12:44:34PM +0200, Joerg Schilling wrote:
> > Nicolas Williams <[EMAIL PROTECTED]> wrote:
> >
> > > You're arguing for treating FV as extended/named attributes :)
> > >
> > > I think that'd be the right thing to do, since we hav
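For readers who have not played with the existing attribute directory, this is
roughly how it looks from the shell on Solaris today (file and attribute names
are made up):

# runat /tank/file cp /etc/motd note    (store /etc/motd as attribute "note")
# runat /tank/file ls -l                (list the attribute files)
# cp -@ /tank/file /tank/copy           (copy the file together with its attributes)

Any per-OS sub directory convention would presumably hang off that same
attribute directory.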
I have a zfs pool on a USB hard drive attached to my system.
I unplugged it, and when I reconnect it, zpool import does
not see the pool.
# cd /dev/dsk
# fstyp c3t0d0s0
zfs
When I truss zpool import, it looks everywhere (seemingly) *but*
c3t0d0s0 for the pool...
The relevant portion...
sta
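One thing worth trying before anything more drastic: point zpool import at a
directory that contains only the device you care about, for example

# mkdir /tmp/zfsdev
# ln -s /dev/dsk/c3t0d0s0 /tmp/zfsdev/
# zpool import -d /tmp/zfsdev

(the /tmp path is just an example). With -d the import scan is limited to that
directory, so it cannot skip past the slice the way the default device walk
seems to be doing here.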
> ZFS creates a unique FSID for every filesystem (called an object set in
> ZFS terminology).
>
> The unique id is saved (ondisk) as part of dsl_dataset_phys_t in
> ds_fsid_guid.
> And this id is a random number generated when the FS is created.
>
> This id is used to populate the zfs_t structur
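If you want to see those fields on disk, zdb will dump the dataset objects;
something like the following (pool and filesystem names made up, and the
output format varies between builds):

# zdb -d tank
# zdb -dddd tank/home

The more verbose forms should get you down to the DSL dataset object where
ds_fsid_guid is kept.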
Al Hopper wrote:
> On Wed, 11 Oct 2006, Dana H. Myers wrote:
>
>> Al Hopper wrote:
>>
>>> Memory: DDR-400 - your choice but Kingston is always a safe bet. 2*512Mb
>>> sticks for a starter, cost effective, system. 4*512Mb for a good long
>>> term solution.
>> Due to fan-out considerations, every
On Wed, 11 Oct 2006, Dana H. Myers wrote:
> Al Hopper wrote:
>
> > Memory: DDR-400 - your choice but Kingston is always a safe bet. 2*512Mb
> > sticks for a starter, cost effective, system. 4*512Mb for a good long
> > term solution.
>
> Due to fan-out considerations, every BIOS I've seen will ru
Al Hopper wrote:
> Memory: DDR-400 - your choice but Kingston is always a safe bet. 2*512Mb
> sticks for a starter, cost effective, system. 4*512Mb for a good long
> term solution.
Due to fan-out considerations, every BIOS I've seen will run DDR400
memory at 333MHz when connected to more than 1
On Oct 11, 2006, at 10:10 AM, [EMAIL PROTECTED] wrote:
So are there any PCIe SATA cards that are supported? I was hoping
to go with a Sempron64. Using old PCI seems like a waste.
Yes.
I wrote up a little review of the SIIG SC-SAE412-S1 card, which is a
two-port PCIe card based on the Sili
Followup - if you also want to use the machine as a workstation:
Graphics card (PCI Express): Pick an Nvidia-based board to take advantage
of the excellent Solaris native driver[0]. The 7600GS has a great
price/performance ratio. This ref [1] also mentions the 7600GT - although
I'm (almost)
On Tue, 10 Oct 2006 [EMAIL PROTECTED] wrote:
> All,
> So I have started working with Solaris 10 at work a bit (I'm a Linux
> guy by trade) and I have a dying NFS box at home. So the long and short of
> it is as follows: I would like to set up a SATA II whitebox that uses ZFS as
> its filesystem
So are there any PCIe SATA cards that are supported? I was hoping to go with a
Sempron64. Using old PCI seems like a waste.
Regards.
On 10/11/06, Dick Davies <[EMAIL PROTECTED]> wrote:
On 11/10/06, Peter van Gemert <[EMAIL PROTECTED]> wrote:
> Hi There,
>
> You might want to check the HCL at http://w
On 11/10/06, Peter van Gemert <[EMAIL PROTECTED]> wrote:
Hi There,
You might want to check the HCL at http://www.sun.com/bigadmin/hcl to find out
which hardware is supported by Solaris 10.
Greetings,
Peter
I tried that myself - there really isn't very much on there.
I can't believe Solaris r
Generally, I've found the way to go is to get a 4-port SATA PCI
controller (something based on the Silicon Image stuff seems to be
cheap, common, and supported), and then plunk it into any old PC you can
find (or get off of eBay).
The major caveat here is that I'd recommend trying to find a PC
Hi Luke,
Luke Schwab wrote:
Hi,
In migrating from **VM to ZFS, am I going to have an issue with major/minor
numbers on NFS mounts? Take the following scenario.
1. NFS clients are connected to an active NFS server that has SAN shared
storage between the active and standby nodes in a cluster
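For what it's worth, the ZFS half of the sharing setup is just a property
(dataset name made up):

# zfs set sharenfs=on tank/export
# zfs get sharenfs tank/export
# share

so there is no dfstab entry tied to a particular device path; whether the
clients ride through a failover cleanly comes down to the NFS filehandles,
which is exactly the major/minor question you are asking.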
Hi There,
You might want to check the HCL at http://www.sun.com/bigadmin/hcl to find out
which hardware is supported by Solaris 10.
Greetings,
Peter