On 07/09/2012 07:21 AM, Dan Swartzendruber wrote:
> Unless I am misunderstanding the above, we are almost never hitting on
> prefetched data, and barely ever on prefetched metadata. Given that, is
> there even a reason to leave prefetch on? I mean, it does generate extra
> reads, no?
My experi
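For reference, and assuming the stock illumos arcstats kstat (a sketch, not part of the truncated reply above), the prefetch hit/miss counters the quoted message refers to can be read directly, and prefetch can be disabled persistently via an /etc/system tunable:

  # show prefetch hit/miss counters
  kstat -p zfs:0:arcstats | grep prefetch

  # disable prefetch: add to /etc/system, then reboot
  set zfs:zfs_prefetch_disable = 1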
On 07/05/2012 03:49 PM, Jon Tibble wrote:
> prestable3 == oi_151a4 == 0.151.1.4
>
> The versioning was brought into line in prestable5 as the release notes for
> a5 state.
So is there a method by which I can get zone installation to work again
on oi_151a4 hosts?
Cheers,
--
Saso
Thanks, Saso. I will try that out... Most of the I/O is random in nature
and read-heavy, since it is feeding an ESXi datastore on behalf of 6 or so
VMs...
-----Original Message-----
From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
Sent: Monday, July 09, 2012 3:58 AM
To: Discussion list for
On 07/09/2012 01:31 PM, Sašo Kiselkov wrote:
> On 07/05/2012 03:49 PM, Jon Tibble wrote:
>> prestable3 == oi_151a4 == 0.151.1.4
>>
>> The versioning was brought into line in prestable5 as the release notes for
>> a5 state.
>
> So is there a method by which I can get zone installation to work again
> on oi_151a4 hosts?
I'd also suggest integrating with automount rather than creating an
unnecessary boot-time dependency via /etc/vfstab.
Make sure that /etc/auto_master contains a line like this:
/- auto_direct
This ensures that arbitrary paths (unlike the fixed /home or
/net defined on other lines) can be automounted on demand.
2012-07-06 13:41, John McEntee wrote:
The inbuilt kernel CIFS server only does file sharing and therefore cannot
be a master browser, so your Windows 7 machine will be it. The solution is
to either 1) live with the problem, 2) use Samba instead, or 3) run a virtual
machine on the Solaris system t
I upgraded a machine to oi_151a5 from oi_151a4 last week, and when its
weekly scrub rolled around, /var/adm/messages gathered a lot of these,
in groups of dozens at a time:
Jul 7 01:15:21 myelin2 scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci8086,340a@3/pci1000,30c0@0 (mpt_sas0):
Jul 7 01
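Not from the original message, but the usual way to correlate such warnings with per-device error counters on illumos is:

  iostat -En     # soft/hard/transport error counters per device
  fmdump -e      # FMA ereport log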
I've got a number of OI machines on Supermicro hardware using mpt_sas,
and have never seen that error, so I'm impressed.
That said, I don't think I'd call that "plenty of memory", depending
on your dataset size. How many disks and how large are the pools? It's
quite possible to eat up 24 GB v
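As a quick check of how much of that memory the ARC is actually using (assuming the standard illumos arcstats kstat):

  kstat -p zfs:0:arcstats:size     # current ARC size in bytes
  kstat -p zfs:0:arcstats:c_max    # configured ARC ceiling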
> And zpool status showed a lot of failed reads, and decided to drop all
> the disks on one of the two HBAs. Under oi_151a4, I am fairly certain
> these messages did not show up (there are none in /var/adm/messages.*,
> which has entries from June 11, the upgrade was on July 3). A zpool
> clear lat
The pool is 24 3 TB disks (23 Hitachi Deskstars and 1 Seagate), arranged
as 2 raidz2 groups; the group that dropped included the Seagate.
It is a backup for our other NFS server (conducted nightly via rsync),
and has a single gigabit connection, so it doesn't get used heavily,
and it doesn't need
> It is a backup for our other NFS server (conducted nightly via rsync),
OK, not the same thing. Out of interest, why don't you just use zfs
send/receive?
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 98013356
r...@karlsbakk.net
http://blogg.karlsbakk.net/
GPG Public key:
zfs send/receive over the network would require a little more work to set
up (managing snapshots manually, plus a netcat or ssh tunnel). In theory,
yes, it would be better for data integrity (though I am unclear as to
what it does if a transmission error does occur, since the
communication isn't 2-way, just
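A minimal sketch of that setup over an ssh tunnel (hypothetical pool, dataset, and host names, not from the thread):

  # take a new snapshot, then send the increment since the previous one
  zfs snapshot tank/data@2012-07-09
  zfs send -i tank/data@2012-07-08 tank/data@2012-07-09 | \
      ssh backuphost zfs receive -F backup/data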
On 07/09/12 13:17, Timothy Coalson wrote:
> I upgraded a machine to oi_151a5 from oi_151a4 last week, and when its
> weekly scrub rolled around, /var/adm/messages gathered a lot of these,
> in groups of dozens at a time:
>
> Jul 7 01:15:21 myelin2 scsi: [ID 107833 kern.warning] WARNING:
> /pci@0
Well, I don't have the same symptoms he does (his swap was almost entirely
used); mine is free:
tim@myelin2:/var/adm$ swap -lh
swapfile                  dev   swaplo  blocks  free
/dev/zvol/dsk/rpool/swap  96,2  4K      12G     12G
Though I am not logged into a desktop (it is sitting at the gdm
greeter).
> zfs send/receive over the network would require a little more work to set
> up (managing snapshots manually, plus a netcat or ssh tunnel). In theory,
> yes, it would be better for data integrity (though I am unclear as to
> what it does if a transmission error does occur, since the
> communication isn't 2-way, just
> there are configurable services for automatic snapshotting and zfs
> send/receive, so it should be quite doable.
>
> svc:/system/filesystem/zfs/auto-snapshot:daily
> svc:/system/filesystem/zfs/auto-snapshot:frequent
> svc:/system/filesystem/zfs/auto-snapshot:hourly
> svc:/system/filesystem/zfs/a
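A minimal sketch of wiring those services up (assuming a pool named tank; the instance names are the stock time-slider ones):

  svcadm enable svc:/system/filesystem/zfs/auto-snapshot:daily
  svcs -a | grep auto-snapshot          # confirm the instances are online
  zfs list -t snapshot -r tank | tail   # verify snapshots are being created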