On 06/05/2010, at 12:33, Steve Polyack wrote:
> It may not be something you can try on a production system, but if you can
> experiment, it's worth a shot. Note that your device names WILL change to
> adaX instead of adX. I would definitely recommend you label the disks with glabel(8) and create
> the zpool/vdev
Oops, forgot about the lem case, sorry, and here I made lem so I wouldn't
have to touch that code, lol :)
As for WOL not working at all, that's something I'll have to check on.
Jack
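A minimal sketch of the glabel(8) approach suggested in the quote above, assuming
hypothetical disks ada0/ada1 and a pool named "tank" (names not taken from the thread);
the point is that the pool is built on the stable /dev/label/* names, so it keeps
importing cleanly even if the adaX numbering changes:

  # label each raw disk once, before it carries data (glabel stores its metadata in the last sector)
  glabel label disk0 /dev/ada0
  glabel label disk1 /dev/ada1
  # create the mirror on the label names instead of the adaX device names
  zpool create tank mirror label/disk0 label/disk1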
On Wed, May 5, 2010 at 2:19 PM, Harald Schmalzbauer <h.schmalzba...@omnilan.de> wrote:
> Jack Vogel wrote on
Jack Vogel wrote on 27.04.2010 23:58 (localtime):
Thanks Harald,
I have already been made aware of this; it's due to the broadcast WOL being
enabled. I will be fixing the problem shortly. Sorry for the inconvenience.
Hello Jack,
I saw your RELENG_8 change and recompiled a kernel today. It
Would you be so kind as to try reverting this patch?
I'm just guessing.
You have to pass the -R flag to patch(1) to back the change out:
===================================================================
--- head/sys/dev/acpica/acpi_acad.c 2009/06/05 18:44:36 193530
+++ head/sys/dev/acpica/acpi_acad.c 2009/09/30 17:07:49 197649
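For example, something like this should back the change out (a sketch; the file name
acpi_acad.diff is an assumption, and -p1 strips the leading "head/" path component so
the paths resolve under /usr/src):

  cd /usr/src
  patch -R -p1 < acpi_acad.diff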
Hi all
When looking at the size of a pool, this information can be obtained from
both zpool list and zfs list:
> $ zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank 5.69T 982G 36.5K /tank
> $ zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
t
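A few commands that help compare the two views (a sketch; the pool name "tank" is taken
from the listing above, and the property lists are just the ones relevant here):

  zpool list tank                              # pool-level summary
  zfs list -o name,used,available tank         # dataset-level space accounting
  zfs get -r used,available,referenced tank    # the same numbers, per dataset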
Harald Schmalzbauer wrote on 05.05.2010 14:41 (localtime):
Hello,
one drive of my mirror failed today, but 'zpool status' shows it "online".
Every process using a ZFS mount hangs. Also 'zpool offline /dev/ad1'
hangs infinitely.
...
Sorry, I made an error with zpool create. Somehow the little
I'm wondering how geom_sched influences soft-update consistency.
To my understanding it is very important to SU that the file system
controls the ordering of writes. Because geom_sched is transparent,
i.e. UFS does not know about access scheduling, I'm afraid that
the use of geom_sched would endanger m
Hello,
one drive of my mirror failed today, but 'zpool status' shows it "online".
Every process using a ZFS mount hangs. Also 'zpool offline /dev/ad1'
hangs infinitely.
Here's the dmesg of the failing (and correctly detached) device:
ad1: TIMEOUT - FLUSHCACHE48 retrying (1 retry left)
ata3: por
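For reference, the usual sequence for a failed mirror member looks roughly like this
(a sketch; the pool name "tank" and the replacement disk ad3 are assumptions, and note
that zpool offline expects the pool name before the device):

  zpool status tank            # confirm which member is faulted
  zpool offline tank ad1       # take the bad disk out of the mirror
  # after physically swapping the drive:
  zpool replace tank ad1 ad3   # resilver onto the new disk
  zpool status tank            # watch the resilver progress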
Please try this patch:
Index: acpi_cpu.c
===================================================================
--- acpi_cpu.c (revision 207322)
+++ acpi_cpu.c (working copy)
@@ -997,12 +997,12 @@
if (notify != ACPI_NOTIFY_CX_STATES)
return;
+ACPI_SERIAL_BEGIN(cpu);
/* Updat
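To test it, something along these lines should do (a sketch; the file name acpi_cpu.diff
and the GENERIC kernel config are assumptions):

  cd /usr/src/sys/dev/acpica
  patch < acpi_cpu.diff        # hunk paths are relative to this directory
  cd /usr/src
  make buildkernel KERNCONF=GENERIC && make installkernel KERNCONF=GENERIC
  shutdown -r now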
On 05.05.2010 11:11, Simun Mikecin wrote:
----- Original Message -----
I'm really astounded at how unstable zfs is, it's causing me a lot
of problems.
Why isn't it stated in the handbook that zfs isn't up to production yet?
Why people responsible for
On 05.05.2010 09:52, Jeremy Chadwick wrote:
Nope, it's happened again... Now I've tried to raise vm.kmem_size to 6G...
Did you set both vm.kmem_size and vfs.zfs.arc_max, setting the latter to
something *less* than vm.kmem_size?
Yes.
After your suggestion, I set
vfs.zfs.arc_max: 375809638
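For what it's worth, these tunables normally go into /boot/loader.conf; a sketch along
the lines of Jeremy's advice (the values are only illustrative, the point is that
vfs.zfs.arc_max stays well below vm.kmem_size):

  # /boot/loader.conf
  vm.kmem_size="6G"
  vm.kmem_size_max="6G"
  vfs.zfs.arc_max="4G"    # keep the ARC cap comfortably below vm.kmem_size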
On Wed, May 05, 2010 at 08:32:03AM +0200, Giulio Ferro wrote:
> Giulio Ferro wrote:
> >Thanks, I'll try these settings.
> >
> >I'll keep you posted.
>
> Nope, it's happened again... Now I've tried to raise vm.kmem_size to 6G...
Did you set both vm.kmem_size and vfs.zfs.arc_max, setting the latter
On Wed, May 05, 2010 at 01:19:45AM +0100, Rui Paulo wrote:
> On 4 May 2010, at 21:38, Kostik Belousov wrote:
>
> > On Tue, May 04, 2010 at 10:27:23PM +0200, David DEMELIER wrote:
> >> 2010/5/4 Kostik Belousov :
> >>> On Tue, May 04, 2010 at 06:35:52PM +0200, David DEMELIER wrote:
> Good news