Sorry for the breakage. This should be fixed now.
On Mon, May 3, 2010 at 8:36 PM, FreeBSD Tinderbox wrote:
> TB --- 2010-05-04 02:41:37 - tinderbox 2.6 running on freebsd-legacy.sentex.ca
> TB --- 2010-05-04 02:41:37 - starting RELENG_6 tinderbox run for sparc64/sparc64
> TB --- 2010-0
TB --- 2010-05-04 02:41:37 - tinderbox 2.6 running on freebsd-legacy.sentex.ca
TB --- 2010-05-04 02:41:37 - starting RELENG_6 tinderbox run for sparc64/sparc64
TB --- 2010-05-04 02:41:37 - cleaning the object tree
TB --- 2010-05-04 02:41:49 - cvsupping the source tree
TB --- 2010-05-04 02:41:49 - /
TB --- 2010-05-04 02:38:04 - tinderbox 2.6 running on freebsd-legacy.sentex.ca
TB --- 2010-05-04 02:38:04 - starting RELENG_6 tinderbox run for i386/pc98
TB --- 2010-05-04 02:38:04 - cleaning the object tree
TB --- 2010-05-04 02:38:24 - cvsupping the source tree
TB --- 2010-05-04 02:38:24 - /usr/bi
TB --- 2010-05-04 01:23:08 - tinderbox 2.6 running on freebsd-legacy.sentex.ca
TB --- 2010-05-04 01:23:08 - starting RELENG_6 tinderbox run for amd64/amd64
TB --- 2010-05-04 01:23:08 - cleaning the object tree
TB --- 2010-05-04 01:23:30 - cvsupping the source tree
TB --- 2010-05-04 01:23:30 - /usr/
TB --- 2010-05-04 01:42:28 - tinderbox 2.6 running on freebsd-legacy.sentex.ca
TB --- 2010-05-04 01:42:28 - starting RELENG_6 tinderbox run for i386/i386
TB --- 2010-05-04 01:42:28 - cleaning the object tree
TB --- 2010-05-04 01:42:43 - cvsupping the source tree
TB --- 2010-05-04 01:42:43 - /usr/bi
On Mon, May 03, 2010 at 10:16:57PM -0400, Charles Sprickman wrote:
> Just some random data. I know that when I was reading about ZFS I
> came across some vague notion that ZFS wanted the entire drive to
> better deal with queueing; I'm not sure whether that was official Sun
> docs or some random blog, though.
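For what it's worth, the whole-disk versus slice question comes down to how the pool is created; a minimal sketch, assuming a spare disk at /dev/da1 (hypothetical device name):

    # whole disk -- ZFS labels the device itself
    zpool create tank da1

    # a GPT partition also works; the queueing/write-cache benefit
    # hinted at above is reported only for whole disks
    zpool create tank da1p1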
On Sun, 2 May 2010, Wes Morgan wrote:
On Sun, 2 May 2010, Eric Damien wrote:
Hello list.
I am taking my first steps with ZFS. In the past, I had two UFS
slices: one dedicated to the OS partitions and the second to data (/home,
etc.). I read on that it was possible to recreate that
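In ZFS the usual replacement for those slices is datasets rather than partitions; a minimal sketch, assuming a pool already created and named tank (a hypothetical name):

    # one dataset per old slice, each with its own mountpoint
    zfs create -o mountpoint=/home tank/home
    zfs create -o mountpoint=/data tank/data

    # per-dataset quotas replace fixed slice sizes
    zfs set quota=50G tank/home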
On Sun, May 02, 2010 at 09:40:13PM -0500, Bryce Edwards wrote:
> I've got a new Supermicro X58 system with an Intel Core i7 930 with 6
> GB of RAM that is not performing nearly as fast as it should in many ways
> (compiling, network transfers).
By the way, here is an interesting thread you might read -- yes
On Mon, May 03, 2010 at 11:57:28PM +0200, David DEMELIER wrote:
> 2010/5/3 David DEMELIER :
> > Hi,
> >
> > I just updated my 8.0-STABLE/amd64 today around 17h CEST, and it now
> > panics when I unplug my AC. The current process = 11 (idle: cpu1). Is
> > this related to cpufreq and related stu
On Sun, May 02, 2010 at 09:40:13PM -0500, Bryce Edwards wrote:
> I've got a new Supermicro X58 system with an Intel Core i7 930 with 6
> GB of RAM that is not performing nearly as fast as it should in many ways
> (compiling, network transfers). To give an example, it has been
> building the gcc44 por
On Mon, May 3, 2010 at 4:57 PM, David DEMELIER wrote:
> 2010/5/3 David DEMELIER :
>> Hi,
>>
>> I just updated my 8.0-STABLE/amd64 today around 17h CEST, and it now
>> panics when I unplug my AC. The current process = 11 (idle: cpu1). Is
>> this related to cpufreq and related stuff?
>>
>> It
On Mon, May 3, 2010 at 2:57 PM, David DEMELIER wrote:
> 2010/5/3 David DEMELIER :
>> Hi,
>>
>> I just updated my 8.0-STABLE/amd64 today around 17h CEST, and it now
>> panics when I unplug my AC. The current process = 11 (idle: cpu1). Is
>> this related to cpufreq and related stuff?
>>
>> It
Maybe this is a ridiculous question, but did you check whether your
CPU is running at full speed when you use powerd/cpufreq or another
power-saving feature? I ask because I had this problem, and I recall
that the same thing caught my attention at the beginning:
slow compilations. Basicall
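A quick check along those lines is to compare the current clock against the available levels; a minimal sketch, assuming cpufreq(4) is attached to cpu0:

    # current frequency vs. the levels cpufreq knows about
    sysctl dev.cpu.0.freq
    sysctl dev.cpu.0.freq_levels

    # if powerd is holding the clock down, stop it and re-run the benchmark
    /etc/rc.d/powerd stop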
2010/5/3 David DEMELIER :
> Hi,
>
> I just updated my 8.0-STABLE/amd64 today around 17h CEST, and it now
> panics when I unplug my AC. The current process = 11 (idle: cpu1). Is
> this related to cpufreq and related stuff?
>
> It also says "cannot dump. Device not defined or unavailable", so I ca
Hi,
I just updated my 8.0-STABLE/amd64 today around 17h CEST, and it now
panics when I unplug my AC. The current process = 11 (idle: cpu1). Is
this related to cpufreq and related stuff?
It also says "cannot dump. Device not defined or unavailable", so I can't
give you more info now.
Kind reg
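The "cannot dump. Device not defined or unavailable" message usually just means no dump device is configured; a minimal sketch, assuming the swap partition is /dev/ad4s1b (a hypothetical device):

    # point crash dumps at swap; savecore(8) collects them in /var/crash at boot
    echo 'dumpdev="/dev/ad4s1b"' >> /etc/rc.conf
    dumpon /dev/ad4s1b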
I have a 12 GB memory machine with an mpt controller in it, running a ZFS
raidz2 for (test) data storage. The system also has a ZFS mirror in place
for the OS, home directories, etc.
I manually failed one of the disks in the JBOD shelf and watched as the mpt
controller started logging errors.
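For reference, the same failure can be simulated administratively instead of pulling the disk; a minimal sketch, assuming a pool named tank with member da3 (hypothetical names):

    # take one member offline and confirm the pool degrades cleanly
    zpool offline tank da3
    zpool status tank

    # bring it back and let the resilver run
    zpool online tank da3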
I'm seeing the following panic several times a week. It happens when
a periodic script runs that does a "sysctl -a | grep | sed" pipeline
to fetch information we use for logging. I'm not sure if it panics in
the same OID every time, but a little investigation with KGDB on the
core file shows it
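As a workaround while the panic is chased down, asking for the exact OIDs avoids walking the whole sysctl tree; a minimal sketch, with the OID names as placeholders for whatever the script actually collects:

    # instead of: sysctl -a | grep ... | sed ...
    # -n prints bare values, one per line, for just the named OIDs
    sysctl -n kern.openfiles vm.loadavg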
I have tried both drives independently (two system drives currently
in a ZFS mirror), but the interrupts were something that caught my
attention as well. I haven't tried polling on the em
interface yet, but I still see interrupts like the ones you are seeing
(minus the em ones) when I'm just compili
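If you do try polling for comparison, it needs a kernel built with options DEVICE_POLLING plus a per-interface flag; a minimal sketch:

    # see where the interrupts actually go first
    vmstat -i

    # then flip em0 to polling and compare
    ifconfig em0 polling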
On 03.05.2010 13:01, Jeremy Chadwick wrote:
On Mon, May 03, 2010 at 12:41:50PM +0200, Giulio Ferro wrote:
NFS server, amd64 FreeBSD 8.0, recent (2 days ago)
This server has been running for several months without problems.
Beginning last week, however, I've been experiencing panics (about 1 per day
On Mon, May 03, 2010 at 12:41:50PM +0200, Giulio Ferro wrote:
> NFS server, amd64 FreeBSD 8.0, recent (2 days ago)
>
> This server has been running for several months without problems.
> Beginning last week, however, I've been experiencing panics (about 1 per day)
> with the error in the subject
>
> Curr
NFS server, amd64 FreeBSD 8.0, recent (2 days ago)
This server has been running for several months without problems.
Beginning last week, however, I've been experiencing panics (about 1 per day)
with the error in the subject.
Current settings:
vm.kmem_size_scale: 3
vm.kmem_size_max: 329853485875
vm.kme
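For comparison, the usual 8.0-era approach is to cap both kmem and the ARC explicitly in /boot/loader.conf; a minimal sketch, with the sizes as placeholders rather than recommendations:

    # /boot/loader.conf -- example values only
    vm.kmem_size="8G"
    vfs.zfs.arc_max="4G"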
On 26.04.2010 18:02, Julian Elischer wrote:
> On 4/26/10 1:11 AM, Stefan Esser wrote:
>> I debugged this problem and prepared a patch for discussion, which
>> later was committed by Max Laier (if memory serves me right). The
>> message was added in order to identify further situations where
>> n