Ted, it was indeed a memory issue which only occurred once the buffers
were filled to a certain degree. Memtest was getting red all over the
screen. Thanks again, and status set to invalid.
** Changed in: e2fsprogs (Ubuntu)
Status: New => Invalid
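For anyone hitting a similar crash: memtest86+ runs from boot, but a rough in-OS sanity check is possible with the separate memtester tool (the size and pass count below are just an example):

  # lock and exercise ~4 GiB of RAM for 3 passes; reported errors point at bad memory
  sudo apt-get install memtester
  sudo memtester 4096M 3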
Ted, thanks for taking the time to reply to my problem in such a prompt
manner.
- Using an old 2 TB drive instead of the 6 TB drive: GPF (so it's not related
  to the drives or the large capacity)
- mainline/v3.17.3 kernel: GPF
- Debian wheezy live CD: GPF
- mkfs.ext3 to a USB stick: worked flawlessly
Screenshot of GPF / KP
** Attachment added: "Screenshot of GPF / KP"
https://bugs.launchpad.net/ubuntu/+source/e2fsprogs/+bug/1393151/+attachment/4261854/+files/kp.jpg
** Description changed:
- Writing data to a 3 TB ext4-formatted logical volume on a 2x6 TB mdadm
- raid 1 mirror always results in a GPF within 1 minute of writing the
- data. Disks seem to be OK smart-wise.
+ mkfs.ext3 always causes a kernel panic on both of my WD 6 TB drives
+ while writing the inode tables.
More detailed SMART info:
ID# ATTRIBUTE_NAME       FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate  0x002f 200   200   051    Pre-fail Always  -           0
  3 Spin_Up_Time         0x0027 241   201   021    Pre-fail Always  -
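For reference, the attribute dump above is the kind of table smartctl prints per drive; the device name below is only an example and would need to match the actual array member:

  # print the vendor-specific SMART attribute table for one drive
  sudo smartctl -A /dev/sda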
Additional info:
A GPF also occurs when creating the filesystem on the LV with mkfs.ext3, while
it is writing the inode tables. Since mkfs.ext4 defers inode table
initialization, the GPF happens later there, once more data is written to the
LV (see the sketch below).
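As a minimal sketch (the LV path is hypothetical), forcing ext4 to initialize the inode tables and journal at format time should make the crash reproduce during mkfs, just like mkfs.ext3:

  # write inode tables and journal up front instead of lazily after first mount
  sudo mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/vg0/lv0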
** Description changed:
- Writing data to a 3 TB ext4-formatted logical volume on a 2x6 TB mdadm
- raid 1 mirror always results in a GPF within 1 minute of writing the
- data. Disks seem to be OK smart-wise.
-
- When the GPF occurs, the md1 raid volume gets degraded, the entire
+ Writing lots of
Public bug reported:
Writing data to a 3 TB ext4-formatted logical volume on a 2x6 TB mdadm
raid 1 mirror always results in a GPF within 1 minute of writing the
data. Disks seem to be OK smart-wise.
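For context, a rough outline of how such a setup can be reproduced; the device names, VG/LV names and write size are illustrative, not the exact commands used on this system:

  # mirror the two 6 TB drives, put LVM on top, carve out a 3 TB LV
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  pvcreate /dev/md1
  vgcreate vg0 /dev/md1
  lvcreate -L 3T -n data vg0
  mkfs.ext4 /dev/vg0/data
  mount /dev/vg0/data /mnt
  # sustained writes trigger the GPF within about a minute
  dd if=/dev/zero of=/mnt/testfile bs=1M count=100000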
When the GPF occurs, the md1 raid volume gets degraded, the entire
system becomes unresponsive and af