Re: Outlook problems on Apple systems (macOS)

2020-09-08 Thread Sami Ketola


> On 8. Sep 2020, at 9.21, h...@cndns.com wrote:
> 
> Outlook version is 16.40

Latest I could find was 16.21 and it seems to work just fine.

Also, if Outlook does not like the extra info after the OK reply, you should 
probably open a bug with Microsoft for not being RFC compliant on this.

Sami




Re: Btrfs RAID-10 performance

2020-09-08 Thread Miloslav Hůla

Thanks for the tips!

On 07.09.2020 at 15:24, Scott Q. wrote:
1. I assume that's a 2U format, 24 bays. You only have 1 RAID card for 
all 24 disks? Granted you only have 16, but usually you should assign 1 
card per 8 drives. In our standard 2U chassis we have 3 HBAs, one per 8 
drives. Your backplane should support that.


Exactly. And what's the reason/bottleneck? PCIe or card throughput?


2. Add more drives


We can add 2 more drives, and we actually did yesterday, but we keep 
free slots to be able to replace drives with double-capacity ones.



3. Get a pci nvme ssd card and move the indexes/control/sieve files there.


It complicates the current backup and restore a little bit, but I'll 
probably try that.


Thank you,
Milo



On Monday, 07/09/2020 at 08:16 Miloslav Hůla wrote:

On 07.09.2020 at 12:43, Sami Ketola wrote:
 >> On 7. Sep 2020, at 12.38, Miloslav Hůla <miloslav.h...@gmail.com> wrote:
 >>
 >> Hello,
 >>
 >> I sent this into the Linux Kernel Btrfs mailing list and I got the
 >> reply: "RAID-1 would be preferable"
 >> (https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2...@lechevalier.se/T/).
 >> May I ask you for comments, as from people around Dovecot?
 >>
 >>
 >> We are using btrfs RAID-10 (/data, 4.7TB) on a physical
 >> Supermicro server with Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz and
 >> 125GB of RAM. We run 'btrfs scrub start -B -d /data' every Sunday as
 >> a cron task. It takes about 50 minutes to finish.
 >>
 >> # uname -a
 >> Linux imap 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20)
 >> x86_64 GNU/Linux
 >>
 >> RAID is a composition of 16 hard drives. The hard drives are connected
 >> via an AVAGO MegaRAID SAS 9361-8i as RAID-0 devices. All hard drives
 >> are SAS 2.5" 15k drives.
 >>
 >> Server serves as an IMAP server with Dovecot 2.2.27-3+deb9u6, 4104
 >> accounts, Mailbox format, LMTP delivery.
 >
 > does "Mailbox format" mean mbox?
 >
 > If so, then there is your bottleneck. mbox is the slowest
 > possible mailbox format there is.
 >
 > Sami

Sorry, no, it is a typo. We are using "Maildir".

"doveconf -a" attached

Milo


zlib errors after upgrading to 2.3.11.3

2020-09-08 Thread Robert Nowotny
Dear Aki,
I switched to "gz" now, since "zstd" also gave some errors on writing to
files.
I don't know if "xz" compression or "zstd" shredded my mdbox files, but I
lost 4 days of mail (a couple of thousand mails).
After restoring the backup (which was made after switching to version
2.3.11.3) I still have some broken mdbox files, but not too many.
Interestingly, it is always the "Sent" mailbox of a number of users.

I just cannot go back to 20.08.2020, before I updated to 2.3.11.3 - too
many emails would have been lost.

So - the current status is 2.3.11.3, with "gz" compression.

I force-resynced and re-indexed all the mdbox files (250 GB), but I still
have some broken ones - how can I fix those, please?

You stated:

(In theory you could leave the existing mails xz-compressed, but best
would be to re-compress everything via dsync so old mails can be read
when we eventually remove xz support.)


What is the optimal way to do that, especially without losing the index
for Outlook/Thunderbird and for third-party tools that rely on some
index/hash (I don't know how exactly that works)?


Sep  3 08:33:25 lxc-imap dovecot: imap(mpaul)<48684><2/9E5mKuAezAqKjk>:
Error: Mailbox Sent: UID=2171:
read(zlib(/home/vmail/virtualmailboxes/mpaul/storage/m.119)) failed:
read(/home/vmail/virtualmailboxes/mpaul/storage/m.119) failed: Broken pipe
(FETCH BODY[])
Sep  3 08:35:26 lxc-imap dovecot: imap(mpaul)<48089>:
Error: Mailbox Sent: UID=2171:
read(zlib(/home/vmail/virtualmailboxes/mpaul/storage/m.119)) failed:
read(/home/vmail/virtualmailboxes/mpaul/storage/m.119) failed: Broken pipe
(FETCH BODY[])
Sep  3 08:35:26 lxc-imap dovecot: imap(mpaul)<49228>:
Error: Mailbox Sent: UID=2171:
read(zlib(/home/vmail/virtualmailboxes/mpaul/storage/m.119)) failed:
read(/home/vmail/virtualmailboxes/mpaul/storage/m.119) failed: Broken pipe
(FETCH BODY[])
Sep  3 08:35:26 lxc-imap dovecot: imap(mpaul)<49229>:
Error: Mailbox Sent: UID=2171:
read(zlib(/home/vmail/virtualmailboxes/mpaul/storage/m.119)) failed:
read(/home/vmail/virtualmailboxes/mpaul/storage/m.119) failed: Broken pipe
(FETCH BODY[])
Sep  3 08:35:27 lxc-imap dovecot: imap(mpaul)<49230>:
Error: Mailbox Sent: UID=2171:
read(zlib(/home/vmail/virtualmailboxes/mpaul/storage/m.119)) failed:
read(/home/vmail/virtualmailboxes/mpaul/storage/m.119) failed: Broken pipe
(FETCH BODY[])
...
Sep  3 09:04:44 lxc-imap dovecot:
imap(cpotzinger)<49040>: Error: Mailbox Sent: UID=29534:
read(zlib(/home/vmail/virtualmailboxes/cpotzinger/storage/m.4703)) failed:
read(/home/vmail/virtualmailboxes/cpotzinger/storage/m.4703) failed: Broken
pipe (FETCH BODY[])
Sep  3 09:04:51 lxc-imap dovecot:
imap(cpotzinger)<54382>: Error: Mailbox Sent: UID=29534:
read(zlib(/home/vmail/virtualmailboxes/cpotzinger/storage/m.4703)) failed:
read(/home/vmail/virtualmailboxes/cpotzinger/storage/m.4703) failed: Broken
pipe (FETCH BODY[])
Sep  3 09:04:59 lxc-imap dovecot:
imap(cpotzinger)<54396>: Error: Mailbox Sent: UID=29534:
read(zlib(/home/vmail/virtualmailboxes/cpotzinger/storage/m.4703)) failed:
read(/home/vmail/virtualmailboxes/cpotzinger/storage/m.4703) failed: Broken
pipe (FETCH BODY[])
Sep  3 09:05:06 lxc-imap dovecot:
imap(cpotzinger)<54409>: Error: Mailbox Sent: UID=29534:
read(zlib(/home/vmail/virtualmailboxes/cpotzinger/storage/m.4703)) failed:
read(/home/vmail/virtualmailboxes/cpotzinger/storage/m.4703) failed: Broken
pipe (FETCH BODY[])
Sep  3 09:05:12 lxc-imap dovecot:
imap(cpotzinger)<54422>: Error: Mailbox Sent: UID=29534:
read(zlib(/home/vmail/virtualmailboxes/cpotzinger/storage/m.4703)) failed:
read(/home/vmail/virtualmailboxes/cpotzinger/storage/m.4703) failed: Broken
pipe (FETCH BODY[])
...
Sep  3 09:08:58 lxc-imap dovecot:
imap(cpotzinger)<54867>: Error: Mailbox Sent: UID=29534:
read(zlib(/home/vmail/virtualmailboxes/cpotzinger/storage/m.4703)) failed:
read(/home/vmail/virtualmailboxes/cpotzinger/storage/m.4703) failed: Broken
pipe (FETCH BODY[])
Sep  3 09:19:15 lxc-imap dovecot: imap(mpaul)<56030>:
Error: Mailbox Sent: UID=2171:
read(zlib(/home/vmail/virtualmailboxes/mpaul/storage/m.119)) failed:
read(/home/vmail/virtualmailboxes/mpaul/storage/m.119) failed: Broken pipe
(FETCH BODY[])
Sep  3 09:29:41 lxc-imap dovecot:
imap(cpotzinger)<57274>: Error: Mailbox Sent: UID=29534:
read(zlib(/home/vmail/virtualmailboxes/cpotzinger/storage/m.4703)) failed:
read(/home/vmail/virtualmailboxes/cpotzinger/storage/m.4703) failed: Broken
pipe (FETCH BODY[])
Sep  3 09:29:47 lxc-imap dovecot:
imap(cpotzinger)<57320>: Error: Mailbox Sent: UID=29534:
read(zlib(/home/vmail/virtualmailboxes/cpotzinger/storage/m.4703)) failed:
read(/home/vmail/virtualmailboxes/cpotzinger/storage/m.4703) failed: Broken
pipe (FETCH BODY[])
Sep  3 09:29:54 lxc-imap dovecot:
imap(cpotzinger)<57333>: Error: Mailbox Sent: UID=29534:
read(zlib(/home/vmail/virtualmailboxes/cpotzinger/storage/m.4703)) failed:
read(/home/vmail/virtualmailboxes/cpotzinger/storage/m.4703) failed: Broken
pipe (FETCH BODY[])
Sep  3 09:30:01 lxc-imap dovecot:
imap(cpotzinger)<57346>: Error: Mailbo

Re: zlib errors after upgrading to 2.3.11.3

2020-09-08 Thread Timo Sirainen
On 8. Sep 2020, at 12.35, Robert Nowotny  wrote:
> 
> 
> Dear Aki,
> I switched to "gz" now, since "zstd" also gave some errors on writing to 
> files.

What kind of errors?

> I don't know if "xz" compression or "zstd" shredded my mdbox files, but I 
> lost 4 days of mail (a couple of thousand mails).
> After restoring the backup (which was made after switching to version 
> 2.3.11.3) I still have some broken mdbox files, but not too many.
> Interestingly, it is always the "Sent" mailbox of a number of users.
> 
> I just cannot go back to 20.08.2020, before I updated to 2.3.11.3 - too many 
> emails would have been lost.
> 
> So - the current status is 2.3.11.3, with "gz" compression.
> 
> I force-resynced and re-indexed all the mdbox files (250 GB), but I still 
> have some broken ones - how can I fix those, please?

Note that force-resync doesn't read through the mails to verify whether they 
are readable. It just verifies that the indexes and metadata are ok. The only 
way to verify that all mails are readable is to actually try to read all of 
their text (e.g. doveadm fetch -u user text mailbox Sent > /dev/null).
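
For example, a rough loop that checks every folder of one user this way (the 
username and log path here are only illustrative, and folder names are read 
line by line so names with spaces survive):

# Try to read the full text of every mail in every folder of the user;
# log each folder where a read fails.
doveadm mailbox list -u mpaul | while read -r mbox; do
    doveadm fetch -u mpaul text mailbox "$mbox" > /dev/null ||
        echo "unreadable mail(s) in $mbox" >> /tmp/broken-folders.log
done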

> You stated:
> 
> (In theory you could leave the existing mails xz-compressed, but best 
> would be to re-compress everything via dsync so old mails can be read 
> when we eventually remove xz support.)
> 
> What is the optimal way to do that, especially without losing the index for 
> Outlook/Thunderbird and for third-party tools that rely on some index/hash 
> (I don't know how exactly that works)?

IMAP clients use the folder's UIDVALIDITY and message UID numbers to preserve 
caches. Using dsync preserves these. See the doveadm-sync man page. It's also in 
https://wiki.dovecot.org/Tools/Doveadm/Sync 
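
For instance, to spot-check that a folder kept its UIDVALIDITY after the copy 
(the user and folder names are only illustrative):

doveadm mailbox status -u mpaul uidvalidity Sent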


From your previous mail:

> sudo doveadm backup -D -u "${mailbox_username}" 
> "mdbox:/home/vmail/virtualmailboxes/${mailbox_username}_backup"
> sudo service dovecot stop
> sudo mv "/home/vmail/virtualmailboxes/${mailbox_username}" 
> "/home/vmail/virtualmailboxes/${mailbox_username}_original" 
> sudo mv "/home/vmail/virtualmailboxes/${mailbox_username}_backup" 
> "/home/vmail/virtualmailboxes/${mailbox_username}" 
> sudo service dovecot start

The problem is that in your config you have:

mail_location = mdbox:/home/vmail/virtualmailboxes/%n:DIRNAME=dbox-Mails-nocollision-random-KOKxNmMJkEBeCitBhFwS

You need to preserve the DIRNAME. So:

sudo doveadm backup -D -u "${mailbox_username}" \
  "mdbox:/home/vmail/virtualmailboxes/${mailbox_username}_backup:DIRNAME=dbox-Mails-nocollision-random-KOKxNmMJkEBeCitBhFwS"
sudo service dovecot stop
sudo mv "/home/vmail/virtualmailboxes/${mailbox_username}" \
  "/home/vmail/virtualmailboxes/${mailbox_username}_original"
sudo mv "/home/vmail/virtualmailboxes/${mailbox_username}_backup" \
  "/home/vmail/virtualmailboxes/${mailbox_username}"
sudo service dovecot start

However, the dsync will likely fail as well, because it can't read some of the 
mails. So you'll need to fix those first in any case. That's a bit trickier:

> Sep  3 08:33:25 lxc-imap dovecot: imap(mpaul)<48684><2/9E5mKuAezAqKjk>: 
> Error: Mailbox Sent: UID=2171: 
> read(zlib(/home/vmail/virtualmailboxes/mpaul/storage/m.119)) failed: 
> read(/home/vmail/virtualmailboxes/mpaul/storage/m.119) failed: Broken pipe 
> (FETCH BODY[])

For example, here the best fix would be to try to preserve the mail as well as 
possible:

a) Preserve as much of the text as possible and expunge the broken mail:

doveadm fetch -u mpaul text mailbox Sent uid 2171 | tail -n +2 > msg.broken
doveadm expunge -u mpaul mailbox Sent uid 2171
doveadm save -u mpaul -m Sent < msg.broken

b) You could also see if the issue is that Dovecot just can't read a properly 
compressed email, or if the issue was that it wrote broken emails. You can 
extract the raw compressed mail with:

doveadm -o mail_plugins=virtual fetch -u mpaul text mailbox Sent uid 2171 | 
tail -n +2 > msg.broken
cat msg.broken | xz -d
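
If the xz decompression fails, a reasonably recent file(1) can tell you which 
format the extracted blob actually is, since it recognizes the gzip, xz and 
zstd magic bytes:

file msg.broken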

Also this way you can see if the broken mail is actually xz or zstd or zlib. It 
would be nice to know if there are any zstd or zlib compressed mails that have 
problems. We did a lot of stress testing with zstd and also with xz, but 
haven't been able to reproduce any problems. It's also strange that it says 
that the error is "Broken pipe" - that doesn't indicate that the mail is 
corrupted, but that something stranger is going on. So perhaps you don't 
actually have any mails written as corrupted, and Dovecot is just somehow 
having trouble reading them.



Re: zlib errors after upgrading to 2.3.11.3

2020-09-08 Thread Timo Sirainen
On 8. Sep 2020, at 12.35, Robert Nowotny wrote:

>> Dear Aki,
>> I switched to "gz" now, since "zstd" also gave some errors on writing
>> to files.
>
> What kind of errors?

Probably this?:

Panic: file ostream.c: line 287 (o_stream_sendv_int): assertion failed: (!stream->blocking)

>> Sep  3 08:33:25 lxc-imap dovecot: imap(mpaul)<48684><2/9E5mKuAezAqKjk>:
>> Error: Mailbox Sent: UID=2171:
>> read(zlib(/home/vmail/virtualmailboxes/mpaul/storage/m.119)) failed:
>> read(/home/vmail/virtualmailboxes/mpaul/storage/m.119) failed: Broken
>> pipe (FETCH BODY[])
>
> Also this way you can see if the broken mail is actually xz or zstd or
> zlib. It would be nice to know if there are any zstd or zlib compressed
> mails that have problems. We did a lot of stress testing with zstd and
> also with xz, but haven't been able to reproduce any problems. It's also
> strange that it says that the error is "Broken pipe" - that doesn't
> indicate that the mail is corrupted, but that something stranger is
> going on. So perhaps you don't actually have any mails written as
> corrupted, and Dovecot is just somehow having trouble reading them.

I managed to reproduce this. The files aren't corrupted, it's just that
reading is failing. The attached patch should fix the xz code and should
make your files readable again.

diff
Description: Binary data


Re: zlib errors after upgrading to 2.3.11.3

2020-09-08 Thread Timo Sirainen
On 8. Sep 2020, at 16.28, Timo Sirainen  wrote:
> 
>>> Sep  3 08:33:25 lxc-imap dovecot: imap(mpaul)<48684><2/9E5mKuAezAqKjk>: 
>>> Error: Mailbox Sent: UID=2171: 
>>> read(zlib(/home/vmail/virtualmailboxes/mpaul/storage/m.119)) failed: 
>>> read(/home/vmail/virtualmailboxes/mpaul/storage/m.119) failed: Broken pipe 
>>> (FETCH BODY[])
>> 
>> Also this way you can see if the broken mail is actually xz or zstd or zlib. 
>> It would be nice to know if there are any zstd or zlib compressed mails that 
>> have problems. We did a lot of stress testing with zstd and also with xz, 
>> but haven't been able to reproduce any problems. It's also strange that it 
>> says that the error is "Broken pipe" - that doesn't indicate that the mail 
>> is corrupted, but that something stranger is going on. So perhaps you don't 
>> actually have any mails written as corrupted, and Dovecot is just somehow 
>> having trouble reading them.
> 
> I managed to reproduce this. The files aren't corrupted, it's just that 
> reading is failing. The attached patch should fix the xz code and should make 
> your files readable again.

Actually, downgrading to v2.3.10 should have fixed this as well. And v2.3.10 
doesn't even have this same error message. So I'm not sure why it didn't fix 
it previously for you. Maybe there is some other issue as well.



Leaked Events

2020-09-08 Thread bobby
I am going through my maillog file and noticed these events:

Sep  8 15:56:17 mail dovecot[294462]: lmtp(391257): Warning: Event
0x55db5bc41200 leaked (parent=0x55db5bc3b830): mail-storage-service.c:1325
Sep  8 15:56:17 mail dovecot[294462]: lmtp(391257): Warning: Event
0x55db5bc3b830 leaked (parent=0x55db5bc34450): smtp-server-recipient.c:38
Sep  8 15:56:17 mail dovecot[294462]: lmtp(391257): Warning: Event
0x55db5bc34450 leaked (parent=0x55db5bc34130): connection.c:523
Sep  8 15:56:17 mail dovecot[294462]: lmtp(391257): Warning: Event
0x55db5bc34130 leaked (parent=0x55db5bc20ce0): smtp-server-connection.c:773
Sep  8 15:56:17 mail dovecot[294462]: lmtp(391257): Warning: Event
0x55db5bc20ce0 leaked (parent=(nil)): lmtp-client.c:164

Is this something to be concerned about?


Re: Leaked Events

2020-09-08 Thread Stephan Bosch

On 08/09/2020 18:01, bobby wrote:

I am going through my maillog file and noticed these events:

Sep  8 15:56:17 mail dovecot[294462]: lmtp(391257): Warning: Event 
0x55db5bc41200 leaked (parent=0x55db5bc3b830): mail-storage-service.c:1325
Sep  8 15:56:17 mail dovecot[294462]: lmtp(391257): Warning: Event 
0x55db5bc3b830 leaked (parent=0x55db5bc34450): smtp-server-recipient.c:38
Sep  8 15:56:17 mail dovecot[294462]: lmtp(391257): Warning: Event 
0x55db5bc34450 leaked (parent=0x55db5bc34130): connection.c:523
Sep  8 15:56:17 mail dovecot[294462]: lmtp(391257): Warning: Event 
0x55db5bc34130 leaked (parent=0x55db5bc20ce0): 
smtp-server-connection.c:773
Sep  8 15:56:17 mail dovecot[294462]: lmtp(391257): Warning: Event 
0x55db5bc20ce0 leaked (parent=(nil)): lmtp-client.c:164


Is this something to be concerned about?


Usually not.

But it is a bug. What version is this?

Regards,

Stephan.


Re: Btrfs RAID-10 performance

2020-09-08 Thread John Stoffel
> "Miloslav" == Miloslav Hůla  writes:

Miloslav> Hello,
Miloslav> I sent this into the Linux Kernel Btrfs mailing list and I got the reply: 
Miloslav> "RAID-1 would be preferable" 
Miloslav> (https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2...@lechevalier.se/T/).
Miloslav> May I ask you for comments, as from people around Dovecot?

Miloslav> We are using btrfs RAID-10 (/data, 4.7TB) on a physical Supermicro 
Miloslav> server with Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz and 125GB of RAM. 
Miloslav> We run 'btrfs scrub start -B -d /data' every Sunday as a cron task. It 
Miloslav> takes about 50 minutes to finish.

Miloslav> # uname -a
Miloslav> Linux imap 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20) x86_64 
Miloslav> GNU/Linux

Miloslav> RAID is a composition of 16 hard drives. The hard drives are connected via 
Miloslav> an AVAGO MegaRAID SAS 9361-8i as RAID-0 devices. All hard drives are SAS 
Miloslav> 2.5" 15k drives.

Can you post the output of "cat /proc/mdstat"? Or, since you say you're
using btrfs, are you using its own RAID-0 setup?  If so, please post
the output of 'btrfs device stats' or whatever the command is you use to
view layout info.
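
For reference, these btrfs commands show the device layout, RAID profile and 
per-device error counters (assuming /data is the mount point):

# Devices and space usage of the filesystem
btrfs filesystem show /data
btrfs filesystem usage /data
# Per-device I/O and corruption error counters
btrfs device stats /data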

Miloslav> Server serves as an IMAP server with Dovecot 2.2.27-3+deb9u6, 4104 accounts, 
Miloslav> Mailbox format, LMTP delivery.

How often are these accounts hitting the server?

Miloslav> We run 'rsync' to a remote NAS daily. It takes about 6.5 hours to finish, 
Miloslav> 12'265'387 files last night.

That's sucky.  So basically you're hitting the drives hard with
random IOPS and you're probably running out of performance.  How much
space are you using on the filesystem?

And why not use btrfs send to ship off snapshots instead of using
rsync?  I'm sure that would be an improvement...
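
A rough sketch of what that could look like (the snapshot names, NAS hostname 
and target path are only illustrative):

# Take a read-only snapshot and stream it to the NAS
btrfs subvolume snapshot -r /data /data/.snap-today
btrfs send /data/.snap-today | ssh nas btrfs receive /backup
# Later runs send only the delta against the previous snapshot
btrfs send -p /data/.snap-yesterday /data/.snap-today | ssh nas btrfs receive /backup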

Miloslav> In the last half year, we ran into performance
Miloslav> troubles. Server load grows up to 30 in rush hours, due to
Miloslav> IO waits. We tried to attach more hard drives (the 838G ones
Miloslav> in the list below) and increase free space by rebalancing. I
Miloslav> think it helped a little bit, but not so dramatically.

If you're IOPS-bound, but not space-bound, then you *really* want to
get an SSD in there for the indexes and such.  Basically the stuff
that gets written/read all the time no matter what, but which
isn't large in terms of space.
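
In Dovecot terms that can be as simple as pointing the index files at an
SSD-backed path with the INDEX parameter of mail_location (the /ssd path
here is only illustrative):

mail_location = maildir:~/Maildir:INDEX=/ssd/indexes/%u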

Also, adding another controller card or two would probably help
spread the load across more PCI channels, and reduce contention
on the SATA/SAS bus as well.

Miloslav> Is this a reasonable setup and use case for btrfs RAID-10?
Miloslav> If so, are there some recommendations to achieve better
Miloslav> performance?

1. Move HOT data to an SSD-based RAID-1 pair, on a separate
   controller.
2. Add more controllers, which also means you're more redundant in
   case one controller fails.
3. Clone the system and put the Dovecot IMAP director in front of the
   setup (a minimal sketch follows this list).
4. Stop using rsync for copying to your DR site; use btrfs snapshot
   send/receive, or whatever the commands are.
5. Check which Dovecot backend you're using and think about moving to
   one which doesn't involve nearly as many files.
6. Find out who your biggest users are, in terms of emails, and move
   them to SSDs if step 1 is too hard to do at first.
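
For item 3, the core director settings are just these two lists (the IP
addresses are only illustrative; a full setup also needs the director
service and login proxying configured, see the Dovecot director docs):

# Ring of director proxies, and the backend mail servers they balance
director_servers = 10.0.0.1 10.0.0.2
director_mail_servers = 10.0.0.10 10.0.0.11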


Can you also grab some 'iostat -dhm 30 60'  output, which is 30
minutes of data over 30 second intervals?  That should help you narrow
down which (if any) disk is your hotspot.

It's not clear to me if you have one big btrfs filesystem, or a bunch
of smaller ones stitched together.  In any case, it should be very easy
to get better performance here.

I think someone else mentioned that you should look at your dovecot
backend, and you should move to the fastest one you can find.

Good luck!
John


Miloslav> # megaclisas-status
Miloslav> -- Controller information --
Miloslav> -- ID | H/W Model                  | RAM    | Temp | BBU  | Firmware
Miloslav> c0    | AVAGO MegaRAID SAS 9361-8i | 1024MB | 72C  | Good | FW: 24.16.0-0082

Miloslav> -- Array information --
Miloslav> -- ID | Type   | Size | Strpsz | Flags | DskCache | Status  | OS Path  | CacheCade | InProgress
Miloslav> c0u0  | RAID-0 | 838G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sdq | None      | None
Miloslav> c0u1  | RAID-0 | 558G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sda | None      | None
Miloslav> c0u2  | RAID-0 | 558G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sdb | None      | None
Miloslav> c0u3  | RAID-0 | 558G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sdc | None      | None
Miloslav> c0u4  | RAID-0 | 558G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sdd | None      | None
Miloslav> c0u5  | RAID-0 | 558G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sde | None      | None
Miloslav> c0u6  | RAID-0 | 558G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sdf | None      | None
Milosl