[CentOS] firefox-29.0-5.1.el6

2014-05-17 Thread Νικόλαος Γεωργόπουλος
New build of Firefox (v. 29.0), built with:
1) devtools-2 (http://people.centos.org/tru/devtools-2/readme)
2) python27 from SCL
   (http://ftp.scientificlinux.org/linux/scientific/6.5/i386/external_products/softwarecollections/)
3) icu-last-50.1.2 from remi (http://rpms.famillecollet.com/SRPMS/)
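For anyone wanting to reproduce the build, roughly how the SCL toolchains
would be pulled in (the collection names come from the repos above, but the
exact BuildRequires and the icu -devel package name are assumptions, not
taken from the spec):

  yum install devtoolset-2-gcc-c++ devtoolset-2-binutils python27 libicu-last-devel
  # rebuild the SRPM with both collections active in the build shell
  scl enable devtoolset-2 python27 'rpmbuild --rebuild firefox-29.0-5.1.el6.src.rpm'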

firefox-29.0-5.1.el6.i686.rpm
https://drive.google.com/uc?id=0B9RlkKQB1POSOXk1OG5KODJNWFk&export=download

libicu-last-50.1.2-10.el6.i686.rpm
https://drive.google.com/file/d/0B9RlkKQB1POSNW9YaWxBVWR6UVk/edit

Sources:

firefox-29.0-5.1.el6.src.rpm
https://drive.google.com/file/d/0B9RlkKQB1POSems0VXNIWXVuSjg/edit?usp=sharing

icu-last-50.1.2-10.remi.src.rpm
https://drive.google.com/file/d/0B9RlkKQB1POSY0RJQTFQWlk0N3M/edit?usp=sharing

Waiting for comments
ngeorgop
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Large file system idea

2014-05-17 Thread Steve Thompson
This idea is intriguing...

Suppose one has a set of file servers called A, B, C, D, and so forth, all 
running CentOS 6.5 64-bit, all being interconnected with 10GbE. These file 
servers can be divided into identical pairs, so A is the same 
configuration (disks, processors, etc) as B, C the same as D, and so forth 
(because this is what I have; there are ten servers in all). Each file 
server has four Xeon 3GHz processors and 16GB memory. File server A acts 
as an iscsi target for logical volumes A1, A2,...An, and file server B 
acts as an iscsi target for logical volumes B1, B2,...Bn, where each LVM 
volume is 10 TB in size (a RAID-5 set of six 2TB NL-SAS disks). There are 
no file systems directly built on any of the LVM volumes. The two members of a 
server pair (A,B) are in different cabinets (albeit in the same machine 
room), are on different power circuits, and have UPS protection.
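(A minimal sketch of one target-side export, with made-up VG, LV and IQN
names; in practice one would script this per volume:)

  # on file server A: carve a 10 TB LV and export it via tgtd (scsi-target-utils)
  lvcreate -L 10T -n A1 vg_A

  # /etc/tgt/targets.conf
  <target iqn.2014-05.com.example:A.A1>
      backing-store /dev/vg_A/A1
      initiator-address 10.0.0.100    # allow only S to log in
  </target>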

A server system called S (which has six processors and 48 GB memory, and 
is not one of the file servers) acts as iscsi initiator for all targets. 
On S, A1 and B1 are combined into the software RAID-1 volume /dev/md101. 
Similarly, A2 and B2 are combined into /dev/md102, and so forth for as 
many target pairs as one has. The initial sync of /dev/md101 takes about 6 
hours, with the sync speed being around 400 MB/sec for a 10TB volume. I 
realize that only half of the 10-gig bandwidth is available while writing, 
since the data is being written twice.
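(The initiator and mirror side on S would look roughly like this; portal
addresses and IQNs are illustrative:)

  # discover and log in to the targets on A and B
  iscsiadm -m discovery -t sendtargets -p 10.0.0.1
  iscsiadm -m discovery -t sendtargets -p 10.0.0.2
  iscsiadm -m node --login

  # mirror A1 and B1; /dev/disk/by-path names are stable across reboots
  mdadm --create /dev/md101 --level=1 --raid-devices=2 \
      /dev/disk/by-path/ip-10.0.0.1:3260-iscsi-iqn.2014-05.com.example:A.A1-lun-1 \
      /dev/disk/by-path/ip-10.0.0.2:3260-iscsi-iqn.2014-05.com.example:B.B1-lun-1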

All of the /dev/md10X volumes are LVM PVs and are members of the same 
volume group, and there is one logical volume that occupies the entire 
volume group. An XFS file system (-i size=512, inode64) is built on top of 
this logical volume, and S NFS-exports that to the world (an HPC cluster 
of about 200 systems). In my case, the size of the resulting file system 
will ultimately be around 80 TB. The I/O performance of the XFS file 
system is excellent, and far exceeds that of the equivalent file systems 
built with packages such as MooseFS and GlusterFS: I get about 350 MB/sec 
write speed through the file system, and 
up to 800 MB/sec read.
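(The LVM/XFS/NFS stacking described above, sketched with hypothetical names
and export options:)

  pvcreate /dev/md101 /dev/md102
  vgcreate vg_big /dev/md101 /dev/md102        # one VG across all the mirrors
  lvcreate -l 100%FREE -n lv_big vg_big        # one LV over the whole VG
  mkfs.xfs -i size=512 /dev/vg_big/lv_big
  mount -o inode64 /dev/vg_big/lv_big /export/big

  # /etc/exports entry (client range and options are illustrative), then:
  #   /export/big  10.0.0.0/16(rw,async,no_root_squash)
  exportfs -ra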

I have built something like this, and by performing tests such as sending 
a SIGKILL to one of the tgtd's, I have been unable to kill access to the 
file system. Obviously one has to manually intervene on the return of the 
tgtd in order to fail/hot-remove/hot-add the relevant target(s) to the md 
device. Presumably this will be made easier by using persistent device 
names for the targets on S.
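(The manual recovery on S is roughly the following, with illustrative device
names:)

  # once A's tgtd is back and the iscsi session has been re-established
  mdadm /dev/md101 --fail /dev/sdq --remove /dev/sdq    # drop the stale member
  mdadm /dev/md101 --add \
      /dev/disk/by-path/ip-10.0.0.1:3260-iscsi-iqn.2014-05.com.example:A.A1-lun-1
  cat /proc/mdstat                                       # watch the resync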

One could probably expand this to supplement the server S with a second 
server T to allow the possibility of failover of the service should S 
croak. I haven't tackled that part yet.

So, what failure scenarios can take out the entire file system, assuming 
that both members of a pair (A,B) or (C,D) don't go down at the same time? 
There's no doubt that I haven't thought of something.

Steve
-- 

Steve Thompson E-mail:  smt AT vgersoft DOT com
Voyager Software LLC   Web: http://www DOT vgersoft DOT com
39 Smugglers Path  VSW Support: support AT vgersoft DOT com
Ithaca, NY 14850
   "186,282 miles per second: it's not just a good idea, it's the law"

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Large file system idea

2014-05-17 Thread SilverTip257
On Sat, May 17, 2014 at 10:30 AM, Steve Thompson  wrote:

> This idea is intriguing...
>
> Suppose one has a set of file servers called A, B, C, D, and so forth, all
> running CentOS 6.5 64-bit, all being interconnected with 10GbE. These file
> servers can be divided into identical pairs, so A is the same
> configuration (disks, processors, etc) as B, C the same as D, and so forth
> (because this is what I have; there are ten servers in all). Each file
> server has four Xeon 3GHz processors and 16GB memory. File server A acts
> as an iscsi target for logical volumes A1, A2,...An, and file server B
> acts as an iscsi target for logical volumes B1, B2,...Bn, where each LVM
> volume is 10 TB in size (a RAID-5 set of six 2TB NL-SAS disks). There are
> no file systems directly built on any of the LVM volumes. Each member of a
> server pair (A,B) are in different cabinets (albeit in the same machine
> room) and are on different power circuits, and have UPS protection.
>
> A server system called S (which has six processors and 48 GB memory, and
> is not one of the file servers), acts as iscsi initiator for all targets.
> On S, A1 and B1 are combined into the software RAID-1 volume /dev/md101.
>

Sounds like you might be reinventing the wheel.
DRBD [0] does what it sounds like you're trying to accomplish [1].

Especially since you have two nodes A+B or C+D that are RAIDed over iSCSI.

It's rather painless to set up two nodes with DRBD.
But once you want to sync three [2] or more nodes with each other, the
number of resources (DRBD block devices) becomes exponentially larger.
Linbit, the developers behind DRBD, call it resource stacking.

[0] http://www.drbd.org/
[1] http://www.drbd.org/users-guide-emb/ch-configure.html
[2] http://www.drbd.org/users-guide-emb/s-three-nodes.html
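For reference, a minimal two-node resource looks something like this (host
names, disks and addresses are made up; drbdadm syntax as in DRBD 8.4):

  # /etc/drbd.d/r0.res
  resource r0 {
      protocol C;
      on nodeA {
          device    /dev/drbd0;
          disk      /dev/vg0/data;
          address   10.0.0.1:7788;
          meta-disk internal;
      }
      on nodeB {
          device    /dev/drbd0;
          disk      /dev/vg0/data;
          address   10.0.0.2:7788;
          meta-disk internal;
      }
  }

  drbdadm create-md r0 && drbdadm up r0    # on both nodes
  drbdadm primary --force r0               # on the node that becomes primary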


> Similarly, A2 and B2 are combined into /dev/md102, and so forth for as
> many target pairs as one has. The initial sync of /dev/md101 takes about 6
> hours, with the sync speed being around 400 MB/sec for a 10TB volume. I
> realize that only half of the 10-gig bandwidth is available while writing,
> since the data is being written twice.
>
> All of the /dev/md10X volumes are LVM PV's and are members of the same
> volume group, and there is one logical volume that occupies the entire
> volume group. An XFS file system (-i size=512, inode64) is built on top of
> this logical volume, and S NFS-exports that to the world (an HPC cluster
> of about 200 systems). In my case, the size of the resulting file system
> will ultimately be around 80 TB. The I/O performance of the xfs file
> system is most excellent, and exceeds by a large amount the performance of
> the equivalent file systems built with such packages as MooseFS and
> GlusterFS: I get about 350 MB/sec write speed through the file system, and
> up to 800 MB/sec read.
>
> I have built something like this, and by performing tests such as sending
> a SIGKILL to one of the tgtd's, I have been unable to kill access to the
> file system. Obviously one has to manually intervene on the return of the
> tgtd in order to fail/hot-remove/hot-add the relevant target(s) to the md
> device. Presumably this will be made easier by using persistent device
> names for the targets on S.
>
> One could probably expand this to supplement the server S with a second
> server T to allow the possibility of failover of the service should S
> croak. I haven't tackled that part yet.
>
> So, what failure scenarios can take out the entire file system, assuming
> that both members of a pair (A,B) or (C,D) don't go down at the same time?
> There's no doubt that I haven't thought of something.
>
> Steve
> --
>
> 
> Steve Thompson E-mail:  smt AT vgersoft DOT com
> Voyager Software LLC   Web: http://www DOT vgersoft DOT
> com
> 39 Smugglers Path  VSW Support: support AT vgersoft DOT com
> Ithaca, NY 14850
>"186,282 miles per second: it's not just a good idea, it's the law"
>
> 
> ___
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos
>



-- 
---~~.~~---
Mike
//  SilverTip257  //
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Large file system idea

2014-05-17 Thread Steve Thompson
On Sat, 17 May 2014, SilverTip257 wrote:

> Sounds like you might be reinventing the wheel.

I think not; see below.

> DRBD [0] does what it sounds like you're trying to accomplish [1].
> Especially since you have two nodes A+B or C+D that are RAIDed over iSCSI.
> It's rather painless to set up two nodes with DRBD.

I am familiar with DRBD, having used it for a number of years. However, I 
don't think this does what I am describing. With a conventional two-node 
DRBD setup, the drbd block device appears on both storage nodes, one of 
which is primary. In this case, writes to the block device are done from 
the client to the primary, and the storage I/O is done locally on the 
primary and is forwarded across the network by the primary to the 
secondary.

What I am describing in my experiment is a setup in which the block device 
(/dev/mdXXX) appears on neither of the storage nodes, but on a third node. 
Writes to the block device are done from the client to the third node and 
are forwarded over the network to both storage servers. The whole setup 
can be done with only packages from the base repo.
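(In base-repo terms that is roughly:)

  yum install scsi-target-utils lvm2                               # on the targets A, B, ...
  yum install iscsi-initiator-utils mdadm lvm2 xfsprogs nfs-utils  # on S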

I don't see how this can be accomplished with DRBD, unless the DRBD 
two-node setup then iscsi-exports the block device to the third node. With 
provision for failover, this is surely a great deal more complex than the 
setup that I have described.

If DRBD had the ability for the drbd block device to appear on a third 
node (one that *does not have any storage*), then it would perhaps be 
different.

Steve
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Large file system idea

2014-05-17 Thread Eero Volotinen
How about glusterfs?
On 17.5.2014 20.01, "Steve Thompson" wrote:

> On Sat, 17 May 2014, SilverTip257 wrote:
>
> > Sounds like you might be reinventing the wheel.
>
> I think not; see below.
>
> > DRBD [0] does what it sounds like you're trying to accomplish [1].
> > Especially since you have two nodes A+B or C+D that are RAIDed over
> iSCSI.
> > It's rather painless to set up two nodes with DRBD.
>
> I am familiar with DRBD, having used it for a number of years. However, I
> don't think this does what I am describing. With a conventional two-node
> DRBD setup, the drbd block device appears on both storage nodes, one of
> which is primary. In this case, writes to the block device are done from
> the client to the primary, and the storage I/O is done locally on the
> primary and is forwarded across the network by the primary to the
> secondary.
>
> What I am describing in my experiment is a setup in which the block device
> (/dev/mdXXX) appears on neither of the storage nodes, but on a third node.
> Writes to the block device are done from the client to the third node and
> are forwarded over the network to both storage servers. The whole setup
> can be done with only packages from the base repo.
>
> I don't see how this can be accomplished with DRBD, unless the DRBD
> two-node setup then iscsi-exports the block device to the third node. With
> provision for failover, this is surely a great deal more complex than the
> setup that I have described.
>
> If DRBD had the ability for the drbd block device to appear on a third
> node (one that *does not have any storage*), then it would perhaps be
> different.
>
> Steve
> ___
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos
>
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Large file system idea

2014-05-17 Thread Steve Thompson
On Sat, 17 May 2014, Eero Volotinen wrote:

> How about glusterfs?

I have tried glusterfs; the large file performance is reasonable, but
the small file performance is too low to be usable.

Steve
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] firefox-29.0-5.1.el6

2014-05-17 Thread ngeorgop
New build of Firefox (v. 29.0), built with:
1) devtools-2 (http://people.centos.org/tru/devtools-2/readme) 
2) python27 (from SCL) 
(http://ftp.scientificlinux.org/linux/scientific/6.5/i386/external_products/softwarecollections/)
 
3) icu-last-50.1.2 from remi (http://rpms.famillecollet.com/SRPMS/) 

firefox-29.0-5.1.el6.i686.rpm 

https://drive.google.com/uc?id=0B9RlkKQB1POSOXk1OG5KODJNWFk&export=download 

libicu-last-50.1.2-10.el6.i686.rpm 

https://drive.google.com/file/d/0B9RlkKQB1POSNW9YaWxBVWR6UVk/edit 

Sources: 

firefox-29.0-5.1.el6.src.rpm 

https://drive.google.com/file/d/0B9RlkKQB1POSems0VXNIWXVuSjg/edit?usp=sharing 


icu-last-50.1.2-10.remi.src.rpm 

https://drive.google.com/file/d/0B9RlkKQB1POSY0RJQTFQWlk0N3M/edit?usp=sharing 

Waiting for comments 

ngeorgop 



--
View this message in context: 
http://centos.1050465.n5.nabble.com/CentOS-firefox-29-0-5-1-el6-tp5726640.html
Sent from the CentOS mailing list archive at Nabble.com.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Large file system idea

2014-05-17 Thread SilverTip257
On Sat, May 17, 2014 at 1:00 PM, Steve Thompson  wrote:

> On Sat, 17 May 2014, SilverTip257 wrote:
>
> > Sounds like you might be reinventing the wheel.
>
> I think not; see below.


> > DRBD [0] does what it sounds like you're trying to accomplish [1].
> > Especially since you have two nodes A+B or C+D that are RAIDed over
> iSCSI.
> > It's rather painless to set up two nodes with DRBD.
>
> I am familiar with DRBD, having used it for a number of years. However, I
> don't think this does what I am describing. With a conventional two-node
> DRBD setup, the drbd block device appears on both storage nodes, one of
> which is primary. In this case, writes to the block device are done from
> the client to the primary, and the storage I/O is done locally on the
> primary and is forwarded across the network by the primary to the
> secondary.


> What I am describing in my experiment is a setup in which the block device
> (/dev/mdXXX) appears on neither of the storage nodes, but on a third node.
> Writes to the block device are done from the client to the third node and
> are forwarded over the network to both storage servers. The whole setup
> can be done with only packages from the base repo.
>

Right, DRBD is no longer available from the CentOS Extras repo (like it was
in EL5).


>
> I don't see how this can be accomplished with DRBD, unless the DRBD
> two-node setup then iscsi-exports the block device to the third node. With
> provision for failover, this is surely a great deal more complex than the
> setup that I have described.
>
> If DRBD had the ability for the drbd block device to appear on a third
> node (one that *does not have any storage*), then it would perhaps be
> different.
>

Ah, good point.


-- 
---~~.~~---
Mike
//  SilverTip257  //
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Sorry

2014-05-17 Thread Always Learning

Top posting ALWAYS makes sense when the poster has included nearly 200
lines of redundant and time-wasting waffle from previous posters.

Scrolling down - all the way down - to read a few words is time wasting
and irritating.

Until posters ruthlessly exclude all redundant material, top posting
makes sense because it is the fastest and most efficient method of
conveying a response to others on the mail list.

There is an art to replying intelligently to a previous posting -
interspersing replies to the previous poster's comments BUT ALWAYS
EXCLUDING SURPLUS TEXT.

I blame M$ for introducing TOP POSTING.


-- 
Paul.
England,
EU.

   Our systems are exclusively Centos. No Micro$oft Windoze here.

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Sorry

2014-05-17 Thread Steve Clark
On 05/16/2014 06:40 PM, Original Woodchuck wrote:
> On Fri, May 16, 2014 at 03:27:23PM -0400, Steve Clark wrote:
>
>> Could someone explain again why we are not supposed to top post?
> It's polite and shows you are a gentleman.  It's in the same category of
> "consideration for others" as keeping to your locale's preferred side of
> roads, hallways and stairways, restricting flatus in elevators, dressing
> in clean clothes that cover your locale's taboo parts of the body, chewing
> with closed lips, cleaning teeth, ears, noses and butts in private,
> moderating the urge to scratch every single itch, not speaking in foul
> language in front of decent people, using correct spelling and grammar,
> not spitting, especially on carpets, and suchlike meaningless niceties.
>
> In other words, it's part of pretending that one is not a baboon.
>
> It is true we are apes.  We are the apes who pretend to be better
> than that.
>
>> Well I find people get very upset about it, and to me in the grand scheme
>> of things it seems pretty low on the totem pole.
> It's almost as annoying as using funny fonts and failing to use fmt(1)
> to wrap lines at 72 characters. (So called flowed text.)
>
> Even worse is failing to trim posts of extraneous verbiage.
>
>> Regards,
>>
>> -- 
>> Stephen Clark
>> *NetWolves Managed Services, LLC.*
>> Director of Technology
>> Phone: 813-579-3200
>> Fax: 813-882-0209
>> Email: steve.cl...@netwolves.com
>> http://www.netwolves.com
>> ___
>> CentOS mailing list
>> CentOS@centos.org
>> http://lists.centos.org/mailman/listinfo/centos
> And .sigs longer than the message.
>
> In the last few months, I've done some top posting in order to conform
> to the local norms of certain mailing lists (not this one), which I have
> noticed consist mostly of lamers.  Today, I take the "never again" oath.
>
> BTW, the "totem pole" figure of speech here is inappropriate. "Low
> on the totem pole" refers to low social status, not low priority or
> importance, unless your intention was to accuse people who format their
> email according to the received standards as being low-class individuals.
>
> I point out to you that in the area of manners, it matters not a whit
> that you consider some behavior inappropriate, vulgar or even vicious.
> It matters what the other person feels; that is why there are no rules
> of polite behavior for when you are alone.  Your goal (in the area of
> manners and etiquette) is to cater to what pleases others, not yourself.
>
> I'm not telling anyone what to do.  I'm saying what is expected of them;
> meeting the expectations of others is one's own choice.
>
> Dave
> ___
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos
>
All I can say to that rant is Wow!!!


-- 
Stephen Clark
*NetWolves Managed Services, LLC.*
Director of Technology
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Sorry

2014-05-17 Thread Alexander Dalloz
On 17.05.2014 23:22, Always Learning wrote:
>
> Top posting ALWAYS makes sense when the poster has included nearly 200
> lines of redundant and time-wasting waffle from previous posters.

False argument.

Top-posting is nearly always combined with fully quoting the previous 
mail. That is absolutely unnecessary on a mailing list and is even a waste 
of resources.

Strip off redundant content!

> Scrolling down - all the way down - to read a few words is time wasting
> and irritating.

Then why not just erase all the rubbish you don't care about?

> Until posters ruthlessly exclude all redundant material, top posting
> makes sense because it is the fastest and most efficient method of
> conveying a response to others on the mail list.

No, it just demonstrates that you, as the top-poster and full quoter, do 
not care about the previous communication and do not care enough to keep 
the thread sanely readable. If the top-poster only cares about his own 
quick and "easy" action, then why does he reply at all?

> There is an art to replying intelligently to a previous posting -
> interspersing replies to the previous poster's comments BUT ALWAYS
> EXCLUDING SURPLUS TEXT.

full ack!

> I blame M$ for introducing TOP POSTING.

It makes no sense to blame a company; it is the people who don't make 
enough effort to help everyone on a mailing list follow the discussions 
efficiently, by making the questions and answers quick to see.

Have you ever searched for something in a mailing list archive and then 
stumbled upon a thread where proper quoting and context-stripping are 
wildly mixed with top-posted and fully-quoted messages? It is a mess 
to find the helpful arguments and content.

Alexander



___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Sorry

2014-05-17 Thread Keith Keller
On 2014-05-17, Always Learning  wrote:
>
> Top posting ALWAYS makes sense when the poster has included nearly 200
> lines of redundant and time-wasting waffle from previous posters.

No, it doesn't.  Just trim the excess.

--keith

-- 
kkel...@wombat.san-francisco.ca.us


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Sorry

2014-05-17 Thread Russell Miller

On May 17, 2014, at 3:29 PM, Alexander Dalloz  wrote:

> On 17.05.2014 23:22, Always Learning wrote:
>> 
>> Top posting ALWAYS makes sense when the poster has included nearly 200
>> lines of redundant and time-wasting waffle from previous posters.
> 
> False argument.

In reading through this perennial and ultimately time-wasting argument, I
will simply say this.

One of the adages that drove the creation of the Internet is thus: "Be
conservative in what you send, and liberal in what you accept".

This could also be stated in terms of another great piece of literature:
"Take the beam out of your own eye before you worry about the mote in your
brother's".

Put another way, if people would just spend the time worrying about what
they do and stop worrying about the behavior of others, this would be a
much nicer world to live in. Even if it annoys you.

Now I think I'm just going to filter out this thread, because in arguing
back and forth about this, you're just wasting MY space and time. Have a
nice day.

I almost both top posted AND bottom posted on this thread just to be
annoying, but not worth it.

--Russell
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Sorry

2014-05-17 Thread Stephen Harris
On Sat, May 17, 2014 at 03:36:16PM -0700, Russell Miller wrote:
> One of the adages that drove the creation of the Internet is thus: "Be
> conservative in what you send, and liberal in what you accept".

... says the person sending 100 character width emails :-)

-- 

rgds
Stephen
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Large file system idea

2014-05-17 Thread Dennis Jacobfeuerborn
On 17.05.2014 19:00, Steve Thompson wrote:
> On Sat, 17 May 2014, SilverTip257 wrote:
> 
>> Sounds like you might be reinventing the wheel.
> 
> I think not; see below.
> 
>> DRBD [0] does what it sounds like you're trying to accomplish [1].
>> Especially since you have two nodes A+B or C+D that are RAIDed over iSCSI.
>> It's rather painless to set up two nodes with DRBD.
> 
> I am familiar with DRBD, having used it for a number of years. However, I 
> don't think this does what I am describing. With a conventional two-node 
> DRBD setup, the drbd block device appears on both storage nodes, one of 
> which is primary. In this case, writes to the block device are done from 
> the client to the primary, and the storage I/O is done locally on the 
> primary and is forwarded across the network by the primary to the 
> secondary.
> 
> What I am describing in my experiment is a setup in which the block device 
> (/dev/mdXXX) appears on neither of the storage nodes, but on a third node. 
> Writes to the block device are done from the client to the third node and 
> are forwarded over the network to both storage servers. The whole setup 
> can be done with only packages from the base repo.
> 
> I don't see how this can be accomplished with DRBD, unless the DRBD 
> two-node setup then iscsi-exports the block device to the third node. With 
> provision for failover, this is surely a great deal more complex than the 
> setup that I have described.
> 
> If DRBD had the ability for the drbd block device to appear on a third 
> node (one that *does not have any storage*), then it would perhaps be 
> different.

Why specifically do you care about that? Both with your solution and the
DRBD one, the clients only see an NFS endpoint, so what does it matter that
this endpoint is placed on one of the storage systems?
Also, while with your solution streaming performance may be OK, latency is
going to be fairly terrible due to the round-trips and synchronicity
required, so this may be a nice setup for e.g. a backup storage system
but not really suited as a more general-purpose solution.

Regards,
  Dennis

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Large file system idea

2014-05-17 Thread Steve Thompson
On Sun, 18 May 2014, Dennis Jacobfeuerborn wrote:

> Why specifically do you care about that? Both with your solution and the
> DRBD one the clients only see a NFS endpoint so what does it matter that
> this endpoint is placed on one of the storage systems?

The whole point of the exercise is to end up with multiple block devices 
on a single system so that I can combine them into one VG using LVM, and 
then build a single file system that covers the lot. On a budget, of 
course.
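(It also keeps growth simple; adding the next mirrored pair is roughly, with 
illustrative names:)

  # on S, after building the next mirror /dev/md103 from targets A3 and B3
  pvcreate /dev/md103
  vgextend vg_big /dev/md103
  lvextend -l +100%FREE /dev/vg_big/lv_big
  xfs_growfs /export/big        # XFS grows online while mounted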

> Also while with you solution streaming performance may be ok latency is
> going to be fairly terrible due to the round-trips and synchronicity
> required so this may be a nice setup for e.g. a backup storage system
> but not really suited as a more general purpose solution.

Yes, I hear what you are saying. However, I have investigated MooseFS and 
GlusterFS using the same resources, and my experimental iscsi-based setup 
gives a file system that is *much* faster than either in practical use, 
latency notwithstanding.

Steve
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Sorry

2014-05-17 Thread Always Learning

On Sun, 2014-05-18 at 00:29 +0200, Alexander Dalloz wrote:

> On 17.05.2014 23:22, Always Learning wrote:
> >
> > Top posting ALWAYS makes sense when the poster has included nearly 200
> > lines of redundant and time-wasting waffle from previous posters.
> 
> False argument.

I am against TOP POSTING. But I write truthfully that it does make sense
when the person posting incorporates 200 lines of redundant text in their
reply.

I am responding to reality. Unfortunately, "reality is not always perfect". 

> Top-posting is nearly always combined with fully quoting the previous 
> mailing. That is bsolutely unnecessary on a mailinglist and even a waste 
> of resources.
> 
> Strip off redundant content!

I wholly agree.

> > Scrolling down - all the way down - to read a few words is time wasting
> > and irritating.
> 
> Then why not just erasing all the rubbish you don't care about?

I do with my postings.

> > Until posters ruthlessly exclude all redundant material, top posting
> > makes sense because it is the fastest and most efficient method of
> > conveying a response to others on the mail list.
> 
> No, it just demonstrates that you as the top-poster and full quoter are 
> not caring for the previous communication and not caring enough for a 
> sane readable thread. If the top-poster just cares for his quick and 
> "easy" action, then why does he reply at all?

I am not a "top" poster. I am an "insert" poster.

> > There is an art to replying intelligently to a previous posting -
> > interspersing replies to the previous poster's comments BUT ALWAYS
> > EXCLUDING SURPLUS TEXT.
> 
> full ack!

Wunderbar :-)

> Have you ever searched for something in a mailing list archive and then 
> stumbled about a thread where proper quoting and stripping the context 
> is wildly mixed with top-poster and full-quoter messages? It is a mess 
> to find the helpful arguments and content.

I have experienced the same difficulties.

Regards,

Paul.

-- 
Paul.
England,
EU.

   Our systems are exclusively Centos. No Micro$oft Windoze here.

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Sorry

2014-05-17 Thread Always Learning

On Sat, 2014-05-17 at 15:33 -0700, Keith Keller wrote:

> On 2014-05-17, Always Learning  wrote:
> >
> > Top posting ALWAYS makes sense when the poster has included nearly 200
> > lines of redundant and time-wasting waffle from previous posters.
> 
> No, it doesn't.  Just trim the excess.

Please tell those who incorporate the junk. 

-- 
Paul.
England,
EU.

   Our systems are exclusively Centos. No Micro$oft Windoze here.

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Large file system idea

2014-05-17 Thread Andrew Holway
Have you looked at parallel filesystems such as Lustre and fhgfs?


On 18 May 2014 01:14, Steve Thompson  wrote:

> On Sun, 18 May 2014, Dennis Jacobfeuerborn wrote:
>
> > Why specifically do you care about that? Both with your solution and the
> > DRBD one the clients only see a NFS endpoint so what does it matter that
> > this endpoint is placed on one of the storage systems?
>
> The whole point of the exercise is to end up with multiple block devices
> on a single system so that I can combine them into one VG using LVM, and
> then build a single file system that covers the lot. On a budget, of
> course.
>
> > Also while with you solution streaming performance may be ok latency is
> > going to be fairly terrible due to the round-trips and synchronicity
> > required so this may be a nice setup for e.g. a backup storage system
> > but not really suited as a more general purpose solution.
>
> Yes, I hear what you are saying. However, I have investigated MooseFS and
> GlusterFS using the same resources, and my experimental iscsi-based setup
> gives a file system that is *much* faster than either in practical use,
> latency notwithstanding.
>
> Steve
> ___
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos
>
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Sorry

2014-05-17 Thread Dave Stevens
Quoting Alexander Dalloz :

> On 17.05.2014 23:22, Always Learning wrote:
>>
>> Top posting ALWAYS makes sense when the poster has included nearly 200
>> lines of redundant and time-wasting waffle from previous posters.
>
> False argument.
>

+1

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos