> I thought of that, and I will be needing something like this, since I have
> some services that need to be restarted in the event of them dying or being
> killed.
>
> But I'm not that comfortable scripting a modification of the inittab to
> activate / deactivate services on a server-by-server basis.
From: Nicolas Ross
> >> > while true; do
> >> >your stuff
> >> >sleep 60
> >> > done;
> >>
> >> Sure, but you also need to start the loop and make sure it doesn't die.
> > Put in /etc/inittab
> > ms:2345:respawn:/path/to/my/loop_script
> > (where "ms" is unique).
>> > while true; do
>> > your stuff
>> > sleep 60
>> > done;
>>
>> Sure, but you also need to start the loop and make sure it doesn't die.
>
> Put in /etc/inittab
> ms:2345:respawn:/path/to/my/loop_script
>
> (where "ms" is unique).
>
> If the loop dies then init will respawn it.
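For anyone reproducing this: a minimal sketch of the loop script that the inittab "respawn" line points at. The paths, the "ms" id, and the 60-second interval are placeholders, not taken from the thread.

```shell
# Sketch of the script referenced by the /etc/inittab line above.
# /path/to/your_job is a placeholder; if this script ever exits,
# init's "respawn" action starts it again.
cat > /tmp/loop_script <<'EOF'
#!/bin/sh
# Matching inittab line: ms:2345:respawn:/path/to/my/loop_script
while true; do
    /path/to/your_job
    sleep 60
done
EOF
chmod +x /tmp/loop_script
sh -n /tmp/loop_script && echo "loop_script syntax OK"
```

Written to /tmp only so the syntax check can run without touching /etc/inittab.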
On Thu, Nov 11, 2010 at 05:39:15PM -0800, Gordon Messmer wrote:
> On 11/11/2010 03:45 PM, John R Pierce wrote:
> > put the job in a loop like...
> >
> > while true; do
> > your stuff
> > sleep 60
> > done;
>
> Sure, but you also need to start the loop and make sure it doesn't die.
> Sure, but you also need to start the loop and make sure it doesn't die.
> You could use a script like this to repeat a script and then wait:
>
> ---
> #!/bin/sh
>
> delay="$1"
> shift
>
> "${@}"
>
> at now + "$delay" <<EOF
> "$0" "$delay" "${@}"
> EOF
> ---
>
> Run "repeat.sh 5m /path/to/wh
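A long-lived, sleep-based variant of the at-based script above, as a sketch; note that sleep takes seconds while at(1) takes spans like "5m", and the script name and job path are placeholders.

```shell
# Same idea as the at-based repeat.sh, but as one long-lived process
# instead of a script that resubmits itself through at(1).
cat > /tmp/repeat.sh <<'EOF'
#!/bin/sh
# Usage: repeat.sh <delay-seconds> <command> [args...]
# Runs the command, sleeps, and repeats forever.
delay="$1"
shift
while true; do
    "$@"
    sleep "$delay"
done
EOF
chmod +x /tmp/repeat.sh
sh -n /tmp/repeat.sh && echo "repeat.sh syntax OK"
```

The trade-off: this version needs something (inittab respawn, for instance) to keep the single process alive, whereas the at-based one survives because each run schedules the next.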
On 11/11/2010 03:45 PM, John R Pierce wrote:
> put the job in a loop like...
>
> while true; do
> your stuff
> sleep 60
> done;
Sure, but you also need to start the loop and make sure it doesn't die.
You could use a script like this to repeat a script and then wait
On 11/11/10 12:32 PM, Nicolas Ross wrote:
> We even have a job that is scheduled to run every 60 seconds, but can take 2
> hours to complete.
>
> Is there any scheduler under Linux that approaches this?
don't even really need a scheduler for that.
put the job in a loop like...
while true; do
On Thu, Nov 11, 2010, Nicolas Ross wrote:
>On another note, on the same subject (xServes being discontinued), one
>feature we use heavily on our os-x server is the ability to load / unload
>periodic jobs with launchd.
>
>With it we're able to schedule jobs let's say every 5 minutes, and so on.
On 11/11/2010 2:32 PM, Nicolas Ross wrote:
> On another note, on the same subject (xServes being discontinued), one
> feature we use heavily on our os-x server is the ability to load / unload
> periodic jobs with launchd.
>
> With it we're able to schedule jobs let's say every 5 minutes, and so on.
On another note, on the same subject (xServes being discontinued), one
feature we use heavily on our os-x server is the ability to load / unload
periodic jobs with launchd.
With it we're able to schedule jobs let's say every 5 minutes, and so on.
One could say I could do something like "*/5 * * * *"
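For the purely periodic part, a plain crontab line covers it. A sketch of the field layout; the job path is a placeholder:

```shell
# "Every 5 minutes" as a crontab entry, the rough cron equivalent of a
# launchd job with a StartInterval of 300 seconds:
cron_line='*/5 * * * * /path/to/job.sh'
echo "$cron_line"
# Fields: minute hour day-of-month month day-of-week command
set -f            # keep the * fields literal (no glob expansion)
set -- $cron_line
echo "minute field: $1"
```

What cron cannot easily match is launchd's load/unload of jobs at runtime and its restart-on-exit behavior, which is why the thread ends up at inittab respawn for that half.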
>> The linux-cluster mailing list is super friendly, has both developers
>> and consumers of the entire RHCS & associated packages - and CentOS
>> friendly :) I seriously recommend anyone looking to do any sort of work
>> with this toolchain should be on that list.
>
> Thanks, I'll surely make a visit
> The linux-cluster mailing list is super friendly, has both developers
> and consumers of the entire RHCS & associated packages - and CentOS
> friendly :) I seriously recommend anyone looking to do any sort of work
> with this toolchain should be on that list.
Thanks, I'll surely make a visit
Cost is per TB. Would kill me here when one user occupies 150TB just
themselves.
- Original Message -
| On 11/8/10 6:29 PM, James A. Peltier wrote:
| >
| > I have a solution that is currently centered around commodity
| > storage bricks (Dell R510), flash PCI-E controllers, 1 or 10GbE (on
| > separate Jumbo Frame Data Tier) and Solaris + ZFS.
On 11/9/2010 2:32 PM, Nicolas Ross wrote:
>> Have you looked at Red Hat's GFS? That seems to fit at least a portion of
>> your needs (I don't use it, so I don't know all that it does).
>
> I've spent the better part of the last day reading documentation on gfs2 on
> redhat's site.
>
> My god, that's pretty much what I'm looking for...
On Tue, 2010-11-09 at 15:32 -0500, Nicolas Ross wrote:
> > Have you looked at Red Hat's GFS? That seems to fit at least a portion of
> > your needs (I don't use it, so I don't know all that it does).
>
> I've spent the better part of the last day reading documentation on gfs2 on
> redhat's site.
>
On 11/09/2010 08:32 PM, Nicolas Ross wrote:
> The documentation is very technical, I'm ok with that, but it seems to miss
> some starting point. For instance, there's a part about the required number
> of journals to create and the size of those. But I cannot find a suggested
> size or any rule of thumb
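On the journal question, the usual guidance is one journal per node that will mount the filesystem, with the size per journal set via -J in megabytes (128 MB is the documented default). A sketch only; the cluster name, filesystem name, node count, and device below are all assumptions:

```shell
# GFS2 needs one journal per mounting node (-j); -J is MB per journal.
# Every name here is a placeholder for illustration.
nodes=4
journal_mb=128
echo "mkfs.gfs2 -p lock_dlm -t mycluster:gfs2data -j ${nodes} -J ${journal_mb} /dev/vg_san/lv_data"
```

Creating extra journals up front costs only disk space, and saves reformatting if a node is added later (journals can also be added afterwards with gfs2_jadd).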
> Have you looked at Red Hat's GFS? That seems to fit at least a portion of
> your needs (I don't use it, so I don't know all that it does).
I've spent the better part of the last day reading documentation on gfs2 on
redhat's site.
My god, that's pretty much what I'm looking for... To the point that
> KB, I think the OP is looking for a nice set of userland tools which
> was included in xServer
Pretty much.
Since we were about to purchase about 8 new xserves to build a new xSan on
top of an ActiveRAID 16 x 1 TB disk enclosure as our new production
environment, we are exploring other possibilities
> On 11/9/10 2:53 AM, rai...@ultra-secure.de wrote:
>>
>>> Did you look at Nexentastor for this? You might need the commercial
>>> version for
>>> a fail-over set but I think the basic version is free up to a fairly
>>> large
>>> size.
>>
>> 12T, IIRC.
>> That's not exactly great IMO.
>> You get t
On 11/9/10 2:53 AM, rai...@ultra-secure.de wrote:
>
>> Did you look at Nexentastor for this? You might need the commercial
>> version for
>> a fail-over set but I think the basic version is free up to a fairly large
>> size.
>
> 12T, IIRC.
> That's not exactly great IMO.
> You get that with a RAID
On 11/09/2010 12:40 PM, rai...@ultra-secure.de wrote:
>> I was reading this thread and wondering how come no one brought up the
>> fact that you can achieve the entire desired feature set just using the
>> components already included in CentOS-5.
> But there is no GFS for OSX, IIRC.
The last comme
On Mon, 8 Nov 2010 at 9:36pm, Nicolas Ross wrote
> Thanks for the suggestions (others also), but I don't believe it'll do. We
> need to be able to access the file system directly via FC so we can lock
> files across systems. Pretty much like xSan, but not on apple. xSan is
> really StorNext from Quantum.
> On 11/09/2010 12:13 PM, Joshua Baker-LePain wrote:
>> Have you looked at Red Hat's GFS? That seems to fit at least a portion
>> of
>> your needs (I don't use it, so I don't know all that it does).
>>
>
> Good point Joshua,
>
> I was reading this thread and wondering how come no one brought up the
On Tue, Nov 9, 2010 at 2:35 PM, Karanbir Singh wrote:
> On 11/09/2010 12:13 PM, Joshua Baker-LePain wrote:
>> Have you looked at Red Hat's GFS? That seems to fit at least a portion of
>> your needs (I don't use it, so I don't know all that it does).
>>
>
> Good point Joshua,
>
> I was reading this
On 11/09/2010 12:13 PM, Joshua Baker-LePain wrote:
> Have you looked at Red Hat's GFS? That seems to fit at least a portion of
> your needs (I don't use it, so I don't know all that it does).
>
Good point Joshua,
I was reading this thread and wondering how come no one brought up the
fact that you can achieve the entire desired feature set just using the
components already included in CentOS-5.
> On 11/8/10 6:29 PM, James A. Peltier wrote:
>>
> Did you look at Nexentastor for this? You might need the commercial
> version for
> a fail-over set but I think the basic version is free up to a fairly large
> size.
12T, IIRC.
That's not exactly great IMO.
You get that with a RAID10 over two p
On Tue, Nov 9, 2010 at 4:36 AM, Nicolas Ross wrote:
>> Perhaps FreeNAS would fit the bill?
>>
>> http://freenas.org/features
>>
>
> Thanks for the suggestions (others also), but I don't believe it'll do. We
> need to be able to access the file system directly via FC so we can lock
> files across systems.
> Perhaps FreeNAS would fit the bill?
>
> http://freenas.org/features
>
Thanks for the suggestions (others also), but I don't believe it'll do. We
need to be able to access the file system directly via FC so we can lock
files across systems. Pretty much like xSan, but not on apple. xSan is
really StorNext from Quantum.
On 11/8/10 6:29 PM, James A. Peltier wrote:
>
> I have a solution that is currently centered around commodity storage bricks
> (Dell R510), flash PCI-E controllers, 1 or 10GbE (on separate Jumbo Frame
> Data Tier) and Solaris + ZFS.
>
> So far it has worked out really well. Each R510 is a box wi
On 11/08/10 4:29 PM, James A. Peltier wrote:
> You then need a method for dealing with the high availability aspect. You
> need to be able to fence the storage while a fail-over is taking place. You
> need to (maybe) move MAC addresses and other storage IP bits. This is the
> hard part! Gett
- Original Message -
| On 11/09/2010 12:58 AM, Tim Dunphy wrote:
| > Perhaps FreeNAS would fit the bill?
| >
| > http://freenas.org/features
| >
| >
| > Sent from my iPhone
| >
| > On Nov 8, 2010, at 6:52 PM, Gordon Messmer wrote:
| >
| >> On 11/07/2010 03:33 AM, Nicolas Ross wrote:
| >>>
|
On 11/08/2010 04:06 PM, Patrick Lists wrote:
> On 11/09/2010 12:58 AM, Tim Dunphy wrote:
>> Perhaps FreeNAS would fit the bill?
>> http://freenas.org/features
>
> How about openfiler: http://www.openfiler.com/
I don't believe either of those supports exporting volumes over Fibre
Channel. You coul
On 11/09/2010 12:58 AM, Tim Dunphy wrote:
> Perhaps FreeNAS would fit the bill?
>
> http://freenas.org/features
>
>
> Sent from my iPhone
>
> On Nov 8, 2010, at 6:52 PM, Gordon Messmer wrote:
>
>> On 11/07/2010 03:33 AM, Nicolas Ross wrote:
>>>
>>> Is there any other solution for building a SAN under linux ?
Perhaps FreeNAS would fit the bill?
http://freenas.org/features
Sent from my iPhone
On Nov 8, 2010, at 6:52 PM, Gordon Messmer wrote:
> On 11/07/2010 03:33 AM, Nicolas Ross wrote:
>>
>> Is there any other solution for building a SAN under linux ?
>
> None of my customers use a SAN right now
On 11/07/2010 03:33 AM, Nicolas Ross wrote:
>
> Is there any other solution for building a SAN under linux ?
None of my customers use a SAN right now. I have some friends who speak
pretty highly of their Dell SAN gear (re-branded EMC CX300) with Qlogic
HBAs.
Nicolas Ross wrote:
Thanks,
On 11/05/2010 04:34 PM, Nicolas Ross wrote:
Now with this said, I am searching for documentation on operating a SAN
under linux. We are looking at Quantum StorNext FS2 product for the SAN
itself.
I'm not sure how much help you'll get from the community. StorNext is a proprietary product
On 11/07/10 3:33 AM, Nicolas Ross wrote:
> Thanks,
>
>> On 11/05/2010 04:34 PM, Nicolas Ross wrote:
>>> Now with this said, I am searching for documentation on operating a SAN
>>> under linux. We are looking at Quantum StorNext FS2 product for the SAN
>>> itself.
>> I'm not sure how much help you'll get from the community.
Thanks,
> On 11/05/2010 04:34 PM, Nicolas Ross wrote:
>> Now with this said, I am searching for documentation on operating a SAN
>> under linux. We are looking at Quantum StorNext FS2 product for the SAN
>> itself.
>
> I'm not sure how much help you'll get from the community. StorNext is a
> proprietary product
On 11/05/2010 04:34 PM, Nicolas Ross wrote:
> Now with this said, I am searching for documentation on operating a SAN
> under linux. We are looking at Quantum StorNext FS2 product for the SAN
> itself.
I'm not sure how much help you'll get from the community. StorNext is a
proprietary product t
On Nov 5, 2010, at 7:34 PM, "Nicolas Ross" wrote:
> Hi !
>
> As some of you might know, Apple has discontinued its xServe servers as of
> January 31st, 2011.
>
> We have a server rack with 12 xserves ranging from dual G5's to dual
> quad-core Xeon latest generation, 3 xserve-raid and one ac
Hi !
As some of you might know, Apple has discontinued its xServe servers as of
January 31st, 2011.
We have a server rack with 12 xserves ranging from dual G5's to dual
quad-core Xeon latest generation, 3 xserve-raids and one ActiveRAID 16 TB
disk enclosure. We also use xSan to access a share