We still have some trusty servers out there, so I’d say no :)
> On Apr 10, 2018, at 3:40 PM, Renan DelValle wrote:
>
> Hi all,
>
> We're almost exactly one year away from the end of life of Ubuntu 14.04.[1]
> Taking this into consideration, I wanted to see how the community felt about
> 0.21
As strong users of Aurora but weak contributors, we at Chartbeat apologize for
our lack of participation. We’re several versions behind on Mesos/Aurora
upgrades, and that’s honestly because it works for us :)
Going forward we’re hoping to be able to participate more, at least with
testing new releases.
t; in me and I hope folks will join me in continuing to keep this awesome
> project alive.
> >>
> >> Looking forward to our release of 0.22.0.
> >>
> >> -Renan
> >
>
--
Rick Mangi
Senior Director of Data Engineering
Chartbeat
917.848.3619 | @rmangi | r...@chartbeat.com
826 Broadway, 6th Fl., New York, NY 10003
+1
We love Aurora :-)
On Mon, Feb 3, 2020 at 3:54 PM Bill Farner wrote:
> +1
>
> Aurora has always been about pragmatism, and right now, this is the best
> route for new and existing users.
>
> On Fri, Jan 31, 2020 at 5:13 PM Renan DelValle wrote:
>
> > +1 (with a fair bit of sadness but hope
…the vote to move the project into the Attic has passed.
>
> +1 (Binding)
> --
> Renan DelValle
> Stephan Erb
> Bill Farner
> Mauricio Garavaglia
> Dave Lester
> John Sirois
>
> +1 (Non-Binding)
> --
> Se Choi
> Rick Mangi
>
Hey all,
Is there a link to a list of JIRA tickets to be included? I couldn’t find it
browsing around. Sorry if this is a silly question.
Best,
Rick
> On Sep 6, 2016, at 2:52 PM, Jake Farrell wrote:
>
> Please take a second to review the draft board report below and let me
> know if there a
Sorry for the late reply, but I wanted to chime in as someone who would like to see this
feature. We run a medium-sized cluster (around 1000 cores) in EC2, and I think we
could get better use of the cluster with more control over the distribution
of job instances. For example, it would be nice to limit the number of instances of a
job that land on any one host.
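For reference, this kind of per-host cap can already be sketched with Aurora's limit constraints in the job's .aurora config. The snippet below is a minimal, hypothetical example (the cluster, role, resource numbers, and jar name are placeholders, not taken from this thread); Process, Task, Resources, Service, and GB are provided by the Aurora client DSL, so no imports are needed:

    # Hypothetical kafka_consumer.aurora sketch (illustrative values only)
    consumer_task = Task(
      name = 'kafka_consumer',
      processes = [Process(name = 'consumer', cmdline = 'java -jar consumer.jar')],
      resources = Resources(cpu = 4.0, ram = 4 * GB, disk = 8 * GB)
    )

    jobs = [
      Service(
        cluster = 'example',        # placeholder cluster/role/environment
        role = 'www-data',
        environment = 'prod',
        name = 'kafka_consumer',
        instances = 20,
        task = consumer_task,
        # Limit constraint: schedule at most 2 instances of this job per host.
        constraints = {'host': 'limit:2'}
      )
    ]

Note that a limit constraint is scoped to a single job, which is why the question below about consumers spanning multiple jobs matters.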
…the decision will
> basically be permanent until the host you're on goes down. At least with
> how things work now, with each scheduling attempt the job has a fresh
> chance of being put in an ideal slot.
>
> On Thu, Mar 30, 2017 at 8:12 AM, Rick Mangi wrote:
>
>> Sorry for the late reply…
…do the Kafka consumers span multiple jobs? Otherwise host
> constraints solve that problem, right?
>
>> On Mar 30, 2017, at 10:34 AM, Rick Mangi wrote:
>>
>> I think the complexity is a great rationale for having a pluggable
>> scheduling layer. Aurora is very flexible a
pretty strong.
> On Mar 30, 2017, at 2:35 PM, Zameer Manji wrote:
>
> Rick,
>
> Can you share why it would be nice to spread out these different jobs on
> different hosts? Is it for reliability, performance, utilization, etc?
>
> On Thu, Mar 30, 2017 at 11:31 AM, Rick Mangi wrote:
performance reasons.
>
> On Thu, Mar 30, 2017 at 12:16 PM, Rick Mangi wrote:
>
>> Performance and utilization mostly. The Kafka consumers are CPU-bound (and
>> sometimes network-bound) and the rest of our jobs are mostly memory-bound. We’ve
>> found that if too many consumers…
> I understand the push for job anti-affinity (i.e., don't put too many Kafka
> workers in general on one host), but I would imagine it would be for
> reliability reasons, not for performance reasons.
>
> On Thu, Mar 30, 2017 at 12:16 PM, Rick Mangi wrote:
>
>> Performance and utilization mostly…
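If the goal is to keep the CPU-bound consumers away from the memory-bound jobs entirely, one option Aurora already offers is a dedicated constraint, which ties a job to agents tagged with a matching dedicated attribute. A minimal sketch, assuming the agents were started with a matching --attributes=dedicated:... value (the role/name below, and the reuse of consumer_task from the earlier sketch, are illustrative):

    # Hypothetical sketch: pin the CPU-bound consumers to a dedicated pool of agents.
    # Assumes those agents carry the attribute dedicated:www-data/kafka_consumer
    # (set via the Mesos agent's --attributes flag); the value here is illustrative.
    jobs = [
      Service(
        cluster = 'example',
        role = 'www-data',
        environment = 'prod',
        name = 'kafka_consumer',
        instances = 20,
        task = consumer_task,
        constraints = {'dedicated': 'www-data/kafka_consumer'}
      )
    ]

The trade-off is static partitioning: memory-bound jobs stay off those hosts, but the dedicated machines can sit idle, which is exactly the utilization question being weighed in this thread.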