Circling back, this should now be fixed.  Please let us know if you find
otherwise!

-=Bill


On Tue, Feb 18, 2014 at 10:26 AM, Bill Farner <wfar...@apache.org> wrote:

> Thanks for checking out Aurora, Sebastien!  Sorry you hit the issue; we're
> able to reproduce the same problem and are tracking it at
> AURORA-213 <https://issues.apache.org/jira/browse/AURORA-213>.
> It's a top priority at the moment, so hopefully we'll have it fixed up
> right quick!
>
> -=Bill
>
>
> On Mon, Feb 17, 2014 at 5:27 AM, sebgoa <run...@gmail.com> wrote:
>
>> Hi folks,
>>
>> First off, thanks for open-sourcing Aurora. I have a background in
>> high-performance computing, where we used schedulers like Moab and
>> Condor... it's nice to see things like Mesos/Aurora bring a new twist on
>> things.
>>
>> I followed http://tepid.org/tech/01-aurora-mesos-cluster/ and got the
>> devtools/zookeeper/master/slave and scheduler nodes up in VirtualBox.
>>
>> I then ran into a problem (fresh git clone) on the aurora-scheduler node:
>>
>> vagrant@precise64:~$ aurora
>> Traceback (most recent call last):
>>  File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
>>    "__main__", fname, loader, pkg_name)
>>  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
>>    exec code in run_globals
>>  File "/usr/local/bin/aurora/__main__.py", line 24, in <module>
>>  File
>> "/usr/local/bin/aurora/.bootstrap/_twitter_common_python/pex_bootstrapper.py",
>> line 54, in bootstrap_pex
>>  File "/usr/local/bin/aurora/.bootstrap/_twitter_common_python/pex.py",
>> line 145, in execute
>>  File "/usr/local/bin/aurora/.bootstrap/_twitter_common_python/pex.py",
>> line 403, in activate
>>  File "/usr/local/bin/aurora/.bootstrap/_twitter_common_python/util.py",
>> line 52, in maybe_locally_cache
>>  File "/usr/local/bin/aurora/.bootstrap/_twitter_common_python/util.py",
>> line 41, in walk
>>  File "/usr/local/bin/aurora/.bootstrap/_twitter_common_python/util.py",
>> line 25, in walk_metadata
>> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position
>> 9053: ordinal not in range(128)
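>>
>> If it helps, my guess at the root cause (just a sketch, not necessarily
>> what the pex bootstrapper actually does) is that the metadata being walked
>> contains UTF-8 bytes and Python 2 falls back to its default ASCII codec
>> when decoding them:
>>
>>     # Minimal Python 2 sketch of the failure above.
>>     data = 'caf\xc3\xa9'        # raw bytes: UTF-8 encoding of an accented word
>>     try:
>>         data.decode('ascii')    # Python 2's implicit default codec
>>     except UnicodeDecodeError as e:
>>         print e                 # 'ascii' codec can't decode byte 0xc3 ...
>>     assert data.decode('utf-8') == u'caf\xe9'   # decoding as UTF-8 works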
>>
>> Running the end-to-end test:
>> src/test/sh/org/apache/aurora/e2e/test_end_to_end.sh
>>
>> it got stuck at:
>>
>> INFO] Updating job: flask_example
>> INFO] Starting job update.
>> INFO] Examining instances: [0]
>> INFO] Killing instances: [0]
>> INFO] Instances killed
>> INFO] Adding instances: [0]
>> INFO] Instances added
>> INFO] Watching instances: [0]
>> INFO] Detected RUNNING instance 0
>> INFO] Instance 0 has been up and healthy for at least 30 seconds
>> INFO] Examining instances: [1]
>> INFO] Killing instances: [1]
>> INFO] Instances killed
>> INFO] Adding instances: [1]
>> INFO] Instances added
>> INFO] Watching instances: [1]
>> INFO] Detected RUNNING instance 1
>> INFO] Instance 1 has been up and healthy for at least 30 seconds
>> INFO] Update successful
>> INFO] Response from scheduler: OK (message: Lock has been released.)
>> Connection to 127.0.0.1 closed.
>> ++ run_sched '/vagrant/deploy_test/aurora_client.pex run
>> example/vagrant/test/flask_example '\''pwd'\'''
>> ++ wc -l
>> ++ vagrant ssh aurora-scheduler -c
>> '/vagrant/deploy_test/aurora_client.pex run
>> example/vagrant/test/flask_example '\''pwd'\'''
>>
>> It never gave me a job URL back like the one shown in the review:
>> https://reviews.apache.org/r/17457/diff/#index_header
>>
>>
>> Thoughts?
>>
>> -Sebastien
>
>
>
