> I was using the wrong subnet mask. All working now.
>
> Thanks,
> -Chris
>
> On Mon, Jul 2, 2018 at 12:09 AM, John Meinel
> wrote:
>
>> It should still be available for AWS. I think I saw a case where you
>> might get that error just when it can't find
It should still be available for AWS. I think I saw a case where you might
get that error just when it can't find the subnet you asked for (so the
'subnet=' directive is recognized, but the 172.32.* subnet couldn't be found).
I'm not positive about it, but I do still see the subnet matching code in
place.
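One quick check (a sketch; run it against the model in question):
juju subnets
which lists the subnets the model knows about, so you can confirm the CIDR
you're passing actually appears there.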
John
=:->
On Mo
IIRC older Juju used the default EC2 settings, which gave 8GB hard drives,
but newer should default to 32GB disks. I'm not sure how that varies across
all providers, though.
Note that you should always be able to bootstrap with a custom root-disk
constraint. eg "juju bootstrap --bootstrap-constraints root-disk=..."
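For instance (illustrative controller name and size, not a recommendation):
juju bootstrap aws mycontroller --bootstrap-constraints root-disk=64G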
btw, including a copy of your /etc/netplan/*.yaml files once you've
bootstrapped would probably be useful to make sure we're addressing the
issues that you're seeing.
John
=:->
On Tue, Apr 17, 2018 at 4:18 PM, John Meinel wrote:
> We should get a bug following this. B
We should get a bug following this. But essentially Bionic changed how
networking configuration is being done, from /etc/network/interfaces to
/etc/netplan/*.yaml files.
When we started implementing support for netplan, they hadn't finished
support for bonds, so we didn't implement support for rea
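For reference, a minimal netplan bond sketch (untested; the NIC names
eno1/eno2 are placeholders):
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: active-backup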
I think the actual question is how to create a service that responds to
metadata requests. In that case, it is a bit hard to say from the outside,
because it is very dependent on what software-defined-networking you're
using.
169.254.169.254 is a link-local address, which means it can't be routed
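As a concrete illustration of what such a service answers (EC2-style path;
other SDN/metadata layouts differ):
curl http://169.254.169.254/latest/meta-data/instance-id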
I agree with breaking up utils. I've certainly read similar comments
elsewhere that calling something 'utils' inherently causes issues like
this, and it seems to fit with our experience.
I also agree that X stable depending on Y unstable doesn't seem right.
Though we're also very bad at ever makin
We've done some digging on this. It turns out that we can fairly easily
request VSphere to deploy the best version it knows about (ESXi 5.5
supports 10, 6.0 supports 11, 6.5 supports 13). It is fairly easy to change
our descriptor to request any of them, and we saw that 6.0 picked 11 and
ignored 13
That sounds more like the 'ubuntu' user inside the container is having
trouble? Juju connects to initially launched machines as the ubuntu user,
not as your username.
John
=:->
On Thu, Feb 22, 2018 at 5:32 AM, fengxia wrote:
> Verified by login to the container. I could `apt update`, `apt inst
it is finished
> with it.
>
> I suspect that my only option is to build a new container/controller
> and redeploy my machines on it before removing the old
> container/controller.
>
> On Tue, 2018-01-30 at 14:40 +0400, John Meinel wrote:
> > I'm a bit curious what
I'm a bit curious what you mean by "manually provisioned machine". Are you
saying that you used the VMWare APIs directly to launch an instance, and
then "juju bootstrap IP" to start using that machine, and then "juju
add-machine ssh:IP" to add the second machine?
You could try doing "juju upgrade-
so with juju-run you could run 'opened-ports' in the hook context of
> each unit on that machine, and thus get all the opened ports on the machine.
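Concretely (the unit name is a placeholder):
juju run --unit myapp/0 'opened-ports'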
>
> On 2 Dec 2017 04:26, "John Meinel" wrote:
>
>> I'm pretty sure that opened-ports only reports th
I'm pretty sure that opened-ports only reports the ports that Juju had
opened for the charm that is making the request. I don't think we list all
ports opened on the machine for all other applications.
So you might need to have a relation that can report its opened ports to
the subordinate
John
Do you know what terms you need to accept? You should be able to 'juju
accept termname'.
The other thing to check is if your local clock time is fairly accurate.
I believe we filed a bug recently that the term server gives out temporary
auth tokens that are only valid for 1 minute. (and we end up e
So models shared with you are currently possible, but I believe everything
is still owned as an individual user, rather than at a team scope. You
should be able to add other users as 'admin' on an existing model, but
we're still working through a backlog of places that treat "owner"
differently tha
It does seem like Juju operating the LXD provider, but spanning multiple
machines would be an answer. I do believe that LXD itself is introducing
clustering into their API in the 18.04 cycle. Which would probably need
some updates on the Juju side to handle their targeted provisioning (create
a con
> Cheers
> Tilman
>
> On 23.11.2017 11:39, Tilman Baumann wrote:
> > Cool. Thanks
> >
> > The two fields I was interested in, username and password, are missing
> > though. :D
> > But I'm thinking right now if I even want to go that route...
> >
> >
I had thought of that, but it only works if you then 'juju expose
postgresql' which would also expose the other ports of postgres
John
=:->
On Nov 23, 2017 23:03, "Tim Penhey" wrote:
> I think you might be able to use:
>
> juju run postgresql/0 'open-port 80'
>
> Tim
>
> On 24/11/17 06:54, Ak
I believe there is an nginx charm, which you could have installed with
"juju deploy nginx --to X" (where X is the machine id of the postgres
charm), and then used "juju expose nginx".
Is there a reason you prefer to install it manually?
One other option would be to co-locate the "ubuntu" charm and
I did start working on a Cassandra interface for something I was working
on. I don't know that it is complete but
https://github.com/jameinel/interface-cassandra
Was my attempt at it.
John
=:->
On Nov 23, 2017 02:26, "Haw Loeung" wrote:
> Hi Tilman,
>
> On Wed, Nov 22, 2017 at 04:02:08PM +0100
It seems like you would tend to use:
import os
os.makedirs(os.path.dirname(config_file), exist_ok=True)
Which, with exist_ok=True, doesn't treat a directory that already exists as a
failure.
However, I'm guessing that there is some other expectation that the
directory should be created at some other time.
John
=:->
On Tue, Nov 7
...
> > Perhaps just:
> >
> > juju deploy --map-machines A=B,C=D
> >
> > ... or some variant of that?
> >
> > Let's use the betas to refine and condense and clarify.
>
> +1 to that. I'm wondering if use-existing-machines is ever appropriate
> on its own, as the machine numbers in a model are ep
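For the record, the syntax that eventually landed looks roughly like this
(from memory; check 'juju help deploy'):
juju deploy ./bundle.yaml --map-machines=existing,0=4,1=5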
(sent too soon)
Summary:
before:
  1043  local.oplog.rs
   868  juju.txns
   470  juju.leases
after:
   802  local.oplog.rs
   625  juju.txns
   267  juju.leases
So there seems to be a fairly noticeable decrease in load on the system
around leases (~70%). Again, not super scientific because I didn't measure
over
So I think it's fine for giving feedback from a client against a controller
(new client, old controller). Though there are open questions: how often do
we want to warn, and should there be a way to disable the warning (and for
how long)?
The other side seems more difficult, as far as a 2.0 client talking to a 2.4
controller. We could start assu
So at the moment, I don't think Juju supports what you're looking for,
which is cross model relations without public addresses. We've certainly
discussed supporting all private for cross model. The main issue is that we
often drive parts of the firewalls (security groups) but without
understanding
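For context, the cross model workflow today looks roughly like (names are
placeholders):
juju offer mysql:db                 # in the model hosting mysql
juju consume admin/db-model.mysql   # in the consuming model
juju add-relation wordpress mysql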
…use the existing "store my data in
postgres, 'pgsql' relation"?
John
=:->
On Tue, Oct 17, 2017 at 11:16 PM, John Meinel
wrote:
> Why is the subordinate container scoped? The specific scope of container
> is that you only see the single instance that you share a mac
Why is the subordinate container scoped? The specific scope of container is
that you only see the single instance that you share a machine with.
Typically subordinates will use something like the juju-info relation
because all they really care about is to be colocated on the same machine.
I can't
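For reference, a container-scoped juju-info requirer in metadata.yaml looks
roughly like this ('host' is just a conventional relation name):
requires:
  host:
    interface: juju-info
    scope: container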
>
> ERROR cannot destroy model: context deadline exceeded
>
> Is there an open bug that I can paste error messages and logs to?
>
> ~ PeteVG
>
>
>
> On Mon, Oct 9, 2017 at 4:18 PM John Meinel wrote:
>
>> The 6 accessible models is gone in 2.3 (IIRC), because i
The "6 accessible models" listing is gone in 2.3 (IIRC), because it was actually just
reflecting some locally cached information about model numbers, but wasn't
actually being kept up to date properly.
I think the inability to remove a model that is half-dead might be fixed
already in 2.3 but has to do with
We could use better progress information but once connected we also update
the packages installed on the machine and download and install a few
packages.
Having the latest Ubuntu image often means we have fewer packages to
install.
John
=:->
On Oct 6, 2017 7:00 AM, "fengxia" wrote:
> Hi Juju,
>
It appears to be a more general Canonical outage, as irc.canonical.com is
also affected.
John
=:->
On Sat, Sep 9, 2017 at 6:03 PM, Tom Barber wrote:
> Okay, own up, who killed Jujucharms.com whilst i'm trying to catch up on
> work? :)
>
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modi
If you're seeing "ubuntu-trusty" then you're using a version of Juju that
predates this. I believe reusing the cached image first was added in 2.1,
when we switched the naming scheme to be more specific.
John
=:->
On Sep 1, 2017 18:10, "Alex Kavanagh" wrote:
>
>
> On Fri, Sep 1, 2
The main reasons we don't directly offer cross cloud support is mostly
about user experience. There are potentially tricky things around getting
routing to work correctly between machines. Also there are several things
that are cached on controllers, which only helps if the controller is
"local" to
Supporting multiple 'projects' in Openstack should already be supported by
passing different credentials during "juju add-model".
John
=:->
On Thu, Aug 31, 2017 at 1:55 AM, Giuseppe Attardi
wrote:
> I have a slightly different requirement.
> Currently in our OpenStack cloud, if a user wants to
I don't think you intended to send the email as PGP encrypted.
John
=:->
2017-08-28 12:06 GMT+04:00 Tim Penhey :
> -BEGIN PGP MESSAGE-
> Version: GnuPG v2
>
> hQIMA233D38ktbXXAQ/+P+Wl6YvVE3PVo1tsN/ynVPSeR5Xu3SfKLoRmXvxM0om/
> /LXYkelYeD5xuJCuLP87gNqiDqJnDZi4I0kOWkLjeG9xvAAJY/57tZLbv29qu
If you set up a space in MAAS and you declare that the VM is in that space,
then when you deploy something into a container, MAAS and Juju will
coordinate to get a Container set up onto a bridge connected to the
interface in the VM that is connected to the same space, with an IP address
from the ra
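A sketch of driving that from the Juju side (the space name is a
placeholder):
juju deploy mysql --to lxd:0 --constraints spaces=my-space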
I just wanted to note that some of the reason for 128GB was because 2.0 and
2.1 did leak memory over time. And if you have a leak you will always
eventually run out. In 2.2 we've fixed all the ones we've found so far and
we're actively doing some performance measuring to give better guidelines.
(If
I'm pretty sure hooks execute in the charm's root directory (the parent of
hooks/). So if you have:
  charm/
    hooks/
      db-relation-changed
      myfiles/
        test.yaml
you need to:
  open("hooks/myfiles/test.yaml")
John
=:->
On Jul 20, 2017 04:58, "fengxia" wrote:
> Hi Juju,
>
> I think I read at some point in the official doc
>
> ...
>
> Current known limitations:
> Only works in the same model
> You need to bootstrap with the feature flag to test it out
> Does not currently work with relations to subordinates. Work is in progress
>
I'm guessing you mean "only works in the same controller". If cross model
relations w
I'd really like to see us split apart the facades-by-purpose. So we'd
collect the facades for Agents separately from facades for Users (and
possibly also facades for Controller).
I'm not sure if moving things into 'facades' just moves the problem around
and leaves us with a *different* di
For what it's worth, I use "apt-cacher-ng" to do this very thing and have
~/.local/share/juju/clouds.yaml:
clouds:
  lxd:
    type: lxd
    config:
      apt-http-proxy: http://192.168.0.106:3142
      apt-https-proxy: http://192.168.0.106:3142
      enable-os-upgrade: false
As long as you're keepi
Generally you have to configure the profile of the containers (either by
configuring the default profiles or by altering the configuration of a
single container and then restarting that container).
If there are particular modules that you know you will need then you can
use "linux.kernel_modules" t
Are these 3 LXD containers on one machine, or 3 different host machines
where you want to run LXD containers?
At present we don't support scheduling across LXD hosts, so the easier way
would probably be to treat 3 host machines as separate 'manually
provisioned' machines, and then deploy to contai
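Roughly (host addresses are placeholders):
juju add-machine ssh:ubuntu@host1    # repeat for each host
juju deploy myapp --to lxd:0         # container on machine 0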
Gotcha. Though I will say that looks more like its a bogus comment about a
file that this isn't. But I see what you're doing.
John
=:->
On Wed, Jun 21, 2017 at 5:29 PM, James Beedy wrote:
> It's in the comments at the top of the gist.
>
> On Jun 20, 2017, at 1
This definitely sounds interesting.
You included the layer python code, but not the "daemon.json.j2" file.
Isn't that part of getting the networking config in place?
John
=:->
On Wed, Jun 21, 2017 at 6:16 AM, James Beedy wrote:
> On integrating docker and lxd deployed apps ...
>
> My charm in
If you have started the machine yourself, you should be able to "juju
add-machine ssh:IP_ADDRESS" and then use that as a "juju deploy --to X"
target.
However, you will still need to tear down the machine when you're done. We
don't yet support multi-provider models. Likely we won't, but we will
sup
"juju show-machine 10" is likely to tell you why we are failing to
provision the machine.
My guess is that we actually need the alias to be "juju/centos7/amd64" for
Juju to recognize that it is the container image we want to be starting.
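If that's the problem, something like this should line things up (the
fingerprint is a placeholder):
lxc image alias create juju/centos7/amd64 <fingerprint>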
John
=:->
On Thu, Jun 15, 2017 at 8:37 PM, Daniel Bidwel
In 2.1 we did not chunk; in 2.2 we switched to gorilla/websocket, which does
support chunking into frames. I don't think we do any internal "well that
would be too much information so we won't send it all".
John
=:->
On Wed, Jun 14, 2017 at 7:11 PM, Cory Johns
wrote:
> https://github.com/juju/pyth
If the machines are just gone (you manually destroyed them via 'lxc
stop/delete'), you can just do:
juju unregister lxd-test
It will remove it from the local registry without trying to tear anything
down.
John
=:->
On Tue, Jun 13, 2017 at 8:17 AM, Daniel Bidwell wrote:
> I have a machine wit
'ssh ubuntu@IPADDRESS' generally works, as the 'ubuntu' user is the one
that is configured with .ssh/authorized_keys that match the keys described
by the model. I don't know what ssh keys you added to the model/what it was
bootstrapped with, but as long as you still have access to one of those ssh
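Worth noting: you can always add another key to the model (a sketch):
juju add-ssh-key "$(cat ~/.ssh/id_rsa.pub)"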
...
>
>> Which is why using something like the "lxd provider" would be a more
>> natural use case, but according to James the sticking point is having to
>> set up a controller in the first place. So "--to lxd:0" is easier for them
>> to think about than setting up a provider and letting it decid
Doesn't the equivalent of ^A ^D (from screen) also work to just disconnect
all sessions? (http://www.dayid.org/comp/tm.html says it would be ^B d). Or
switching to session 0 and exiting that one first?
I thought we had a quick way to disconnect, but it's possible you have to
exit 2x and that fast f
…control things from
>> inside the vm vs from their host. For some reason this small change made
>> some of my users who were previously not really catching on, far more apt
>> to jump in. I personally like it because these little vms go further when
>> they don't have the c
I'll note that if you're generating a password, there really isn't a reason
to then pbkdf2 it, is there? I thought the reason to use pbkdf2 was because
it is too easy to generate SHA hashes for common *human* passwords. But as
the brute-force search space increases exponentially with more bits, ju
Interesting. I wouldn't have thought to use a manually added machine to use
JAAS to deploy applications to your local virtualbox. Is there a reason
this is easier than just "juju bootstrap lxd" from inside the VM?
I suppose our default lxd provider puts the new containers on a NAT bridge,
though y
I'm pretty sure 'charm' tools have moved over to using 'snap install charm'
as the preferred method for getting the charm tools. I'm not sure that
there is a way to deprecate/remove the versions that are in the archives.
For something like Zesty, it probably would have been possible to just
remove
It depends if your hook goes into 'error' state or 'blocked'.
Error should generally be avoided because it is a signal to Juju that you
can no longer make forward progress (generally meant to mean there is a
logic bug/typo/etc in your charm). With Error Juju may retry the hook that
failed but it wi
so, my immediate thought is that charm build does nothing
> special for series then, since I'm coding the "if series=='ubuntu'...
> else:", the build script cannot touch my code, right?
>
> On 05/20/2017 03:17 AM, John Meinel wrote:
>
> I would guess the ch
juju run --application runs as every unit of the application, so we have
individual units (if you have 2 units of an application on a machine it
will run twice on that machine). 'juju run --unit' obviously runs as a
unit. 'juju run --machine' can't run in a unit context, because we don't
have any unit associated with it.
John
So you're running the 'install' hook directly: are you currently in a 'juju
debug-hooks' session, or are you just changing into that directory?
Juju sets it during the run of a charm hook, but it is not set globally on
the machine (we can't set UNIT globally anyway, because you can colocate
many units on one machine).
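Inside a debug-hooks session the variables are all set, e.g. (unit and hook
names are placeholders):
juju debug-hooks mysql/0 install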
>
> ...
>
> We have most of the responsive nature of Juju driven off the watchers.
> These watchers watch the mongo oplog for document changes. What happened
> was that there were so many mongo operations, the capped collection of the
> oplog was completely replaced between our polled watcher
I believe Tim Penhey makes active use of the python-django charm, but it's
possible he uses it in a different fashion.
John
=:->
On May 21, 2017 14:25, "Erik Lönroth" wrote:
> Hello!
>
> I'm trying out the django charm to deploy a django website I was going to
> try to create with juju.
>
> * I
I would guess the charm can define series specific build instructions, but
that layer-basic doesn't, itself, have different instructions for centos7.
John
=:->
On Sat, May 20, 2017 at 10:07 AM, fengxia wrote:
> Hi Juju,
>
> I made a hello world charm based on charm tutorial, which includes
> l
a user's
> least surprise in not-great situations where the deployment is wedged.
>
> We would not know that from the status right? Only from the debug-log.
>
> On May 19, 2017 3:46 AM, "John Meinel" wrote:
>
>> All agents start up in DEBUG until they can t
All agents start up in DEBUG until they can talk to the controller and read
what the current logging config is set to. Otherwise you wouldn't be able
to debug startup issues.
That said, I think there was a request to cache the last-known value in
agent.conf which would let restarts be less noisy.
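In the meantime you can set the level explicitly once the model is up (a
sketch):
juju model-config logging-config="<root>=WARNING;unit=INFO"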
Also, while agents can be built for CentOS we don't support Controllers on
CentOS at this point. So bootstrap I believe only supports Ubuntu.
John
=:->
On May 10, 2017 11:44, "Andrew Wilkins"
wrote:
> On Wed, May 10, 2017 at 3:08 PM fengxia wrote:
>
>> I have followed dev instruction and can b
You can already do that without the extra flag:
juju deploy <charm> [app-name]
You can give any app any name; we just default to the charm name if one is
not supplied.
John
=:->
On Apr 21, 2017 18:12, "Patrizio Bassi" wrote:
> Dear All
>
> this conversation led me to an interesting (for me at least!) sp
=:->
On Wed, Apr 12, 2017 at 9:56 AM, John Meinel wrote:
> It sounds like the docs are out of date. The key should be
> "agent-metadata-url" 'tools' was a much older name.
>
> John
> =:->
>
>
> On Wed, Apr 12, 2017 at 7:32 AM, Daniel Bidwell
&g
It sounds like the docs are out of date. The key should be
"agent-metadata-url"; 'tools' was a much older name.
John
=:->
On Wed, Apr 12, 2017 at 7:32 AM, Daniel Bidwell wrote:
> This gets me down to:
>
> juju bootstrap acauits acauits-controller --config
> tools-metadata-url=http://10.20.9.13
We're trying to look at reasons why we end up with some stale transaction
ids in the txn-queue. I did find a machine with 18 entries in its txn
queue, all of which are '6: applied'.
I ended up discovering that SetLinkLayerDevices and SetDeviceAddresses are
both causes of a fair number of assert-on
On Mon, Mar 27, 2017 at 3:31 PM, Dmitrii Shcherbakov <
dmitrii.shcherba...@canonical.com> wrote:
> Hi everybody,
>
> As far as I can see, there is no way to change a network space binding or
> add a new one after a charm has been deployed.
>
The one caveat here is that in 2.1.2+ if you used a 'de
Did you "juju expose mediawiki" ?
That should have Juju update the security group on the mediawiki machines
to expose port 80.
John
=:->
On Sun, Mar 26, 2017 at 10:58 PM, Giuseppe Attardi wrote:
> I solved the problem by using Chrome instead of Safari.
>
> I was able then to deploy a mediawiki
Preferring a VPC if there is a single one that exists, even if it isn't
flagged as "default" for a region, would probably be reasonable, and
probably not a lot of effort. If there are multiple, I wonder if we could
refuse to bootstrap/add-model unless one is specified.
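In the meantime you can point Juju at a specific VPC yourself (the vpc id is
a placeholder):
juju bootstrap aws --config vpc-id=vpc-0123456789abcdef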
John
=:->
On Tue, Mar 21, 2
Correct. Putting the constraints on the machines is equivalent to doing:
juju add-machine --constraints mem=4gb
juju deploy mysql --to $MACHINE
Rather than doing:
juju deploy mysql --constraints mem=4gb
Constraints on an Application will affect all new units of an application,
but constraint
Zones are generally meant to be used to provide fault domains such that you
should be spreading your applications across zones. I can see how zones
could be used differently but it does feel like it would be going against
the grain.
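If you do want to pin placement anyway, zone directives exist (the zone name
is a placeholder):
juju add-machine zone=us-east-1a
juju deploy mysql --to zone=us-east-1a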
John
=:->
On Mar 15, 2017 13:01, "Menno Smits" wrote:
Hi Chris
…"0/lxd/3", host machine has spaces:
> "management",
> "private"), retrying in 10s (3 more attempts)'
>
> This is blocking everything for me.
>
> Patrizio
>
>
>
> 2017-03-10 14:35 GMT+01:00 John Meinel :
>
>> The Addres
> My snap is pending review. In the meantime you can try with:
>
> https://launchpad.net/~ivoks/+archive/ubuntu/ppa
>
> On Thu, Mar 9, 2017 at 6:38 PM John Meinel wrote:
>
>> We should as soon as I have it landed in the 2.1 branch and CI starts to
>> run we can use it
We should. As soon as I have it landed in the 2.1 branch and CI starts to
run, we can use its deb.
John
=:->
On Mar 9, 2017 09:48, "Patrizio Bassi" wrote:
> Fantastic job John!
>
> do you have a .deb i can already test on my environment?
>
> Patrizio
>
>
straten :
>
>> Where do we find which bindings a charm has so they can be specified
>> directly?
>> According to the docs on the metadata (https://jujucharms.com/docs/s
>> table/authors-charm-metadata) there's a section called extra-bindings
>> but that only s
In the meantime, you can work around it by specifying the bindings
directly: so in the case of mysql that would be:
juju deploy mysql --bind "db=db-space monitors=db-space ha=db-space ..."
John
=:->
On Thu, Mar 9, 2017 at 7:25 AM, John Meinel wrote:
> "juju deploy mysq
…result in a provisioning failure."
>>>
>>> This is exactly my case: a machine with 2 eth ports, two different
>>> subnets in two different spaces.
>>>
>>> the doc says i may do (c/p): $ juju deploy mysql --bind db-space
>>>
>>