We have four servers and one SAS data store for the OpenStack deployment.
All servers have SAS interfaces. We are going to deploy one controller node
and three compute nodes, and we need to run the system with live migration
enabled. So the option is to mount the SAS partition on the controller and share it
wit
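One common way to get shared instance storage for live migration is to export the SAS-backed directory over NFS from the controller and mount it on every compute node. A rough sketch, under assumed names (the /var/lib/nova/instances path, the 192.168.0.0/24 subnet, and CONTROLLER_IP are placeholders, not details from this thread):

```shell
# On the controller: export the SAS-backed instances directory
# (assumes the SAS partition is already mounted at /var/lib/nova/instances).
echo "/var/lib/nova/instances 192.168.0.0/24(rw,sync,no_root_squash)" >> /etc/exports
exportfs -ra

# On each compute node: mount the share at the same path,
# so nova sees identical instance directories everywhere.
mount -t nfs CONTROLLER_IP:/var/lib/nova/instances /var/lib/nova/instances
```

With all nodes seeing the same instances directory, nova's live migration can move a guest without copying its disk.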
On Wed, Sep 17, 2014 at 10:40 PM, foss geek wrote:
> 2014-09-18 03:21:56.903 | ++ neutron net-create --tenant-id
> db7fd8d208db41ea8622be52b520d44e private
> 2014-09-18 03:21:56.904 | ++ grep ' id '
> 2014-09-18 03:21:56.905 | ++ get_field 2
> 2014-09-18 03:21:56.905 | ++ read data
> 2014-09-18 0
Dear All,
I am getting the below error while deploying all-in-one OpenStack using the
Devstack Icehouse stable version.
2014-09-18 03:21:56.897 | + TENANT_ID=db7fd8d208db41ea8622be52b520d44e
2014-09-18 03:21:56.897 | + die_if_not_set 374 TENANT_ID 'Failure
retrieving TENANT_ID for demo'
2014-09-18 03:21
Dear George
Thank you for the reply.
I'm a little confused by your reply.
Can the same tag number be assigned to different tenants? For example, I
assume a situation where subnet a is assigned tag number 1 and belongs
to tenant A, and subnet b is also assigned tag number 1 and belongs
The internal VLAN ID is indeed limited to 4096, but this internal tag number is
used to isolate different Neutron subnets, not tenants.
A tenant could create 10 Neutron networks, each with its own subnet, and then
start 10 instances, each attached to a separate net/subnet. If these instances
would
Hello.
I have a question about VXLAN support in OpenStack.
As far as I know, OVS operates as follows:
1. A tag number is created to identify each tenant, and it is used between
br-int and br-tun. Furthermore, the tag number appears as a VLAN ID (I
checked it via tcpdump).
2. Af
Hi all,
I have added another compute node to our OpenStack installation. New
instances are created successfully on it, and networking
via Neutron and GRE works as it should - I have copied the configuration
from the working compute nodes (except for changing some IP addresses to the
swift-init all status
swift-init may give you at least the current status of the running services.
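For reference, swift-init takes a server name and a command; a few common invocations (sketched from memory of the swift-init CLI, so double-check against your installed version):

```shell
swift-init all status        # status of every configured Swift service
swift-init proxy status      # status of the proxy server only
swift-init object restart    # restart the object server
swift-init main start        # start the proxy, object, container and account servers
```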
On Tue, Sep 16, 2014 at 12:54 PM, Pete Zaitcev wrote:
> On Tue, 9 Sep 2014 15:36:03 +0530
> Ashish Chandra wrote:
>
> > 1) Do we have plans to include "swift service-list" in swiftclient ?
> > If
On Sep 17, 2014, at 10:51 AM, foss geek <thefossg...@gmail.com> wrote:
Thanks for your time!!!
Here is the local.conf file which I am planning to use.
Does this sound good?
$ cat local.conf
Looks good to me.
Regards,
Pritesh
I have a globally distributed Swift infrastructure with many nodes in
different zones across my whole country. In order to replicate a/c/o,
data travels through the Internet so replicas reach their place.
Replicas are copied between storage nodes, and Swift presumes all storage
nodes are running in a s
Hi Pritesh,
Thanks for your time!!!
Here is the local.conf file which I am planning to use.
Does this sound good?
$ cat local.conf
[[local|localrc]]
# Enable Logging
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs
# Reclone each time
RECLONE=yes
On Sep 17, 2014, at 10:10 AM, foss geek <thefossg...@gmail.com> wrote:
Hi Pritesh,
Thanks for your time.
I have installed Nexus 1000v (Virtual Switch) software in VMWare.
I am able to ssh from Devstack box to VSM box using admin@ with
password 'admin'
Can you help me to understand th
Hi Pritesh,
Thanks for your time.
I have installed Nexus 1000v (Virtual Switch) software in VMWare.
I am able to ssh from Devstack box to VSM box using admin@ with
password 'admin'
Can you help me to understand the below variables?
Q_PLUGIN=cisco
Q_CISCO_PLUGIN_DEVSTACK_VSM=False
Q_CISCO_PLUGI
OK. In a sunny-day scenario, with my 4-replica configuration and a
region 1 write affinity, I do see the 3rd copy in the r1 handoff location
deleted upon async replication to zone 1 and zone 2 within region 2.
In a rainy-day scenario with no region 2 storage servers available upon new
file u
I'd like to implement the APIs linked below:
RAX-KSKEY admin extension
http://developer.openstack.org/api-ref-identity-v2.html#rax-kskey-admin-ext
Any clue would be appreciated.
Hugo
2014-09-17 17:41 GMT+08:00 Kuo Hugo :
> Hi folks,
>
> I'm doing an integration test for a third-party application.
Here is a sample localrc file to enable it in devstack:
http://cisco-neutron-ci.cisco.com/logs/n1kv_neutron/3766/localrc.txt
Regards,
Pritesh
On Sep 17, 2014, at 9:11 AM, foss geek wrote:
> Dear All,
>
> Is there any document to enable Neutron N1KV Core Plugin in Devstack? I am
> searching
The object in the handoff location should get removed once it is successfully
copied to the primary locations. Check the object-replicator logs for errors
like “Error syncing handoff partition”.
Gerry.
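To illustrate the check, the grep below runs against a scratch file with a fabricated sample line; on a real storage node you would point it at your replicator log instead (the path /var/log/swift/object-replicator.log is an assumption, as many setups log to syslog):

```shell
# Write a sample line to a scratch file to demonstrate the grep pattern;
# on a real node, grep your actual object-replicator log instead.
printf 'object-replicator: Error syncing handoff partition\n' > /tmp/object-replicator.log
grep "Error syncing handoff partition" /tmp/object-replicator.log
```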
From: Brent Troge [mailto:brenttroge2...@gmail.com]
Sent: 17 September 2014 16:48
To: John Dickinson
Dear All,
Is there any document on enabling the Neutron N1KV core plugin in Devstack? I am
searching for the local.conf syntax to enable the Neutron N1KV core plugin in
Devstack Icehouse. Any help?
Thanks
Hi Claudio.
I was having this issue too, though maybe not as often as you. I believe I
finally fixed it by increasing "agent_down_time"
in /etc/neutron/neutron.conf. The default is 75 and I increased it to 120. The
following are a few things you can check to determine if your problem is
the same as mine
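The change described above would look like this in /etc/neutron/neutron.conf (120 is the value that worked for the poster; tune it for your environment):

```ini
[DEFAULT]
# Seconds before an agent with no heartbeat is considered dead (default: 75)
agent_down_time = 120
```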
Hi,
It could be due to:
the SSH server not being up and running in your instance,
or SSH running on a different port rather than port 22,
or SSH access being restricted by your OpenStack security group configuration.
You could also try telnet to check the connectivity:
$ telnet <instance-ip> 22
Thanks,
Sajith
On Wed,
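If telnet is not installed, a quick TCP reachability check can be done with bash alone; 127.0.0.1 below is a stand-in for the instance's floating IP:

```shell
# Probe a TCP port using bash's /dev/tcp pseudo-device.
# Replace HOST with the instance's floating IP.
HOST=127.0.0.1
PORT=22
if timeout 3 bash -c "exec 3<>/dev/tcp/$HOST/$PORT" 2>/dev/null; then
  echo "port $PORT open on $HOST"
else
  echo "port $PORT closed or filtered on $HOST"
fi
```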
Check your security group rules. Allow ingress port 22.
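For reference, with the nova CLI of that era the rule could be added like this; the "default" security group name is an assumption (use whichever group the instance was launched with), and neutron security-group-rule-create works as well:

```shell
# Allow inbound SSH (TCP 22) from anywhere to instances in the "default" group.
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# Optionally allow ICMP so ping keeps working:
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
```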
On Sep 17, 2014 8:45 PM, "Srinivasreddy R"
wrote:
> Hi,
> I am able to ping my instance from the external network,
> but not able to SSH to the instance.
> I am using floating IPs for ping and SSH.
> Please help me.
>
> thanks,
> srinivas.
>
>
If an object write is sent to a handoff location, is that object deleted
from the handoff location once the primary write location is written?
In my testing, I forced a new object write to handoff locations. Once I
brought the primary locations back online, the object was then written to
all
I have faced stranger things with cirros. I would see DHCP lease messages
via tcpdump exactly on the instance's port, but cirros would still keep sending
discover messages.
On Sep 17, 2014 8:24 PM, "Claudio Pupparo"
wrote:
> Hi,
>
> I have the common issue of instances not getting their ip.
> When I st
Hi,
What’s the output of running ssh with the verbose (-v) flag?
BR,
Zoltan
From: Srinivasreddy R [mailto:srinivasreddy4...@gmail.com]
Sent: Wednesday, September 17, 2014 5:16 PM
To: openstack@lists.openstack.org
Subject: [Openstack] able to ping but not able to ssh to instance
hi,
i am able to
Hi,
I am able to ping my instance from the external network,
but not able to SSH to the instance.
I am using floating IPs for ping and SSH.
Please help me.
Thanks,
srinivas.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Hi,
I have the common issue of instances not getting their ip.
When I start a cirros instance, the output looks like this:
udhcpc (v1.20.1) started
Sending discover...
Sending discover...
Sending discover...
No lease, failing
I've found that during the boot process (while the instance is trying
Because we’re getting into future development plans, I’ve replied to this in
detail in a new thread on the -dev mailing list. Let’s continue the discussion
over there to ensure that the rest of the development team can be involved.
http://lists.openstack.org/pipermail/openstack-dev/2014-Septembe
Hi Li
On 17/09/14 11:58, Li Ma wrote:
>> The scale potential is very appealing and is something I want to
>> test - - hopefully in the next month or so.
>>
>> Canonical are interested in helping to maintain this driver and
>> hopefully we can help with any
On 2014/9/13 14:32, James Page wrote:
The scale potential is very appealing and is something I want to test
- - hopefully in the next month or so.
Canonical are interested in helping to maintain this driver, and
hopefully we can help with any critical issues prior to the Juno release.
That sounds good. I ju
Hi,
Thanks a lot for your response.
Actually, my instances can communicate with each other, but I am not able to
ping/ssh my instance from the external network, i.e., I am not able to access
my instance from the network/compute node.
This may be because the state of my router gateway is ACTIVE yet DOWN.
Th
Hi folks,
I'm doing an integration test for a third-party application. This
application provides Rackspace Cloud Files backend support. I'd like to make
it work with OpenStack Swift.
The first challenge is to enable RAX-KSKEY extension in Keystone Havana to
apply the auth method RAX-KSKEY:apikeycred
here is the message I get in error_log
[Wed Sep 17 07:31:52 2014] [error] Login successful for user "admin".
[Wed Sep 17 07:31:53 2014] [error] Internal Server Error: /dashboard/admin/
[Wed Sep 17 07:31:53 2014] [error] Traceback (most recent call last):
[Wed Sep 17 07:31:53 2014] [error] File