In my experience the networking is by far the hardest part of the setup,
though the entire manual setup of OpenStack is mostly an exercise in
whether you can follow ~150 steps without making a typing error. I have
some bash scripts we use here to quickly set up a test stack on Intel NUC
/ Gigabyte BRIX systems. I can send those your way if you want.

Here's my nova.conf - you'll need to fill in values for about 25% of the
lines, and make sure 'controller' resolves to the management IP of the
controller node from the compute host.  Everything wrapped in ${} needs to
be replaced.  If you didn't follow the OpenStack installation guide for the
rest of the setup, you may need to change other values (ports/usernames).
If you're not using Neutron for networking, you'll need to change a lot.
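If it helps, the ${} placeholders can be filled in mechanically rather than by hand. A minimal sketch using sed - the variable names and values here are hypothetical examples matching the placeholders in the config below, and the values must not contain '|' or '&' for this simple approach to work:

```shell
# Hypothetical deployment values -- replace with your own.
RABBIT_PASSWORD='rabbitpass'
COMPUTE_NODE_IP='10.0.0.31'
CONTROLLER_PUBLIC_IP='203.0.113.10'
NEUTRON_KEYSTONE_PASSWORD='neutronpass'
NEUTRON_METADATA_SECRET='metasecret'
NOVA_DB_PASSWORD='novadbpass'
NOVA_KEYSTONE_PASSWORD='novapass'

fill_template() {
    # Replace each literal ${VAR} token on stdin with the shell value.
    sed -e "s|\${RABBIT_PASSWORD}|${RABBIT_PASSWORD}|g" \
        -e "s|\${COMPUTE_NODE_IP}|${COMPUTE_NODE_IP}|g" \
        -e "s|\${CONTROLLER_PUBLIC_IP}|${CONTROLLER_PUBLIC_IP}|g" \
        -e "s|\${NEUTRON_KEYSTONE_PASSWORD}|${NEUTRON_KEYSTONE_PASSWORD}|g" \
        -e "s|\${NEUTRON_METADATA_SECRET}|${NEUTRON_METADATA_SECRET}|g" \
        -e "s|\${NOVA_DB_PASSWORD}|${NOVA_DB_PASSWORD}|g" \
        -e "s|\${NOVA_KEYSTONE_PASSWORD}|${NOVA_KEYSTONE_PASSWORD}|g"
}

# Usage: save the config below as nova.conf.template, then:
#   fill_template < nova.conf.template > /etc/nova/nova.conf
```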

[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata
ec2_dmz_host=controller


# Use keystone for auth
auth_strategy=keystone

# Use RabbitMQ
#rpc_backend = nova.rpc.impl_kombu
rpc_backend = rabbit
rabbit_host = controller
rabbit_userid = guest
rabbit_password = ${RABBIT_PASSWORD}

# VNC Service setup
vnc_enabled=True
vncserver_listen=0.0.0.0

# The next 2 entries must have the management IP of this compute node
my_ip = ${COMPUTE_NODE_IP}
vncserver_proxyclient_address = ${COMPUTE_NODE_IP}

# The next 2 entries should point to the public IP of the VNC proxy (controller) node
novncproxy_base_url = http://${CONTROLLER_PUBLIC_IP}:6080/vnc_auto.html
xvpvncproxy_base_url = http://${CONTROLLER_PUBLIC_IP}:6081/console
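For context: the user's browser connects to the noVNC proxy on the controller's public IP, and the proxy in turn connects back to the compute node at vncserver_proxyclient_address, so the base URL must be reachable from wherever the browser runs. A quick reachability check from a client machine (hypothetical IP):

```
curl -sI http://203.0.113.10:6080/vnc_auto.html
```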

# Glance image storage
glance_host = controller

# Open vSwitch networking using Neutron
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://controller:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = ${NEUTRON_KEYSTONE_PASSWORD}
neutron_admin_auth_url = http://controller:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
service_neutron_metadata_proxy=True
neutron_metadata_proxy_shared_secret = ${NEUTRON_METADATA_SECRET}
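One gotcha: ${NEUTRON_METADATA_SECRET} has to match the shared secret configured for the metadata agent on the network node, or instance metadata requests will fail. The corresponding fragment (Icehouse-era option names; a sketch, not a complete agent config) would look something like:

```
# /etc/neutron/metadata_agent.ini on the network node
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = ${NEUTRON_METADATA_SECRET}
```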

# Ceilometer usage tracking
instance_usage_audit = True
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = nova.openstack.common.notifier.rpc_notifier
notification_driver = ceilometer.compute.nova_notifier


[database]
connection = mysql://nova:${NOVA_DB_PASSWORD}@controller/nova
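This assumes the nova database and DB user already exist on the controller. If you skipped that part of the setup guide, the usual step is something like:

```sql
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '${NOVA_DB_PASSWORD}';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '${NOVA_DB_PASSWORD}';
```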

[keystone_authtoken]
auth_uri = http://controller:5000
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = ${NOVA_KEYSTONE_PASSWORD}
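These credentials assume a 'nova' service user already exists in Keystone with the admin role on the service tenant. With the Icehouse-era keystone CLI that looks roughly like:

```
keystone user-create --name=nova --pass=${NOVA_KEYSTONE_PASSWORD}
keystone user-role-add --user=nova --tenant=service --role=admin
```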




On Thu, Jun 26, 2014 at 12:21 PM, O'Reilly, Dan <daniel.orei...@dish.com>
wrote:

> Yes, I’ve been looking at it.  But with the plethora of available
> settings, a working configuration would be incredibly useful.  <grin>
>
>
>
> *From:* Andrew Mann [mailto:and...@divvycloud.com]
> *Sent:* Thursday, June 26, 2014 11:20 AM
>
> *To:* O'Reilly, Dan
> *Cc:* openstack@lists.openstack.org
> *Subject:* Re: [Openstack] Architectural question
>
>
>
>
> http://docs.openstack.org/trunk/config-reference/content/section_compute-hypervisors.html
>  has configuration entries for each hypervisor type.
>
>
>
> On Thu, Jun 26, 2014 at 12:06 PM, O'Reilly, Dan <daniel.orei...@dish.com>
> wrote:
>
> OK.  So, next question is the specific configuration for each compute
> node.  Do you know if there are sample configurations (e.g., nova.conf)
> available for these?
>
>
>
> Thanks!
>
>
>
> *From:* Andrew Mann [mailto:and...@divvycloud.com]
> *Sent:* Thursday, June 26, 2014 10:22 AM
> *To:* O'Reilly, Dan
> *Cc:* openstack@lists.openstack.org
> *Subject:* Re: [Openstack] Architectural question
>
>
>
> Dan,
>
>
>
> If I understand your question properly, you can do this from the Horizon
> management web interface as an admin user. Under Admin->System Panel->Host
> Aggregates setup 3 aggregates each in their own availability zone, and then
> assign each host into a corresponding aggregate.
>
>
>
> I don't think the controller node needs any special configuration, and the
> compute nodes should just need the configuration to use the specific
> hypervisor you want on that host.  Each compute node must be running before
> it can be added to the host aggregate through the UI.
>
>
>
>
>
> On Thu, Jun 26, 2014 at 10:45 AM, O'Reilly, Dan <daniel.orei...@dish.com>
> wrote:
>
> I’m building a cloud and have a question about architecture/viability.
> Right now, I have the following configuration:
>
>
>
> - Controller node running RHEL 6.5
> - Network node running RHEL 6.5
> - 5-node Ceph cluster for block and object storage
> - 3 compute nodes:
>   - 1 running RHEL 6.5 and VMware as the hypervisor
>   - 1 running RHEL 6.5 and KVM as the hypervisor
>   - 1 running CentOS 6.5 and Xen as the hypervisor
>
>
>
> The basic question is on the compute nodes.  I’m doing 3 different
> hypervisors to do a best-of-breed study (each will be in its own
> availability zone).  Hence, one of each type.  But is it even possible to
> have 3 compute nodes like this, and if so, how do I configure the compute
> software that runs on the controller node to handle this; and how do I
> configure each of the 3 compute nodes as well?
>
>
>
> Dan O'Reilly
>
> UNIX Systems Administration
>
>
> 9601 S. Meridian Blvd.
>
> Englewood, CO 80112
>
> 720-514-6293
>
>
>
>
>
>
>
>
>
>
>
> --
>
> Andrew Mann
>
> DivvyCloud Inc.
>
> www.divvycloud.com
>
>
>
>
>
>



-- 
Andrew Mann
DivvyCloud Inc.
www.divvycloud.com
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
