Please disregard this message and see the next mail instead.
Thank you.

On 20.04.2020 03:23, Volodymyr Litovka wrote:
Dear colleagues,

it seems I don't fully understand the OpenStack integration procedure for LINSTOR. My configuration is the following:

- there are three storage nodes (stor1, stor2 and stor3), each running both linstor-controller (only one is active, controlled by Pacemaker) and linstor-satellite
  * note hostname 'stor', which is the VIP hostname for the storage nodes, backed by Pacemaker
- there are three controllers (ctrl1, ctrl2 and ctrl3) where Cinder's controller part (cinder-api, cinder-scheduler) is installed
  * note hostname 'controller', which is the VIP hostname for the controller nodes, backed by Pacemaker

LINSTOR is configured and, when driven manually, works as expected, allowing volumes to be created and used:

root@stor1:~# linstor sp l
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool ┊ Node  ┊ Driver   ┊ PoolName ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊
╞══════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ drbdpool    ┊ stor1 ┊ LVM_THIN ┊ sds/thin ┊       93 GiB ┊        93 GiB ┊ True         ┊ Ok    ┊
┊ drbdpool    ┊ stor2 ┊ LVM_THIN ┊ sds/thin ┊       93 GiB ┊        93 GiB ┊ True         ┊ Ok    ┊
┊ drbdpool    ┊ stor3 ┊ LVM_THIN ┊ sds/thin ┊       93 GiB ┊        93 GiB ┊ True         ┊ Ok    ┊
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────╯

The volume type 'linstor' is registered in the Cinder service and set as the default in Cinder's configuration:

# openstack volume type show linstor
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| access_project_ids | None                                 |
| description        | None                                 |
| id                 | d2025962-503a-4f37-93bd-b766bb346a42 |
| is_public          | True                                 |
| name               | linstor                              |
| properties         | volume_backend_name='linstor'        |
| qos_specs_id       | None                                 |
+--------------------+--------------------------------------+

The Cinder controller (e.g.
ctrl1) is configured in the following way:

[DEFAULT]
verbose = true
debug = true
enabled_backends = linstor
my_ip = ctrl1
host = ctrl1
enable_v2_api = false
enable_v3_api = true
state_path = /var/lib/cinder
volumes_dir = /var/lib/cinder/volumes
lock_path = /var/lib/cinder/lock
auth_strategy = keystone
storage_availability_zone = nova
default_volume_type = linstor
cinder_internal_tenant_project_id = d54a8fef77e541668e259d7a1cd158e4
cinder_internal_tenant_user_id = 5dc3226f0f3e42e7aefed6963442fe17
max_over_subscription_ratio = auto
report_discard_supported = false
image_upload_use_cinder_backend = true
image_upload_use_internal_tenant = true
image_volume_cache_enabled = true
image_volume_cache_max_size_gb = 20
transport_url = rabbit://openstack:xxxxxxx@controller:5672/

[database]
connection = mysql+pymysql://cinder:xxxxxxx@controller/cinder
use_db_reconnect = true

[linstor]
storage_availability_zone = nova
volume_backend_name = linstor
volume_driver = cinder.volume.drivers.linstordrv.LinstorDrbdDriver
linstor_default_volume_group_name = DfltRscGrp
linstor_default_uri = linstor://stor
linstor_default_storage_pool_name = drbdpool
linstor_default_resource_size = 1
linstor_volume_downsize_factor = 4096

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = ctrl1:11211,ctrl2:11211,ctrl3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = xxxxxxxxx

Note: I didn't download the LINSTOR driver from the LINBIT OpenStack repo. The documentation says "The linstor driver will be officially available starting OpenStack Stein release", while I'm using the Train release and assume that the bundled driver is already OK.
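As a mechanical sanity check on the file above, the relevant stanzas can be parsed with Python's configparser; a minimal sketch, with the key fragment inlined (values are copied from the config shown in this mail, nothing is invented):

```python
# Minimal sketch: parse the cinder.conf fragment shown above and confirm
# which backend section Cinder is told to enable, and which backend name
# and driver that section declares. Purely illustrative.
import configparser

CINDER_CONF = """
[DEFAULT]
enabled_backends = linstor
default_volume_type = linstor

[linstor]
volume_backend_name = linstor
volume_driver = cinder.volume.drivers.linstordrv.LinstorDrbdDriver
linstor_default_uri = linstor://stor
linstor_default_storage_pool_name = drbdpool
"""

cfg = configparser.ConfigParser()
cfg.read_string(CINDER_CONF)

# enabled_backends must name a section that actually exists...
backend = cfg["DEFAULT"]["enabled_backends"]
assert cfg.has_section(backend)

# ...and that section's volume_backend_name is what the volume type's
# extra spec (volume_backend_name='linstor') is matched against.
print(cfg[backend]["volume_backend_name"])  # linstor
print(cfg[backend]["volume_driver"])
```

Of course, this only shows the stanzas are well-formed and consistent with the volume type; it says nothing about whether any service actually consumes the [linstor] section at runtime.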
Having this, when I try to create a volume using OpenStack (openstack volume create --size 5 --image cirros qqq), I get an error; the Cinder scheduler reports the following:

2020-04-20 02:26:15.631 11946 DEBUG cinder.scheduler.base_filter [] Starting with 0 host(s) get_filtered_objects /usr/lib/python3/dist-packages/cinder/scheduler/base_filter.py:95
2020-04-20 02:26:15.632 11946 DEBUG cinder.scheduler.base_filter [] Filter AvailabilityZoneFilter returned 0 host(s) get_filtered_objects /usr/lib/python3/dist-packages/cinder/scheduler/base_filter.py:125
2020-04-20 02:26:15.632 11946 DEBUG cinder.scheduler.base_filter [] Filter CapacityFilter returned 0 host(s) get_filtered_objects /usr/lib/python3/dist-packages/cinder/scheduler/base_filter.py:125
2020-04-20 02:26:15.633 11946 DEBUG cinder.scheduler.base_filter [] Filter CapabilitiesFilter returned 0 host(s) get_filtered_objects /usr/lib/python3/dist-packages/cinder/scheduler/base_filter.py:125
2020-04-20 02:26:15.633 11946 DEBUG cinder.scheduler.base_filter [] Filtering removed all hosts for the request with volume ID 'dceacdcc-2d3d-4ba0-b7ca-6b5d14aef2e9'. Filter results: [('AvailabilityZoneFilter', []), ('CapacityFilter', []), ('CapabilitiesFilter', [])] _log_filtration /usr/lib/python3/dist-packages/cinder/scheduler/base_filter.py:73
2020-04-20 02:26:15.634 11946 INFO cinder.scheduler.base_filter [] Filtering removed all hosts for the request with volume ID 'dceacdcc-2d3d-4ba0-b7ca-6b5d14aef2e9'.
Filter results: AvailabilityZoneFilter: (start: 0, end: 0), CapacityFilter: (start: 0, end: 0), CapabilitiesFilter: (start: 0, end: 0)
2020-04-20 02:26:15.634 11946 WARNING cinder.scheduler.filter_scheduler [] No weighed backend found for volume with properties: {'id': 'd2025962-503a-4f37-93bd-b766bb346a42', 'name': 'linstor', 'description': None, 'is_public': True, 'projects': [], 'extra_specs': {'volume_backend_name': 'linstor'}, 'qos_specs_id': None, 'created_at': '2020-04-16T21:42:17.000000', 'updated_at': None, 'deleted_at': None, 'deleted': False}

The questions are:

- it's not clear from the documentation whether it is required to install the cinder-volume service on the storage nodes, or whether it is enough to have only the controller part of Cinder (api and scheduler), which calls the LINSTOR controller using the config's "linstor_default_uri" parameter? (In fact, nothing prevents me from controlling LINSTOR storage from the controller nodes manually.)
- have I missed only this part of the configuration, or is something wrong with the config above?

Thank you.

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison
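P.S. For what it's worth, the "Starting with 0 host(s)" line suggests the filters never had any backend to examine at all. A toy sketch (not Cinder's actual code; function and dict names are illustrative) of how the CapabilitiesFilter step would match the type's extra specs against capabilities reported by cinder-volume backends:

```python
# Toy sketch (not Cinder's code): CapabilitiesFilter-style matching of a
# volume type's extra specs against capabilities that cinder-volume
# backends report to the scheduler. With zero reporting backends, every
# filter trivially returns 0 hosts, as in the log above.
def capabilities_match(extra_specs, capabilities):
    return all(capabilities.get(k) == v for k, v in extra_specs.items())

extra_specs = {"volume_backend_name": "linstor"}  # from the 'linstor' type

# No backend has reported in, so there is nothing to filter:
reporting_backends = []
survivors = [c for c in reporting_backends
             if capabilities_match(extra_specs, c)]
print(len(survivors))  # 0, matching "Starting with 0 host(s)"

# A backend reporting volume_backend_name='linstor' would survive:
assert capabilities_match(extra_specs,
                          {"volume_backend_name": "linstor",
                           "free_capacity_gb": 93})
```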
_______________________________________________
Star us on GITHUB: https://github.com/LINBIT
drbd-user mailing list
[email protected]
https://lists.linbit.com/mailman/listinfo/drbd-user
