btzq commented on issue #10479:
URL: https://github.com/apache/cloudstack/issues/10479#issuecomment-2691190996

   @wangwei0537 
   
   I think what you are trying to ask is whether it is possible to create and use a CloudStack VPC, but substitute the system VMs (such as the Virtual Router) with other VNFs or hardware appliances, while still retaining all the native functionality of a CloudStack VPC?
   
   If this is the case, the answer is no. 
   
   I understand where you are coming from: there are some edge cases with CloudStack VPCs, but based on my observations in this community, most of the issues come down to misconfiguration or a lack of knowledge, because there is a learning curve to setting up, configuring and scaling CloudStack as a cloud operator. At the same time, I understand if you have found bugs; I have encountered my fair share too.
   
   CloudStack VPCs may give the impression of a poor design because the Virtual Router takes on many functions (routing, DHCP, load balancing, static NAT, etc.) all in one virtual machine. That makes it difficult to scale, and the Virtual Router is not optimized at the moment, which means you hit a performance/bandwidth ceiling at a certain throughput. If I remember correctly, it is around 2.5-5 Gbps total throughput, but I might be remembering that incorrectly.
   
   There is also a lack of more advanced networking options. The most advanced would be the support for Open vSwitch (which I don't see many people using at the moment, or maybe I just can't see it). There is no support for more advanced networking such as macvtap, or static routes for VPCs (currently static routes are only possible for Private Gateways). I reported that issue here last year, and it is currently scheduled for 4.21: https://github.com/apache/cloudstack/issues/9791
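   For context, the static routes that do exist today are scoped to a VPC's Private Gateway rather than the VPC itself. A rough sketch of what that looks like with the CloudMonkey (cmk) CLI, assuming a Private Gateway already exists (the UUID below is a placeholder):
   
   ```shell
   # Illustrative only: CloudStack's createStaticRoute API is tied to a
   # VPC Private Gateway, not to the VPC as a whole. Replace the placeholder
   # with the UUID of an existing Private Gateway in your deployment.
   cmk create staticroute gatewayid=<private-gateway-uuid> cidr=10.20.0.0/24
   ```
   
   Routes toward anything other than a Private Gateway are what the linked issue is asking for.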
   
   Bottom line: yes, I think there is a lot of room for improvement, and the community knows this, which is why the CloudStack community tried to integrate Tungsten Fabric last year. I believe the first phase of integration was done, but unfortunately the Tungsten Fabric project was shut down. Some people have revived it under OpenSDN, but because that project is still shaky at the moment, the community seems to have stopped work on it.
   
   If that had succeeded, I think you would have been able to achieve features very similar to OpenStack's in terms of advanced networking options. But of course, the trade-off is that you would need very familiar and knowledgeable people to operate and configure it.
   
   However, I still think CloudStack is easier (and cheaper) to set up than OpenStack. OpenStack has far too many components and microservices, and setting it up requires a really experienced team that knows infrastructure, networking, software development, etc. To me, OpenStack is a collection of microservices and multiple projects that cloud operators need to stitch together manually to make it work. From a cost perspective, it is also costlier to set up (CapEx) and maintain (OpEx). CloudStack still has its use cases and can do a lot of things; you just have to know what you want to achieve.
   
   By the way, your scenario where you turned off the server and the service hangs does not sound normal.
   
   - Are you using a hyperconverged or disaggregated architecture?
   - Which server did you restart exactly? Management server? Compute? Storage?
   - What error are you receiving?
   
   In my own use of CloudStack, I have not encountered the type of 'hang' I think you are talking about. Can you share more info? Could it be something else in the architecture or setup that is wrong instead? Let's see.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.