Re: [Proposal] Storage Filesystem as a First Class Feature

2024-07-05 Thread Rohit Yadav
Proposed design doc largely LGTM.

I have some additional suggestions and feedback to make the requirements and 
the first-phase implementation clearer and simpler:


  * +1 on implementing it in a hypervisor- and storage-agnostic manner.

  * Let's have the FS VMs owned by the caller (account or project), not 
treated as a system-owned appliance. It would then be just like CKS in that 
sense. There is nothing special about the feature that users couldn't do 
themselves; it's really for (a) users who want the benefit of shared storage 
but don't want to set it up themselves, and (b) orchestrating such a feature 
via API/SDKs/automation. Advanced users who want deep customisation may 
prefer not to use it.

  * To keep the first phase simple, let's drop support for metrics/usage of 
the FS VM and any other lifecycle operation that would need an agent, or need 
the management servers to SSH into or otherwise manage the FS VM at all. The 
scope can then be limited to:

    * Orchestrate the initial FS VM setup via user-data (config drive or VR 
depending on the network; cloud-init can orchestrate the NFS exports). The FS 
VM's NFS service can also listen on all NICs/IPs, which would make the FS 
capability work out of the box if somebody wants to attach the FS to other 
networks later (beyond the one it was initially created on).

    * Keep it simple: since no agent or mgmt-server access is needed or 
required, any change to the FS properties or lifecycle can be done by an FS 
VM reboot or recreation, as the FS VM is stateless and a separate data disk 
holds the file share storage. For such operations, the UI can clearly show a 
warning or note that the operation would cause downtime due to the 
reboot/recreate lifecycle operation of the FS VM.

    * Suggestions for the lifecycle operations:

      * (list & update APIs are a given; they should support pagination and 
listing by name/keyword, network, account/domain/project, etc.)

      * Create FS: initial user-data-based FS VM setup (during initial setup, 
the disk can be checked/formatted and mounted with fstab rules)

      * Recreate/restart FS: destroy & recreate the FS VM, attaching the data 
disk before starting the VM (setup can check and initialise the disk if 
needed, and grow/expand the filesystem if the underlying volume was resized).

      * Attach/detach FS (to/from network): simply CloudStack nic/network 
attach/detach (worth checking whether cloud-init or something in the systemvm 
template automatically takes care of nic setup in the FS VM)

      * Expand FS size: this could simply be a UI-based proxy to resizing the 
data disk, but resizing would require recreating (or rebooting) the FS VM for 
it to grow the FS (given the lack of agent or SSH access, this may be 
acceptable in the first phase)

      * Delete FS: delete the FS with or without expunging the data disk, and 
allow users to recover a non-expunged FS (similar to VMs)

    * FSM states: the FS should have states that correspond to the FS VM 
running state and the state of the underlying data disk

    * Misc: ensure the FS VM is HA-enabled; worth either assuming a default 
compute offering or allowing the caller to specify a compute offering for the 
FS VM.

    * Network support: all networks except L2 or networks that lack userdata 
& DHCP capabilities

    * Hypervisor & storage support: agnostic
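To make the agentless flow above concrete, here is a rough sketch of what a
first-boot user-data payload for the FS VM could generate. The device name
(/dev/vdb), mount point, and export options are illustrative assumptions, not
part of the proposal; the sketch prints the entries it would install, with
the actual apply steps shown as comments:

```shell
#!/bin/sh
# Hypothetical first-boot setup for the FS VM. Device name, mount point
# and export options are assumptions for illustration only.
DATA_DEV=/dev/vdb
EXPORT_PATH=/export/fs

# Entries the setup would install; printed so the sketch is inspectable.
FSTAB_LINE="$DATA_DEV $EXPORT_PATH ext4 defaults,nofail 0 2"
EXPORTS_LINE="$EXPORT_PATH *(rw,sync,no_subtree_check)"
echo "$FSTAB_LINE"
echo "$EXPORTS_LINE"

# On the real VM, cloud-init would run something like:
#   blkid "$DATA_DEV" >/dev/null 2>&1 || mkfs.ext4 "$DATA_DEV"  # format only once
#   echo "$FSTAB_LINE" >> /etc/fstab && mount -a
#   echo "$EXPORTS_LINE" > /etc/exports
#   exportfs -ra && systemctl enable --now nfs-server
# nfsd binds all NICs/IPs by default, so a later nic-attach to another
# network needs no reconfiguration inside the FS VM.
```

Because blkid only succeeds once a filesystem exists, the format step stays
idempotent across the reboot/recreate lifecycle described above.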

*FS = File Shares (suggested name)
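The Expand-FS flow (resize the data disk, then reboot/recreate so the
filesystem grows) could be a boot-time hook along these lines; the paths and
the ext4/xfs choice are assumptions for illustration:

```shell
#!/bin/sh
# Hypothetical boot-time grow hook for the FS VM: after CloudStack resizes
# the data volume, a reboot/recreate runs this and the filesystem catches up.
# Device and mount point are illustrative assumptions.
grow_cmd() {
  case "$1" in   # $1 = filesystem type, e.g. from: blkid -o value -s TYPE /dev/vdb
    ext4) echo "resize2fs /dev/vdb" ;;
    xfs)  echo "xfs_growfs /export/fs" ;;  # xfs grows via the mount point
    *)    echo "unsupported: $1" ;;
  esac
}
grow_cmd ext4
grow_cmd xfs
# A real hook would execute the printed command for the detected type.
```

Running this on every boot is what makes "resize then reboot" sufficient
without any agent or SSH access from the management server.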


Regards.

 



From: Alex Mattioli 
Sent: Wednesday, June 19, 2024 15:13
To: dev@cloudstack.apache.org 
Cc: us...@cloudstack.apache.org 
Subject: RE: [Proposal] Storage Filesystem as a First Class Feature

+1 on that,  keeping it hypervisor agnostic is key.




-----Original Message-----
From: Nux 
Sent: Wednesday, June 19, 2024 10:14 AM
To: dev@cloudstack.apache.org
Cc: us...@cloudstack.apache.org
Subject: Re: [Proposal] Storage Filesystem as a First Class Feature

Thanks Piotr,

This is the second time virtio-fs has been mentioned and I just researched it 
a bit; it looks like something really nice to have in CloudStack, definitely 
something to look at in the future.

Nice as it is though, it has a big drawback: it's KVM-only, so for now we'll 
stick to "old school" tech that can be used in an agnostic manner.

You are more than welcome to share thoughts on the other details presented, 
perhaps pros/cons on filesystems and other gotchas you may have encountered 
yourself.

On 2024-06-19 07:04, Piotr Pisz wrote:
> Hi,
> We considered a similar problem in our company.
> Shared storage is needed between VMs running on different networks.
> NFS/CephFS is ok as long as the VM can see the source.
> The best solution would be to use https://virtio-fs.gitlab.io/ Any FS
> would be used on the host side (e.g. NFS or CephFS) and exported to
> the VM natively (the network problem disappears).
> But you should start by introducing an appropriate mechanism on the CS
> side (similar in operation to Manila Share from Openstack).
>  So, the initiative itself i

[ANNOUNCE] Apache CloudStack LTS Security Releases 4.18.2.1 and 4.19.0.2

2024-07-05 Thread Abhishek Kumar
The Apache CloudStack project announces the release of LTS security releases
4.18.2.1 and 4.19.0.2, which address CVE-2024-38346 and CVE-2024-39864,
both of severity rating 'important', explained below.

# CVE-2024-38346: Unauthenticated cluster service port leads to remote execution

The CloudStack cluster service runs on an unauthenticated port (default 9090)
that can be misused to run arbitrary commands on targeted hypervisors and
CloudStack management server hosts. Some of these commands were found to have
command injection vulnerabilities that can result in arbitrary code execution
via agents on the hosts, which may run as a privileged user. An attacker that
can reach the cluster service on the unauthenticated port can exploit this to
perform remote code execution on CloudStack-managed hosts, resulting in
complete compromise of the confidentiality, integrity, and availability of the
CloudStack-managed infrastructure.

# CVE-2024-39864: Integration API service uses dynamic port when disabled

The CloudStack integration API service allows running its unauthenticated API
server (usually on port 8096, when configured and enabled via the
integration.api.port global setting) for internal portal integrations and for
testing purposes. By default the integration API service is disabled, and it
is considered disabled when integration.api.port is set to 0 or a negative
value. Due to improper initialisation logic, the integration API service would
listen on a random port when its port value is set to 0 (the default value).
An attacker that can access the CloudStack management network could scan for
the randomised integration API service port and exploit it to perform
unauthorised administrative actions and remote code execution on
CloudStack-managed hosts, resulting in complete compromise of the
confidentiality, integrity, and availability of the CloudStack-managed
infrastructure.
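The intended semantics can be illustrated with a small sketch: a port value of
0 or negative must mean "disabled", never "pick a random port". The helper
name below is ours for illustration, not CloudStack code:

```shell
#!/bin/sh
# Illustration of the fixed semantics for integration.api.port: the service
# is only enabled for a positive numeric value; 0 or a negative value means
# disabled and must never fall back to a random listener.
integration_api_state() {
  case "$1" in
    ''|*[!0-9]*) echo disabled ;;          # empty or non-numeric (e.g. "-1")
    0)           echo disabled ;;          # explicit 0 (the default)
    *)           echo "enabled on port $1" ;;
  esac
}
integration_api_state 8096
integration_api_state 0
# On a patched management server, a quick sanity check of listeners:
#   ss -tlnp
```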

# Credits

Both CVEs are credited to the following reporters from the Apple Services
Engineering Security team:

- Adam Pond (finder)
- Terry Thibault (finder)
- Damon Smith (finder)

# Affected Versions

- Apache CloudStack 4.0.0 through 4.18.2.0
- Apache CloudStack 4.19.0.0 through 4.19.0.1

# Resolution

Users are recommended to upgrade to version 4.18.2.1, 4.19.0.2 or later, which
addresses these issues.

Additionally, users are recommended to take the following actions:

- Restrict network access to the cluster service port (default 9090) on a
CloudStack management server host to only its peer CloudStack management
server hosts.
- Restrict network access on the CloudStack management server hosts to only
essential ports.
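As one possible way to apply the first recommendation, the sketch below
prints iptables rules that allow the cluster service port only from peer
management servers. The peer IPs are placeholders, and nftables or provider
security groups would work equally well:

```shell
#!/bin/sh
# Sketch only: print host-firewall rules that restrict the cluster service
# port (default 9090) to the listed peer management servers. An operator
# would review and run the printed commands as root; IPs are placeholders.
cluster_rules() {
  for peer in "$@"; do
    echo "iptables -A INPUT -p tcp --dport 9090 -s $peer -j ACCEPT"
  done
  # Everything else is dropped after the per-peer accepts.
  echo "iptables -A INPUT -p tcp --dport 9090 -j DROP"
}
cluster_rules 10.1.1.11 10.1.1.12
```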

# Downloads and Documentation

The official source code for the 4.18.2.1 and 4.19.0.2 releases can be
downloaded from the project downloads page:
https://cloudstack.apache.org/downloads

The 4.18.2.1 and 4.19.0.2 release notes can be found at:
https://docs.cloudstack.apache.org/en/4.18.2.1/releasenotes/about.html
https://docs.cloudstack.apache.org/en/4.19.0.2/releasenotes/about.html

In addition to the official source code release, individual contributors
have also made release packages available on the Apache CloudStack
download page, and available at:

https://download.cloudstack.org/el/7/
https://download.cloudstack.org/el/8/
https://download.cloudstack.org/el/9/
https://download.cloudstack.org/suse/15/
https://download.cloudstack.org/ubuntu/dists/
https://www.shapeblue.com/cloudstack-packages/


Re: [Proposal] Storage Filesystem as a First Class Feature

2024-07-05 Thread Nux

Rohit,

Your reply LGTM, a few more lines from me:
- initially the export is NFS, as that's how users/VMs will consume it; just 
clarifying, since I know we want to keep this somewhat agnostic
- while there is no agent running there, as you noted, most of the stuff can 
be configured via userdata, udev rules, or a combination of both
- in terms of monitoring, we could enable snmpd on the appliance and/or a 
Prometheus node exporter
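One possible shape for that agentless monitoring idea: bake the daemons into
the appliance template at build time, so the management server never needs
access at runtime. The unit names below are distro-dependent assumptions:

```shell
#!/bin/sh
# Sketch: enable monitoring daemons at template-build time so no agent or
# management-server access is needed later. Unit names are assumptions
# (e.g. Debian packages the exporter as "prometheus-node-exporter").
enable_monitoring() {
  for unit in "$@"; do
    echo "systemctl enable --now $unit"
  done
}
enable_monitoring snmpd prometheus-node-exporter
```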





Re: [PR] ci: update terraform, opentofu and cloudstack versions [cloudstack-terraform-provider]

2024-07-05 Thread via GitHub


fabiomatavelli commented on PR #130:
URL: 
https://github.com/apache/cloudstack-terraform-provider/pull/130#issuecomment-2211032473

   hey @vishesh92 @kiranchavala can we get this one reviewed and merged if 
approved please 🙏 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@cloudstack.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org