Re: [pve-devel] [PATCH-SERIES v3 qemu-server/manager/common] add and set x86-64-v2 as default model for new vms and detect best cpumodel

2023-06-02 Thread Fiona Ebner
On 01.06.23 at 23:15, DERUMIER, Alexandre wrote:
> Hi,
> I found an interesting thread on the forum about kvm_pv_unhalt
> 
> https://forum.proxmox.com/threads/live-migration-between-intel-xeon-and-amd-epyc2-linux-guests.68663/
> 
> 
> Sounds good. Please also take a look at the default flag
> "kvm_pv_unhalt". As I mentioned, it would cause a kernel crash in
> paravirtualized unhalt code sooner or later in a migrated VM (started
> on Intel, migrated to AMD).
> 
> Please note that according to our tests simply leaving the CPU type
> empty in the GUI (leading to the qemu command line argument of -cpu
> kvm64,+sep,+lahf_lm,+kvm_pv_unhalt,+kvm_pv_eoi,enforce), while
> seemingly working at first, will after some load and idle time in the
> VM result in a crash involving kvm_kick_cpu function somewhere inside
> of the paravirtualized halt/unhalt code. Linux kernels tested ranged
> from Debian's 4.9.210-1 to Ubuntu's 5.3.0-46 (and some in between).
> Therefore the Proxmox default seems to be unsafe and apparently the
> very minimum working command line probably would be args: -cpu
> kvm64,+sep,+lahf_lm,+kvm_pv_eoi.
> 
> 
> 
> 
> So, it sounds like it crashes if the CPU is defined with a vendor not
> matching the real hardware?
> 
> As it breaks migration between Intel && AMD, maybe we shouldn't add
> it to the new x86-64-vx model?
> 
> 
> 
> a discussion on the qemu-devel mailing list talks about performance
> with/without it:
> https://lists.nongnu.org/archive/html/qemu-devel/2017-10/msg01816.html
> 
> and it seems to help when you have a lot of cores/NUMA nodes in the
> guest, but can slow down small VMs.
> 

Note that migration between CPUs of different vendors is not a supported
use case (it will always depend on specific models, kernel versions,
etc.), so we can only justify not adding it to the new default model if
it doesn't make life worse for everybody else.

And I'd be a bit careful to jump to general conclusions just from one
forum post.

It seems like you were the one adding the flag ;)

https://git.proxmox.com/?p=qemu-server.git;a=commitdiff;h=117a041466b3af8368506ae3ab7b8d26fc07d9b7

and the LWN-archived mail linked in the commit message says

> Ticket locks have an inherent problem in a virtualized case, because
> the vCPUs are scheduled rather than running concurrently (ignoring
> gang scheduled vCPUs).  This can result in catastrophic performance
> collapses when the vCPU scheduler doesn't schedule the correct "next"
> vCPU, and ends up scheduling a vCPU which burns its entire timeslice
> spinning.  (Note that this is not the same problem as lock-holder
> preemption, which this series also addresses; that's also a problem,
> but not catastrophic).

"catastrophic performance collapses" doesn't sound very promising :/

But if we find that
kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+sep,+lahf_lm,+popcnt,+sse4.1,+sse4.2,+ssse3
causes issues (even if not cross-vendor live-migrating) with the
+kvm_pv_unhalt flag, but not without, it would be a much more convincing
reason against adding the flag for the new default.


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v2 proxmox 3/5] apt: tests: create temporary test directories in CARGO_TARGET_TMPDIR

2023-06-02 Thread Fiona Ebner
Signed-off-by: Fiona Ebner 
---

New in v2.

 proxmox-apt/tests/repositories.rs | 17 +
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/proxmox-apt/tests/repositories.rs b/proxmox-apt/tests/repositories.rs
index 4b3c9de..6319a62 100644
--- a/proxmox-apt/tests/repositories.rs
+++ b/proxmox-apt/tests/repositories.rs
@@ -19,15 +19,11 @@ fn test_parse_write() -> Result<(), Error> {
 
 fn test_parse_write_dir(read_dir: &str) -> Result<(), Error> {
 let test_dir = std::env::current_dir()?.join("tests");
+let tmp_dir = PathBuf::from(env!("CARGO_TARGET_TMPDIR").to_string());
 let read_dir = test_dir.join(read_dir);
-let write_dir = test_dir.join("sources.list.d.actual");
+let write_dir = tmp_dir.join("sources.list.d.actual");
 let expected_dir = test_dir.join("sources.list.d.expected");
 
-if write_dir.is_dir() {
-std::fs::remove_dir_all(&write_dir)
-.map_err(|err| format_err!("unable to remove dir {:?} - {}", write_dir, err))?;
-}
-
 std::fs::create_dir_all(&write_dir)
.map_err(|err| format_err!("unable to create dir {:?} - {}", write_dir, err))?;
 
@@ -66,6 +62,7 @@ fn test_parse_write_dir(read_dir: &str) -> Result<(), Error> {
 expected_count += 1;
 
 let expected_path = entry?.path();
+
 let actual_path = write_dir.join(expected_path.file_name().unwrap());
 
 let expected_contents = std::fs::read(&expected_path)
@@ -91,13 +88,9 @@ fn test_parse_write_dir(read_dir: &str) -> Result<(), Error> {
 #[test]
 fn test_digest() -> Result<(), Error> {
 let test_dir = std::env::current_dir()?.join("tests");
+let tmp_dir = PathBuf::from(env!("CARGO_TARGET_TMPDIR").to_string());
 let read_dir = test_dir.join("sources.list.d");
-let write_dir = test_dir.join("sources.list.d.digest");
-
-if write_dir.is_dir() {
-std::fs::remove_dir_all(&write_dir)
-.map_err(|err| format_err!("unable to remove dir {:?} - {}", write_dir, err))?;
-}
+let write_dir = tmp_dir.join("sources.list.d.digest");
 
 std::fs::create_dir_all(&write_dir)
.map_err(|err| format_err!("unable to create dir {:?} - {}", write_dir, err))?;
-- 
2.39.2






[pve-devel] [PATCH v2 proxmox 2/5] apt: split Ceph main repository into no-subscription and enterprise

2023-06-02 Thread Fiona Ebner
The old 'main' component stays valid, pointing to no-subscription,
which means the is_referenced_repository() check needs a special case
for it. It will eventually go away, together with the handles for
Quincy.

Alternatively, the standard repository's info() could've been changed
to return multiple possible components, similar to URLs, but as
opposed to URLs, there could be a standard repository that wants to
have multiple components and it feels a bit unnatural, because
multiple components are usually not aliases of the same thing. And adapting
is_referenced_repository() would be needed here too. So overall, the
above alternative just felt better.
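
A self-contained sketch of the special case described above (simplified types and names, not the actual proxmox-apt API):

```rust
// Simplified sketch of the is_referenced_repository() special case from the
// patch below: for the Quincy no-subscription handle, the legacy 'main'
// component keeps matching. Standalone toy, not the real proxmox-apt types.
#[derive(PartialEq)]
enum Handle {
    CephQuincyEnterprise,
    CephQuincyNoSubscription,
}

fn component_matches(handle: &Handle, component: &str, components: &[String]) -> bool {
    if *handle == Handle::CephQuincyNoSubscription {
        // 'main' stays valid as an alias for no-subscription (Quincy only)
        components.iter().any(|c| c == component || c == "main")
    } else {
        components.iter().any(|c| c == component)
    }
}

fn main() {
    let legacy = vec!["main".to_string()];
    // A legacy 'main' entry still counts as a reference to no-subscription...
    assert!(component_matches(
        &Handle::CephQuincyNoSubscription,
        "no-subscription",
        &legacy
    ));
    // ...but not to enterprise.
    assert!(!component_matches(&Handle::CephQuincyEnterprise, "enterprise", &legacy));
    println!("ok");
}
```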

Signed-off-by: Fiona Ebner 
---

Changes in v2:
* mention that 'main' component now maps to no-subscription

 proxmox-apt/src/repositories/mod.rs|  3 +-
 proxmox-apt/src/repositories/repository.rs | 10 +-
 proxmox-apt/src/repositories/standard.rs   | 36 --
 proxmox-apt/tests/repositories.rs  |  3 +-
 4 files changed, 39 insertions(+), 13 deletions(-)

diff --git a/proxmox-apt/src/repositories/mod.rs b/proxmox-apt/src/repositories/mod.rs
index 11f52dd..b2e83a0 100644
--- a/proxmox-apt/src/repositories/mod.rs
+++ b/proxmox-apt/src/repositories/mod.rs
@@ -90,7 +90,8 @@ pub fn standard_repositories(
 
 if product == "pve" {
 result.append(&mut vec![
-APTStandardRepository::from(APTRepositoryHandle::CephQuincy),
+APTStandardRepository::from(APTRepositoryHandle::CephQuincyEnterprise),
+APTStandardRepository::from(APTRepositoryHandle::CephQuincyNoSubscription),
 APTStandardRepository::from(APTRepositoryHandle::CephQuincyTest),
 ]);
 }
diff --git a/proxmox-apt/src/repositories/repository.rs b/proxmox-apt/src/repositories/repository.rs
index a5e3015..ba4e8b1 100644
--- a/proxmox-apt/src/repositories/repository.rs
+++ b/proxmox-apt/src/repositories/repository.rs
@@ -285,11 +285,19 @@ impl APTRepository {
 found_uri = found_uri || handle_uris.iter().any(|handle_uri| handle_uri == uri);
 }
 
+// In the past it was main instead of enterprise/no-subscription, and main now maps to
+// no-subscription. Note this only applies for Quincy.
+let found_component = if handle == APTRepositoryHandle::CephQuincyNoSubscription {
+self.components.contains(&component) || self.components.contains(&"main".to_string())
+} else {
+self.components.contains(&component)
+};
+
 self.types.contains(&package_type)
 && found_uri
 // using contains would require a &String
 && self.suites.iter().any(|self_suite| self_suite == suite)
-&& self.components.contains(&component)
+&& found_component
 }
 
 /// Guess the origin from the repository's URIs.
diff --git a/proxmox-apt/src/repositories/standard.rs b/proxmox-apt/src/repositories/standard.rs
index 33c4842..2cae630 100644
--- a/proxmox-apt/src/repositories/standard.rs
+++ b/proxmox-apt/src/repositories/standard.rs
@@ -46,8 +46,10 @@ pub enum APTRepositoryHandle {
 NoSubscription,
 /// The test repository.
 Test,
-/// Ceph Quincy repository.
-CephQuincy,
+/// Ceph Quincy enterprise repository.
+CephQuincyEnterprise,
+/// Ceph Quincy no-subscription repository.
+CephQuincyNoSubscription,
 /// Ceph Quincy test repository.
 CephQuincyTest,
 }
@@ -71,7 +73,8 @@ impl TryFrom<&str> for APTRepositoryHandle {
 "enterprise" => Ok(APTRepositoryHandle::Enterprise),
 "no-subscription" => Ok(APTRepositoryHandle::NoSubscription),
 "test" => Ok(APTRepositoryHandle::Test),
-"ceph-quincy" => Ok(APTRepositoryHandle::CephQuincy),
+"ceph-quincy-enterprise" => Ok(APTRepositoryHandle::CephQuincyEnterprise),
+"ceph-quincy-no-subscription" => Ok(APTRepositoryHandle::CephQuincyNoSubscription),
 "ceph-quincy-test" => Ok(APTRepositoryHandle::CephQuincyTest),
 _ => bail!("unknown repository handle '{}'", string),
 }
@@ -84,7 +87,8 @@ impl Display for APTRepositoryHandle {
 APTRepositoryHandle::Enterprise => write!(f, "enterprise"),
 APTRepositoryHandle::NoSubscription => write!(f, "no-subscription"),
 APTRepositoryHandle::Test => write!(f, "test"),
-APTRepositoryHandle::CephQuincy => write!(f, "ceph-quincy"),
+APTRepositoryHandle::CephQuincyEnterprise => write!(f, "ceph-quincy-enterprise"),
+APTRepositoryHandle::CephQuincyNoSubscription => write!(f, "ceph-quincy-no-subscription"),
 APTRepositoryHandle::CephQuincyTest => write!(f, "ceph-quincy-test"),
 }
 }
@@ -107,8 +111,13 @@ impl APTRepositoryHandle {
 "This repository contains the latest packages and is primarily used for test labs \
 and by developers to test new features."

[pve-devel] [PATCH v2 proxmox 1/5] apt: drop older Ceph standard repositories

2023-06-02 Thread Fiona Ebner
On Proxmox VE 8, only Quincy and newer will be supported.

Signed-off-by: Fiona Ebner 
---

No changes in v2.

Changes for the series in v2:
* create temporary test directories inside CARGO_TARGET_TMPDIR
* mention that deprecated 'main' component maps to no-subscription
  in description of the repo.

 proxmox-apt/src/repositories/mod.rs  |  4 --
 proxmox-apt/src/repositories/standard.rs | 58 
 proxmox-apt/tests/repositories.rs|  4 --
 3 files changed, 66 deletions(-)

diff --git a/proxmox-apt/src/repositories/mod.rs b/proxmox-apt/src/repositories/mod.rs
index d8848b8..11f52dd 100644
--- a/proxmox-apt/src/repositories/mod.rs
+++ b/proxmox-apt/src/repositories/mod.rs
@@ -92,10 +92,6 @@ pub fn standard_repositories(
 result.append(&mut vec![
 APTStandardRepository::from(APTRepositoryHandle::CephQuincy),
 APTStandardRepository::from(APTRepositoryHandle::CephQuincyTest),
-APTStandardRepository::from(APTRepositoryHandle::CephPacific),
-APTStandardRepository::from(APTRepositoryHandle::CephPacificTest),
-APTStandardRepository::from(APTRepositoryHandle::CephOctopus),
-APTStandardRepository::from(APTRepositoryHandle::CephOctopusTest),
 ]);
 }
 
diff --git a/proxmox-apt/src/repositories/standard.rs b/proxmox-apt/src/repositories/standard.rs
index 5af3117..33c4842 100644
--- a/proxmox-apt/src/repositories/standard.rs
+++ b/proxmox-apt/src/repositories/standard.rs
@@ -50,14 +50,6 @@ pub enum APTRepositoryHandle {
 CephQuincy,
 /// Ceph Quincy test repository.
 CephQuincyTest,
-/// Ceph Pacific repository.
-CephPacific,
-/// Ceph Pacific test repository.
-CephPacificTest,
-/// Ceph Octoput repository.
-CephOctopus,
-/// Ceph Octoput test repository.
-CephOctopusTest,
 }
 
 impl From for APTStandardRepository {
@@ -81,10 +73,6 @@ impl TryFrom<&str> for APTRepositoryHandle {
 "test" => Ok(APTRepositoryHandle::Test),
 "ceph-quincy" => Ok(APTRepositoryHandle::CephQuincy),
 "ceph-quincy-test" => Ok(APTRepositoryHandle::CephQuincyTest),
-"ceph-pacific" => Ok(APTRepositoryHandle::CephPacific),
-"ceph-pacific-test" => Ok(APTRepositoryHandle::CephPacificTest),
-"ceph-octopus" => Ok(APTRepositoryHandle::CephOctopus),
-"ceph-octopus-test" => Ok(APTRepositoryHandle::CephOctopusTest),
 _ => bail!("unknown repository handle '{}'", string),
 }
 }
@@ -98,10 +86,6 @@ impl Display for APTRepositoryHandle {
 APTRepositoryHandle::Test => write!(f, "test"),
 APTRepositoryHandle::CephQuincy => write!(f, "ceph-quincy"),
 APTRepositoryHandle::CephQuincyTest => write!(f, "ceph-quincy-test"),
-APTRepositoryHandle::CephPacific => write!(f, "ceph-pacific"),
-APTRepositoryHandle::CephPacificTest => write!(f, "ceph-pacific-test"),
-APTRepositoryHandle::CephOctopus => write!(f, "ceph-octopus"),
-APTRepositoryHandle::CephOctopusTest => write!(f, "ceph-octopus-test"),
 }
 }
 }
@@ -130,20 +114,6 @@ impl APTRepositoryHandle {
 "This repository contains the Ceph Quincy packages before they are moved to the \
 main repository."
 }
-APTRepositoryHandle::CephPacific => {
-"This repository holds the main Proxmox Ceph Pacific packages."
-}
-APTRepositoryHandle::CephPacificTest => {
-"This repository contains the Ceph Pacific packages before they are moved to the \
-main repository."
-}
-APTRepositoryHandle::CephOctopus => {
-"This repository holds the main Proxmox Ceph Octopus packages."
-}
-APTRepositoryHandle::CephOctopusTest => {
-"This repository contains the Ceph Octopus packages before they are moved to the \
-main repository."
-}
 }
 .to_string()
 }
@@ -156,10 +126,6 @@ impl APTRepositoryHandle {
 APTRepositoryHandle::Test => "Test",
 APTRepositoryHandle::CephQuincy => "Ceph Quincy",
 APTRepositoryHandle::CephQuincyTest => "Ceph Quincy Test",
-APTRepositoryHandle::CephPacific => "Ceph Pacific",
-APTRepositoryHandle::CephPacificTest => "Ceph Pacific Test",
-APTRepositoryHandle::CephOctopus => "Ceph Octopus",
-APTRepositoryHandle::CephOctopusTest => "Ceph Octopus Test",
 }
 .to_string()
 }
@@ -174,10 +140,6 @@ impl APTRepositoryHandle {
 APTRepositoryHandle::Test => "/etc/apt/sources.list".to_string(),
 APTRepositoryHandle::CephQuincy => "/etc/apt/sources.list.d/ceph.list".to_string(),
 APTRepositoryHandle::CephQuincyTest => "/etc/apt/sources.list.d/ceph.list".to_string(),
-  

[pve-devel] [PATCH v2 proxmox 4/5] apt: tests: code cleanup to avoid useless vector

2023-06-02 Thread Fiona Ebner
Signed-off-by: Fiona Ebner 
---

No changes in v2.

 proxmox-apt/tests/repositories.rs | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/proxmox-apt/tests/repositories.rs b/proxmox-apt/tests/repositories.rs
index 6319a62..2fb7ac8 100644
--- a/proxmox-apt/tests/repositories.rs
+++ b/proxmox-apt/tests/repositories.rs
@@ -389,13 +389,11 @@ fn test_standard_repositories() -> Result<(), Error> {
 let mut file = APTRepositoryFile::new(&pve_alt_list)?.unwrap();
 file.parse()?;
 
-let file_vec = vec![file];
-
 expected[0].status = Some(true);
 expected[1].status = Some(true);
 expected[2].status = Some(false);
 
-let std_repos = standard_repositories(&file_vec, "pve", DebianCodename::Bullseye);
+let std_repos = standard_repositories(&[file], "pve", DebianCodename::Bullseye);
 
 assert_eq!(std_repos, expected);
 
-- 
2.39.2






[pve-devel] [PATCH v2 proxmox 5/5] apt: tests: add tests for Ceph Quincy repository detection on Bookworm

2023-06-02 Thread Fiona Ebner
Signed-off-by: Fiona Ebner 
---

No changes in v2.

 proxmox-apt/tests/repositories.rs | 30 +++
 .../ceph-quincy-bookworm.list |  6 
 .../ceph-quincy-nosub-bookworm.list   |  2 ++
 .../sources.list.d/ceph-quincy-bookworm.list  |  4 +++
 .../ceph-quincy-nosub-bookworm.list   |  2 ++
 5 files changed, 44 insertions(+)
 create mode 100644 proxmox-apt/tests/sources.list.d.expected/ceph-quincy-bookworm.list
 create mode 100644 proxmox-apt/tests/sources.list.d.expected/ceph-quincy-nosub-bookworm.list
 create mode 100644 proxmox-apt/tests/sources.list.d/ceph-quincy-bookworm.list
 create mode 100644 proxmox-apt/tests/sources.list.d/ceph-quincy-nosub-bookworm.list

diff --git a/proxmox-apt/tests/repositories.rs b/proxmox-apt/tests/repositories.rs
index 2fb7ac8..be6aaff 100644
--- a/proxmox-apt/tests/repositories.rs
+++ b/proxmox-apt/tests/repositories.rs
@@ -397,6 +397,36 @@ fn test_standard_repositories() -> Result<(), Error> {
 
 assert_eq!(std_repos, expected);
 
+let pve_alt_list = read_dir.join("ceph-quincy-bookworm.list");
+let mut file = APTRepositoryFile::new(&pve_alt_list)?.unwrap();
+file.parse()?;
+
+expected[0].status = None;
+expected[1].status = None;
+expected[2].status = None;
+expected[3].status = Some(true);
+expected[4].status = Some(true);
+expected[5].status = Some(true);
+
+let std_repos = standard_repositories(&[file], "pve", DebianCodename::Bookworm);
+
+assert_eq!(std_repos, expected);
+
+let pve_alt_list = read_dir.join("ceph-quincy-nosub-bookworm.list");
+let mut file = APTRepositoryFile::new(&pve_alt_list)?.unwrap();
+file.parse()?;
+
+expected[0].status = None;
+expected[1].status = None;
+expected[2].status = None;
+expected[3].status = None;
+expected[4].status = Some(true);
+expected[5].status = None;
+
+let std_repos = standard_repositories(&[file], "pve", DebianCodename::Bookworm);
+
+assert_eq!(std_repos, expected);
+
 Ok(())
 }
 
diff --git a/proxmox-apt/tests/sources.list.d.expected/ceph-quincy-bookworm.list b/proxmox-apt/tests/sources.list.d.expected/ceph-quincy-bookworm.list
new file mode 100644
index 000..9095e27
--- /dev/null
+++ b/proxmox-apt/tests/sources.list.d.expected/ceph-quincy-bookworm.list
@@ -0,0 +1,6 @@
+deb https://enterprise.proxmox.com/debian/ceph-quincy bookworm enterprise
+
+deb http://download.proxmox.com/debian/ceph-quincy bookworm main
+
+deb http://download.proxmox.com/debian/ceph-quincy bookworm test
+
diff --git a/proxmox-apt/tests/sources.list.d.expected/ceph-quincy-nosub-bookworm.list b/proxmox-apt/tests/sources.list.d.expected/ceph-quincy-nosub-bookworm.list
new file mode 100644
index 000..b60fa98
--- /dev/null
+++ b/proxmox-apt/tests/sources.list.d.expected/ceph-quincy-nosub-bookworm.list
@@ -0,0 +1,2 @@
+deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription
+
diff --git a/proxmox-apt/tests/sources.list.d/ceph-quincy-bookworm.list b/proxmox-apt/tests/sources.list.d/ceph-quincy-bookworm.list
new file mode 100644
index 000..fde8eba
--- /dev/null
+++ b/proxmox-apt/tests/sources.list.d/ceph-quincy-bookworm.list
@@ -0,0 +1,4 @@
+deb https://enterprise.proxmox.com/debian/ceph-quincy bookworm enterprise
+deb http://download.proxmox.com/debian/ceph-quincy bookworm main
+deb http://download.proxmox.com/debian/ceph-quincy bookworm test
+
diff --git a/proxmox-apt/tests/sources.list.d/ceph-quincy-nosub-bookworm.list b/proxmox-apt/tests/sources.list.d/ceph-quincy-nosub-bookworm.list
new file mode 100644
index 000..b60fa98
--- /dev/null
+++ b/proxmox-apt/tests/sources.list.d/ceph-quincy-nosub-bookworm.list
@@ -0,0 +1,2 @@
+deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription
+
-- 
2.39.2






Re: [pve-devel] [PATCH-SERIES v3 qemu-server/manager/common] add and set x86-64-v2 as default model for new vms and detect best cpumodel

2023-06-02 Thread DERUMIER, Alexandre
> 
> Note that migration between CPUs of different vendors is not a
> supported
> use case (it will always depend on specific models, kernel versions,
> etc.), so we can only justify not adding it to the new default model
> if
> it doesn't make life worse for everybody else.
> 
> And I'd be a bit careful to jump to general conclusions just from one
> forum post.
> 
> It seems like you were the one adding the flag ;)
> 
> 
yes, I know ^_^   (it's the default on RHEV too, but they don't
support AMD-Intel migration either).



> and the LWN-archived mail linked in the commit message says
> 
> > Ticket locks have an inherent problem in a virtualized case,
> > because
> > the vCPUs are scheduled rather than running concurrently (ignoring
> > gang scheduled vCPUs).  This can result in catastrophic performance
> > collapses when the vCPU scheduler doesn't schedule the correct
> > "next"
> > vCPU, and ends up scheduling a vCPU which burns its entire
> > timeslice
> > spinning.  (Note that this is not the same problem as lock-holder
> > preemption, which this series also addresses; that's also a
> > problem,
> > but not catastrophic).
> 
> "catastrophic performance collapses" doesn't sound very promising :/
> 

I have found another thread here:
https://lore.kernel.org/all/0484ea3f-4ba7-4b93-e976-098c57171...@redhat.com/
where Paolo has done benchmarks showing only a 3% difference,
but yes, still slower anyway.

at minimum, it could be interesting to expose the flag in the GUI, for
users who really need Intel-AMD migration.



> But if we find that
> kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+sep,+lahf_lm,+popcnt,+sse4.1,+sse4.2,+ssse3
> causes issues (even if not cross-vendor live-migrating) with the
> +kvm_pv_unhalt flag, but not without, it would be a much more
> convincing
> reason against adding the flag for the new default.
> 
ok ! (I will send a new patch version today)





Re: [pve-devel] [PATCH v3 qemu-server 1/7] migration: only migrate disks used by the guest

2023-06-02 Thread Fiona Ebner
On 01.06.23 at 15:53, Aaron Lauterer wrote:
> When scanning all configured storages for disk images belonging to the
> VM, the migration could easily fail if a storage is not available, but
> enabled. That storage might not even be used by the VM at all.
> 
> By not doing that and only looking at the disk images referenced in the
> VM config, we can avoid that.
> Extra handling is needed for disk images currently in the 'pending'
> section of the VM config. These disk images used to be detected by
> scanning all storages before.
> It is also necessary to fetch some information (size, format) about the
> disk images explicitly that used to be provided by the initial scan of
> all storages.
> 
> The big change regarding behavior is that disk images not referenced in
> the VM config file will be ignored.  They are already orphans that used
> to be migrated as well, but are now left where they are.  The tests have
> been adapted to that changed behavior.
> 
> Signed-off-by: Aaron Lauterer 
> ---
> changes since v2:
> - move handling of pending changes into QemuServer::foreach_volid
> Seems to not have any bad side effects

This should really be its own patch and talk about what the side effects
actually are and why they are okay in the commit message.

Maybe even keep old behavior for replication at first and then add a
second patch which explicitly enables replication for pending volumes by
removing the filtering initially added. Easiest probably is an option
$exclude_pending for foreach_volid() used only by replication and then
remove it in the following patch.

Like that, each patch has its own clear purpose and effect.

> - style fixes
> - use 'volume_size_info()' to get format and size of the image
> 
>  PVE/QemuMigrate.pm| 88 ---
>  PVE/QemuServer.pm |  9 ++-
>  test/MigrationTest/QemuMigrateMock.pm |  9 +++
>  test/run_qemu_migrate_tests.pl| 12 ++--
>  4 files changed, 50 insertions(+), 68 deletions(-)
> 
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 09cc1d8..163a721 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -149,6 +149,22 @@ sub lock_vm {
>  return PVE::QemuConfig->lock_config($vmid, $code, @param);
>  }
>  
> +sub target_storage_check_available {
> +my ($self, $storecfg, $targetsid, $volid) = @_;
> +
> +if (!$self->{opts}->{remote}) {
> + # check if storage is available on target node
> + my $target_scfg = PVE::Storage::storage_check_enabled(
> + $storecfg,
> + $targetsid,
> + $self->{node},
> + );
> + my ($vtype) = PVE::Storage::parse_volname($storecfg, $volid);
> + die "$volid: content type '$vtype' is not available on storage '$targetsid'\n"
> + if !$target_scfg->{content}->{$vtype};
> +}
> +}

This should also be its own patch, please.

> +
>  sub prepare {
>  my ($self, $vmid) = @_;
>  
> @@ -396,17 +358,21 @@ sub scan_local_volumes {
> 	$targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $sid);
>   }
>  
> - # check target storage on target node if intra-cluster migration
> - if (!$self->{opts}->{remote}) {
> - PVE::Storage::storage_check_enabled($storecfg, $targetsid, $self->{node});
> -
> - return if $scfg->{shared};
> - }
> + return if $scfg->{shared} && !$self->{opts}->{remote};
> + $self->target_storage_check_available($storecfg, $targetsid, $volid);

Shouldn't these two lines be switched?

Before, the enabled check would be done if !$self->{opts}->{remote} and
then early return.

But now, if $scfg->{shared} && !$self->{opts}->{remote};, you early
return without doing the check at all.

With switched lines, you call the helper, which does the check if
!$self->{opts}->{remote} and then you do the early return like before.

>  
> 	$local_volumes->{$volid}->{ref} = $attr->{referenced_in_config} ? 'config' : 'snapshot';
>   $local_volumes->{$volid}->{ref} = 'storage' if $attr->{is_unused};

Not yours, but this is actually wrong, because if the same volume is
already referenced in a snapshot, the reference will be overwritten to
be 'storage' here :P

> + $local_volumes->{$volid}->{ref} = 'storage' if $attr->{is_pending};

But yours is also not correct ;) It should only be done for volumes that
are only referenced in pending, but not in the current config. You don't
touch the...

> @@ -4888,6 +4888,8 @@ sub foreach_volid {

$volhash->{$volid}->{referenced_in_config} = 1 if !defined($snapname);

...line in PVE/QemuServer.pm's foreach_volid(), so is_pending being 1
means referenced_in_config is also 1 and there's no way to distinguish
via attributes if a volume is only in pending or both in pending and the
current config.

Not sure if we should even bother to distinguish between more than
'config' and 'snapshot' anymore. Everything that's just in a snapshot
gets 'snapshot' and 

Re: [pve-devel] [PATCH v3 qemu-server 1/7] migration: only migrate disks used by the guest

2023-06-02 Thread Fiona Ebner
On 02.06.23 at 11:45, Fiona Ebner wrote:
> On 01.06.23 at 15:53, Aaron Lauterer wrote:
>> When scanning all configured storages for disk images belonging to the
>> VM, the migration could easily fail if a storage is not available, but
>> enabled. That storage might not even be used by the VM at all.
>>
>> By not doing that and only looking at the disk images referenced in the
>> VM config, we can avoid that.
>> Extra handling is needed for disk images currently in the 'pending'
>> section of the VM config. These disk images used to be detected by
>> scanning all storages before.
>> It is also necessary to fetch some information (size, format) about the
>> disk images explicitly that used to be provided by the initial scan of
>> all storages.
>>
>> The big change regarding behavior is that disk images not referenced in
>> the VM config file will be ignored.  They are already orphans that used
>> to be migrated as well, but are now left where they are.  The tests have
>> been adapted to that changed behavior.
>>
>> Signed-off-by: Aaron Lauterer 
>> ---
>> changes since v2:
>> - move handling of pending changes into QemuServer::foreach_volid
>> Seems to not have any bad side effects
> 
> This should really be its own patch and talk about what the side effects
> actually are and why they are okay in the commit message.
> 
> Maybe even keep old behavior for replication at first and then add a
> second patch which explicitly enables replication for pending volumes by
> removing the filtering initially added. Easiest probably is an option
> $exclude_pending for foreach_volid() used only by replication and then
> remove it in the following patch.
> 
> Like that, each patch has its own clear purpose and effect.
> 
>> - style fixes
>> - use 'volume_size_info()' to get format and size of the image
>>
>>  PVE/QemuMigrate.pm| 88 ---
>>  PVE/QemuServer.pm |  9 ++-
>>  test/MigrationTest/QemuMigrateMock.pm |  9 +++
>>  test/run_qemu_migrate_tests.pl| 12 ++--
>>  4 files changed, 50 insertions(+), 68 deletions(-)
>>
>> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
>> index 09cc1d8..163a721 100644
>> --- a/PVE/QemuMigrate.pm
>> +++ b/PVE/QemuMigrate.pm
>> @@ -149,6 +149,22 @@ sub lock_vm {
>>  return PVE::QemuConfig->lock_config($vmid, $code, @param);
>>  }
>>  
>> +sub target_storage_check_available {
>> +my ($self, $storecfg, $targetsid, $volid) = @_;
>> +
>> +if (!$self->{opts}->{remote}) {
>> +# check if storage is available on target node
>> +my $target_scfg = PVE::Storage::storage_check_enabled(
>> +$storecfg,
>> +$targetsid,
>> +$self->{node},
>> +);
>> +my ($vtype) = PVE::Storage::parse_volname($storecfg, $volid);
>> +die "$volid: content type '$vtype' is not available on storage '$targetsid'\n"
>> +if !$target_scfg->{content}->{$vtype};
>> +}
>> +}
> 
> This should also be its own patch, please.
> 
>> +
>>  sub prepare {
>>  my ($self, $vmid) = @_;
>>  
>> @@ -396,17 +358,21 @@ sub scan_local_volumes {
>> 	$targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $sid);
>>  }
>>  
>> -# check target storage on target node if intra-cluster migration
>> -if (!$self->{opts}->{remote}) {
>> -PVE::Storage::storage_check_enabled($storecfg, $targetsid, $self->{node});
>> -
>> -return if $scfg->{shared};
>> -}
>> +return if $scfg->{shared} && !$self->{opts}->{remote};
>> +$self->target_storage_check_available($storecfg, $targetsid, $volid);
> 
> Shouldn't these two lines be switched?
> 
> Before, the enabled check would be done if !$self->{opts}->{remote} and
> then early return.
> 
> But now, if $scfg->{shared} && !$self->{opts}->{remote};, you early
> return without doing the check at all.
> 
> With switched lines, you call the helper, which does the check if
> !$self->{opts}->{remote} and then you do the early return like before.
> 
>>  
>> 	$local_volumes->{$volid}->{ref} = $attr->{referenced_in_config} ? 'config' : 'snapshot';
>>  $local_volumes->{$volid}->{ref} = 'storage' if $attr->{is_unused};
> 
> Not yours, but this is actually wrong, because if the same volume is
> already referenced in a snapshot, the reference will be overwritten to
> be 'storage' here :P
> 
>> +$local_volumes->{$volid}->{ref} = 'storage' if $attr->{is_pending};
> 
> But yours is also not correct ;) It should only be done for volumes that
> are only referenced in pending, but not in the current config. You don't
> touch the...
> 
>> @@ -4888,6 +4888,8 @@ sub foreach_volid {
> 
> $volhash->{$volid}->{referenced_in_config} = 1 if !defined($snapname);
> 
> ...line in PVE/QemuServer.pm's foreach_volid(), so is_pending being 1
> means referenced_in_config is also 1 and there's no way to distinguish
> via attributes if a volume is only in pending or both in pending and the
> current config.

Re: [pve-devel] [PATCH widget-toolkit 2/2] utils: format_size: show negative size as NA

2023-06-02 Thread Aaron Lauterer




On 6/1/23 16:22, Thomas Lamprecht wrote:

On 19/04/2023 at 12:34, Aaron Lauterer wrote:

Signed-off-by: Aaron Lauterer 
---

AFAIK we do not have negative sizes anywhere, and if, it is an
indication that something is wrong.


above belongs in the commit message; additionally, some background on why this
is being done now (i.e., did you run into this, or what made you make this change?)



good point. It happens with the first patch of the series, when we return '-1' 
to indicate a broken RBD image.





  src/Utils.js | 3 +++
  1 file changed, 3 insertions(+)

diff --git a/src/Utils.js b/src/Utils.js
index ef72630..8cdbe86 100644
--- a/src/Utils.js
+++ b/src/Utils.js
@@ -688,6 +688,9 @@ utilities: {
  },
  
  format_size: function(size, useSI) {

+   if (size < 0) {
+   return gettext("N/A");


catching this seems OK, but I'd rather just return the value then, as "N/A" (Not
Applicable) doesn't really make sense here and just hides a potential underlying
problem.


Since 'format_size' is used all over the place, what about only checking for it
in the content view, where we really shouldn't expect a negative size?

I think showing N/A instead of '-1 B' is more obvious. Something like this:

diff --git a/www/manager6/storage/ContentView.js 
b/www/manager6/storage/ContentView.js

index 2761b48e..c7b3d5ef 100644
--- a/www/manager6/storage/ContentView.js
+++ b/www/manager6/storage/ContentView.js
@@ -182,7 +182,12 @@ Ext.define('PVE.storage.ContentView', {
'size': {
header: gettext('Size'),
width: 100,
-   renderer: Proxmox.Utils.format_size,
+   renderer: function(size) {
+   if (Number(size) === -1) {
+   return gettext("N/A");
+   }
+   return Proxmox.Utils.format_size(size);
+   },
dataIndex: 'size',
},
};
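
As a self-contained illustration of the guard, a minimal sketch of a size renderer (hypothetical helper, not the actual widget-toolkit `format_size`, which also supports SI units; the assumption here is that `-1` is the only negative sentinel in play, marking a broken RBD image):

```javascript
const UNITS = ['B', 'KiB', 'MiB', 'GiB', 'TiB'];

function formatSize(size) {
    // Broken volumes are reported with a negative sentinel size;
    // show "N/A" instead of a nonsensical "-1 B".
    if (size < 0) {
        return 'N/A';
    }
    let i = 0;
    while (size >= 1024 && i < UNITS.length - 1) {
        size /= 1024;
        i++;
    }
    // whole bytes are printed as-is, scaled values with two decimals
    return `${i === 0 ? size : size.toFixed(2)} ${UNITS[i]}`;
}

console.log(formatSize(-1));         // N/A
console.log(formatSize(512));        // 512 B
console.log(formatSize(4294967296)); // 4.00 GiB
```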


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH-SERIES v4 qemu-server/manager] add and set x86-64-v2-AES as default model for new vms

2023-06-02 Thread Alexandre Derumier


Hi,

We have used kvm64 as the default CPU model since the beginning of Proxmox
(basically, it has the feature flags of a Pentium 4 CPU).

New distros like RHEL 9 are compiled to use more modern CPU flags.
(Windows has already used newer flags for years, and we already add some extra
CPU flags for it.)

"
In 2020, AMD, Intel, Red Hat, and SUSE worked together to define
three microarchitecture levels on top of the historical x86-64
baseline:

  * x86-64-v1: original x86_64 baseline instruction set
  * x86-64-v2: vector instructions up to Streaming SIMD
   Extensions 4.2 (SSE4.2)  and Supplemental
   Streaming SIMD Extensions 3 (SSSE3), the
   POPCNT instruction, and CMPXCHG16B
  * x86-64-v3: vector instructions up to AVX2, MOVBE,
   and additional bit-manipulation instructions.
  * x86-64-v4: vector instructions from some of the
   AVX-512 variants.

"


This patch series adds new models inspired by a patch found on the qemu
mailing list, but never applied:

https://lore.kernel.org/all/20210526144038.278899-1-berra...@redhat.com/T/


In addition to these models, I have enabled aes too.
I think it's really important, because a lot of users keep the default values
and get bad performance with SSL and other crypto workloads.


This was discussed on the qemu mailing list:

"
Crypto accelerator caveats
==

Similarly I'm not a huge fan of leaving out the "aes"
instruction for accelerated crypto, as missing "aes" is
also one of the key factors in making qemu64 a bad choice.

If we include 'aes' in x86-64-v2, then we loose support
for Nehalem hosts.

If we include 'aes' in x86-64-v3 then we further loose
support for Dhyana hosts (an EPYC derived CPU).
"


Nehalem is a 2008 CPU, so I think it's OK, we are in 2023 ;)
(and users can disable the aes flag in the GUI too)

Dhyana is a Chinese fork of EPYC, and we don't support that vendor anyway.


I still think that enabling aes by default is the easiest option, but for
x86-64-v2 we just provide one model without aes (for Nehalem) and one
model with aes.

This way, users don't need to fiddle with flags manually.

This patch series adds the new models and sets the x86-64-v2-AES model as the
default in the pve-manager wizard only
(to not break current VMs, where kvm64 remains the default when cputype is not
defined in the configuration).

Here are the new builtin models:


x86-64-v1 : not implemented, as it's basically qemu64|kvm64 -vme,-cx16 for
compatibility with Opteron_G1 from 2004; users can simply use
qemu64|kvm64 instead, since anything higher doesn't work on
Opteron_G1 anyway

x86-64-v2 : Derived from qemu, +popcnt;+pni;+sse4.1;+sse4.2;+ssse3

min intel: Nehalem
min amd : Opteron_G3

x86-64-v2-AES : Derived from qemu, +aes;+popcnt;+pni;+sse4.1;+sse4.2;+ssse3

min intel: Westmere
min amd : Opteron_G3

x86-64-v3 : Derived from qemu64 
+aes;+popcnt;+pni;+sse4.1;+sse4.2;+ssse3;+avx;+avx2;+bmi1;+bmi2;+f16c;+fma;+abm;+movbe

min intel: Haswell
min amd : EPYC_v1

x86-64-v4 : Derived from qemu64 
+aes;+popcnt;+pni;+sse4.1;+sse4.2;+ssse3;+avx;+avx2;+bmi1;+bmi2;+f16c;+fma;+abm;+movbe;+avx512f;+avx512bw;+avx512cd;+avx512dq;+avx512vl

min intel: Skylake
min amd : EPYC_v4
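
For a quick overview, the proposed model table can be sketched as plain data (JavaScript used purely for illustration; the real implementation is the Perl $builtin_models hash in PVE/QemuServer/CPUConfig.pm, and the `-cpu` expansion below is a simplification of what qemu-server actually emits, which also handles enforce, vendor, etc.):

```javascript
// Builtin model table as proposed in this series; flag strings are
// copied from the cover letter. 'base' is the reported-model QEMU
// presents to the guest.
const builtinModels = {
    'x86-64-v2': {
        base: 'qemu64',
        flags: '+popcnt;+pni;+sse4.1;+sse4.2;+ssse3',
    },
    'x86-64-v2-AES': {
        base: 'qemu64',
        flags: '+aes;+popcnt;+pni;+sse4.1;+sse4.2;+ssse3',
    },
    'x86-64-v3': {
        base: 'qemu64',
        flags: '+aes;+popcnt;+pni;+sse4.1;+sse4.2;+ssse3'
            + ';+avx;+avx2;+bmi1;+bmi2;+f16c;+fma;+abm;+movbe',
    },
    'x86-64-v4': {
        base: 'qemu64',
        flags: '+aes;+popcnt;+pni;+sse4.1;+sse4.2;+ssse3'
            + ';+avx;+avx2;+bmi1;+bmi2;+f16c;+fma;+abm;+movbe'
            + ';+avx512f;+avx512bw;+avx512cd;+avx512dq;+avx512vl',
    },
};

// Rough expansion into a "-cpu"-style argument string.
function cpuArg(model) {
    const m = builtinModels[model];
    return [m.base, ...m.flags.split(';')].join(',');
}

console.log(cpuArg('x86-64-v2-AES'));
// qemu64,+aes,+popcnt,+pni,+sse4.1,+sse4.2,+ssse3
```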





changelog v4:
 - remove patches for best cpu detection (maybe do a standalone tool later)
 - use qemu64 as base model and add extra flags
 - add x64-64-v2-AES (default)
 - remove x64-64-v1
 - add x64-64-v4
 - address Fiona's comments

qemu-server :

Alexandre Derumier (1):
  cpuconfig: add new x86-64-vX models

 PVE/QemuServer/CPUConfig.pm | 48 +
 1 file changed, 43 insertions(+), 5 deletions(-)

pve-manager:

Alexandre Derumier (1):
  qemu: processor : set x86-64-v2-AES as default cputype for create
wizard

 www/manager6/qemu/OSDefaults.js|  1 +
 www/manager6/qemu/ProcessorEdit.js | 13 +
 2 files changed, 14 insertions(+)


-- 
2.30.2





[pve-devel] [PATCH v2 pve-manager 1/1] qemu: processor : set x86-64-v2-AES as default cputype for create wizard

2023-06-02 Thread Alexandre Derumier
---
 www/manager6/qemu/OSDefaults.js|  1 +
 www/manager6/qemu/ProcessorEdit.js | 13 +
 2 files changed, 14 insertions(+)

diff --git a/www/manager6/qemu/OSDefaults.js b/www/manager6/qemu/OSDefaults.js
index 5e588a58..58bc76ff 100644
--- a/www/manager6/qemu/OSDefaults.js
+++ b/www/manager6/qemu/OSDefaults.js
@@ -43,6 +43,7 @@ Ext.define('PVE.qemu.OSDefaults', {
virtio: 1,
},
scsihw: 'virtio-scsi-single',
+   cputype: 'x86-64-v2-AES',
};
 
// virtio-net is in kernel since 2.6.25
diff --git a/www/manager6/qemu/ProcessorEdit.js 
b/www/manager6/qemu/ProcessorEdit.js
index b845ff66..727d1679 100644
--- a/www/manager6/qemu/ProcessorEdit.js
+++ b/www/manager6/qemu/ProcessorEdit.js
@@ -29,6 +29,19 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
 
viewModel.set('userIsRoot', Proxmox.UserName === 'root@pam');
},
+   control: {
+   '#': {
+   afterrender: 'setCputype',
+   },
+   },
+   setCputype: function() {
+   let me = this;
+   let view = me.getView();
+   let cputype = view.down('CPUModelSelector[name=cputype]');
+   if (view.insideWizard) {
+   cputype.setValue(PVE.qemu.OSDefaults.generic.cputype);
+   }
+},
 },
 
 onGetValues: function(values) {
-- 
2.30.2





[pve-devel] [PATCH v4 qemu-server 1/1] cpuconfig: add new x86-64-vX models

2023-06-02 Thread Alexandre Derumier
https://lists.gnu.org/archive/html/qemu-devel/2021-06/msg01592.html
"
In 2020, AMD, Intel, Red Hat, and SUSE worked together to define
three microarchitecture levels on top of the historical x86-64
baseline:

  * x86-64:    original x86_64 baseline instruction set
  * x86-64-v2: vector instructions up to Streaming SIMD
   Extensions 4.2 (SSE4.2)  and Supplemental
   Streaming SIMD Extensions 3 (SSSE3), the
   POPCNT instruction, and CMPXCHG16B
  * x86-64-v3: vector instructions up to AVX2, MOVBE,
   and additional bit-manipulation instructions.
  * x86-64-v4: vector instructions from some of the
   AVX-512 variants.
"

This patch adds new builtin models derived from the qemu64 model,
to be compatible between Intel and AMD.

x86-64-v1 : I'm skipping it, as it's basically qemu64|kvm64 -vme,-cx16 for
compatibility with Opteron_G1 from 2004; users can simply use
qemu64|kvm64 instead, since anything higher doesn't work on
Opteron_G1 anyway

x86-64-v2 : Derived from qemu, +popcnt;+pni;+sse4.1;+sse4.2;+ssse3

min intel: Nehalem
min amd : Opteron_G3

x86-64-v2-AES : Derived from qemu, +aes;+popcnt;+pni;+sse4.1;+sse4.2;+ssse3

min intel: Westmere
min amd : Opteron_G3

x86-64-v3 : Derived from qemu64 
+aes;+popcnt;+pni;+sse4.1;+sse4.2;+ssse3;+avx;+avx2;+bmi1;+bmi2;+f16c;+fma;+abm;+movbe

min intel: Haswell
min amd : EPYC_v1

x86-64-v4 : Derived from qemu64 
+aes;+popcnt;+pni;+sse4.1;+sse4.2;+ssse3;+avx;+avx2;+bmi1;+bmi2;+f16c;+fma;+abm;+movbe;+avx512f;+avx512bw;+avx512cd;+avx512dq;+avx512vl

min intel: Skylake
min amd : EPYC_v4

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer/CPUConfig.pm | 48 +
 1 file changed, 43 insertions(+), 5 deletions(-)

diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index fb0861b..ac532fb 100644
--- a/PVE/QemuServer/CPUConfig.pm
+++ b/PVE/QemuServer/CPUConfig.pm
@@ -31,6 +31,25 @@ sub load_custom_model_conf {
 return cfs_read_file($default_filename);
 }
 
+my $builtin_models = {
+'x86-64-v2' => {
+   'reported-model' => 'qemu64',
+   flags => "+popcnt;+pni;+sse4.1;+sse4.2;+ssse3",
+},
+'x86-64-v2-AES' => {
+   'reported-model' => 'qemu64',
+   flags => "+aes;+popcnt;+pni;+sse4.1;+sse4.2;+ssse3",
+},
+'x86-64-v3' => {
+   'reported-model' => 'qemu64',
+   flags => 
"+aes;+popcnt;+pni;+sse4.1;+sse4.2;+ssse3;+avx;+avx2;+bmi1;+bmi2;+f16c;+fma;+abm;+movbe",
+},
+'x86-64-v4' => {
+   'reported-model' => 'qemu64',
+   flags => 
"+aes;+popcnt;+pni;+sse4.1;+sse4.2;+ssse3;+avx;+avx2;+bmi1;+bmi2;+f16c;+fma;+abm;+movbe;+avx512f;+avx512bw;+avx512cd;+avx512dq;+avx512vl",
+},
+};
+
 my $depreacated_cpu_map = {
 # there never was such a client CPU, so map it to the server one for 
backward compat
 'Icelake-Client' => 'Icelake-Server',
@@ -212,7 +231,7 @@ sub validate_vm_cpu_conf {
get_custom_model($cputype);
 } else {
die "Built-in cputype '$cputype' is not defined (missing 'custom-' 
prefix?)\n"
-   if !defined($cpu_vendor_list->{$cputype});
+   if !defined($cpu_vendor_list->{$cputype}) && 
!defined($builtin_models->{$cputype});
 }
 
 # in a VM-specific config, certain properties are limited/forbidden
@@ -302,6 +321,17 @@ sub get_cpu_models {
};
 }
 
+for my $model (keys %{$builtin_models}) {
+   my $reported_model = $builtin_models->{$model}->{'reported-model'};
+   $reported_model //= $cpu_fmt->{'reported-model'}->{default};
+   my $vendor = $cpu_vendor_list->{$reported_model};
+   push @$models, {
+   name => $model,
+   custom => 0,
+   vendor => $vendor,
+   };
+}
+
 return $models if !$include_custom;
 
 my $conf = load_custom_model_conf();
@@ -359,7 +389,9 @@ sub print_cpu_device {
or die "Cannot parse cpu description: $cputype\n";
$cpu = $cpuconf->{cputype};
 
-   if (is_custom_model($cpu)) {
+   if (my $model = $builtin_models->{$cpu}) {
+   $cpu = $model->{'reported-model'} // 
$cpu_fmt->{'reported-model'}->{default};
+   } elsif (is_custom_model($cputype)) {
my $custom_cpu = get_custom_model($cpu);
 
$cpu = $custom_cpu->{'reported-model'} // 
$cpu_fmt->{'reported-model'}->{default};
@@ -468,14 +500,17 @@ sub get_cpu_options {
 
 my $cpu = {};
 my $custom_cpu;
+my $builtin_cpu;
 my $hv_vendor_id;
 if (my $cpu_prop_str = $conf->{cpu}) {
$cpu = PVE::JSONSchema::parse_property_string('pve-vm-cpu-conf', 
$cpu_prop_str)
or die "Cannot parse cpu description: $cpu_prop_str\n";
 
$cputype = $cpu->{cputype};
-
-   if (is_custom_model($cputype)) {
+   if (my $model = $builtin_models->{$cputype}) {
+   $cputype = $model->{'reported-model'} // 
$cpu_fmt->{'reported-model'}->{default};
+   $builtin_cpu->{flags} = $model->{'flags'};
+   } elsif (is_custom_model($cputype)) {
$

Re: [pve-devel] [PATCH-SERIES v3 qemu-server/manager/common] add and set x86-64-v2 as default model for new vms and detect best cpumodel

2023-06-02 Thread Fiona Ebner
Am 02.06.23 um 11:13 schrieb DERUMIER, Alexandre:
>>
>> "catastrophic performance collapses" doesn't sound very promising :/
>>
> 
> I have found another thread here:
> https://lore.kernel.org/all/0484ea3f-4ba7-4b93-e976-098c57171...@redhat.com/
> where paolo have done benchmark with only 3% difference.
> but yes, still slower anyway.

So the "catastrophic performance collapses" might be a bit over the top
or the situation has improved since then for non-paravirtualized locks.

> at minimum, it could be interesting to expose the flag in the gui, for
> users really needed intel-amd migration.
> 

I'm not opposed to that. We could also add a short note to the docs that
it might be worth a try to disable the flag if you need cross-vendor
migration and the default model causes issues. IIRC, from some forum
threads, other people did have success with cross-vendor migration just
selecting kvm64 or Nehalem in the UI, i.e. even with the flag. Likely
depends on concrete host CPU models/kernels whether it's an issue or not.





Re: [pve-devel] [PATCH v3 qemu-server 4/7] tests: add migration alias check

2023-06-02 Thread Fiona Ebner
Am 01.06.23 um 15:53 schrieb Aaron Lauterer:
> @@ -351,9 +387,9 @@ my $source_vdisks = {
>   {
>   'ctime' => '1589277334',
>   'format' => 'raw',
> - 'size' => 108003328,
> - 'vmid' => '111',
> - 'volid' => 'local-zfs:vm-111-disk-0',
> + 'size' => 4294967296,
> + 'vmid' => '123',
> + 'volid' => 'local-zfs:vm-123-disk-0',

Why is this changed?

>   },
>   {
>   'format' => 'raw',
> @@ -364,6 +400,24 @@ my $source_vdisks = {
>   'volid' => 'local-zfs:vm-4567-disk-0',
>   },
>  ],
> +'alias-zfs' => [
> + {
> + 'ctime' => '1589277334',
> + 'format' => 'raw',
> + 'size' => 108003328,
> + 'vmid' => '111',
> + 'volid' => 'local-zfs:vm-111-disk-0',
> + },
> +],
> +'alias-zfs-2' => [
> + {
> + 'ctime' => '1589277334',
> + 'format' => 'raw',
> + 'size' => 108003328,
> + 'vmid' => '111',
> + 'volid' => 'local-zfs:vm-111-disk-0',
> + },
> +],

Those are added for 111 instead of 123 and the volids also are not
corresponding to the storage.

If you'd die in the mocked version of volume_size_info when not finding
the volid, you would have noticed ;)
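
The suggestion above can be sketched as follows (JavaScript toy version of the Perl test mock; fixture names and values are illustrative only):

```javascript
// Stricter mock: throw on an unknown volid instead of silently
// returning something plausible, so fixture mismatches (wrong vmid,
// volid not matching its storage) fail the test immediately.
const sourceVdisks = {
    'alias-zfs': [
        { volid: 'alias-zfs:vm-123-disk-0', size: 4294967296, format: 'raw' },
    ],
};

function volumeSizeInfoMock(volid) {
    for (const disks of Object.values(sourceVdisks)) {
        const disk = disks.find(d => d.volid === volid);
        if (disk) {
            return disk.size;
        }
    }
    // dying here is what surfaces inconsistent test data
    throw new Error(`unknown volid '${volid}' in test fixture`);
}

console.log(volumeSizeInfoMock('alias-zfs:vm-123-disk-0')); // 4294967296
```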





Re: [pve-devel] [PATCH pve-manager 2/4] add permissions management for "local" network zone

2023-06-02 Thread Fabian Grünbichler
On May 26, 2023 9:27 am, Alexandre Derumier wrote:
> add a default virtual zone called 'local' in the ressource tree,
> and handle permissions like a true sdn zone
> 
> Signed-off-by: Alexandre Derumier 
> ---
>  PVE/API2/Cluster.pm | 12 
>  PVE/API2/Network.pm |  5 +++--
>  www/manager6/sdn/ZoneContentView.js | 27 ++-
>  3 files changed, 41 insertions(+), 3 deletions(-)
> 
> diff --git a/PVE/API2/Cluster.pm b/PVE/API2/Cluster.pm
> index 2e942368..e8f5e06e 100644
> --- a/PVE/API2/Cluster.pm
> +++ b/PVE/API2/Cluster.pm
> @@ -474,6 +474,18 @@ __PACKAGE__->register_method({
>   }
>   }
>  
> + #add default "local" network zone
> + foreach my $node (@$nodelist) {
> + my $local_sdn = {
> + id => "sdn/$node/local",
> + sdn => 'local',
> + node => $node,
> + type => 'sdn',
> + status => 'ok',
> + };
> + push @$res, $local_sdn;
> + }
> +

should this also be guarded by the type check like below? is there
anything that ensures that a 'local' zone doesn't already exist as a
regular SDN-managed zone?

>   if ($have_sdn) {
>   if (!$param->{type} || $param->{type} eq 'sdn') {
>  
> diff --git a/PVE/API2/Network.pm b/PVE/API2/Network.pm
> index a43579fa..b3faba1a 100644
> --- a/PVE/API2/Network.pm
> +++ b/PVE/API2/Network.pm
> @@ -209,7 +209,7 @@ __PACKAGE__->register_method({
>   type => {
>   description => "Only list specific interface types.",
>   type => 'string',
> - enum => [ @$network_type_enum, 'any_bridge' ],
> + enum => [ @$network_type_enum, 'any_bridge', 'any_local_bridge' 
> ],
>   optional => 1,
>   },
>   },
> @@ -254,7 +254,8 @@ __PACKAGE__->register_method({
>  
>   for my $k (sort keys $ifaces->%*) {
>   my $type = $ifaces->{$k}->{type};
> - my $match = $tfilter eq $type || ($tfilter eq 'any_bridge' && 
> ($type eq 'bridge' || $type eq 'OVSBridge'));
> + my $match = $tfilter eq $type || ($tfilter =~ 
> /^any(_local)?_bridge$/ && ($type eq 'bridge' || $type eq 'OVSBridge'));

this line is getting a bit long, maybe it could be reformatted or
refactored?

> +

nit: this blank newline is introduced here and removed in the next patch ;)

>   delete $ifaces->{$k} if !($match && $can_access_vnet->($k));
>   }
>  
> diff --git a/www/manager6/sdn/ZoneContentView.js 
> b/www/manager6/sdn/ZoneContentView.js
> index 08fa9d81..1a994fc9 100644
> --- a/www/manager6/sdn/ZoneContentView.js
> +++ b/www/manager6/sdn/ZoneContentView.js
> @@ -26,6 +26,9 @@ Ext.define('PVE.sdn.ZoneContentView', {
>   }
>  
>   var baseurl = "/nodes/" + me.nodename + "/sdn/zones/" + me.zone + 
> "/content";
> + if (me.zone === 'local') {
> + baseurl = "/nodes/" + me.nodename + 
> "/network?type=any_local_bridge";
> + }
>   var store = Ext.create('Ext.data.Store', {
>   model: 'pve-sdnzone-content',
>   groupField: 'content',
> @@ -95,7 +98,29 @@ Ext.define('PVE.sdn.ZoneContentView', {
>  Ext.define('pve-sdnzone-content', {
>   extend: 'Ext.data.Model',
>   fields: [
> - 'vnet', 'status', 'statusmsg',
> + {
> + name: 'iface',
> + convert: function(value, record) {
> + //map local vmbr to vnet
> + if (record.data.iface) {
> + record.data.vnet = record.data.iface;
> + }
> + return value;
> + },
> + },
> + {
> + name: 'comments',
> + convert: function(value, record) {
> + //map local vmbr comments to vnet alias
> + if (record.data.comments) {
> + record.data.alias = record.data.comments;
> + }
> + return value;
> + },
> + },
> + 'vnet',
> + 'status',
> + 'statusmsg',
>   {
>   name: 'text',
>   convert: function(value, record) {
> -- 
> 2.30.2
> 
> 





Re: [pve-devel] [PATCH pve-manager 3/4] api2: network: check permissions for local bridges

2023-06-02 Thread Fabian Grünbichler
On May 26, 2023 9:27 am, Alexandre Derumier wrote:
> Signed-off-by: Alexandre Derumier 
> ---
>  PVE/API2/Network.pm | 12 +---
>  1 file changed, 5 insertions(+), 7 deletions(-)
> 
> diff --git a/PVE/API2/Network.pm b/PVE/API2/Network.pm
> index b3faba1a..ba3b3e0e 100644
> --- a/PVE/API2/Network.pm
> +++ b/PVE/API2/Network.pm
> @@ -240,22 +240,20 @@ __PACKAGE__->register_method({
>  
>   if (my $tfilter = $param->{type}) {
>   my $vnets;
> - my $vnet_cfg;
> - my $can_access_vnet = sub { # only matters for the $have_sdn case, 
> checked implict
> - return 1 if $authuser eq 'root@pam' || !defined($vnets);
> - return 1 if 
> !defined(PVE::Network::SDN::Vnets::sdn_vnets_config($vnet_cfg, $_[0], 1)); # 
> not a vnet
> - $rpcenv->check_any($authuser, "/sdn/vnets/$_[0]", ['SDN.Audit', 
> 'SDN.Allocate'], 1)
> + #check access for local bridges
> + my $can_access_vnet = sub {
> + return 1 if $authuser eq 'root@pam';
> + return 1 if $rpcenv->check_any($authuser, "/sdn/zones/local", 
> ['SDN.Audit', 'SDN.Allocate'], 1);
> + return 1 if $rpcenv->check_any($authuser, "/sdn/vnets/$_[0]", 
> ['SDN.Audit', 'SDN.Allocate'], 1);

here the same question arises - is there anything guarding against name
collisions between SDN vnets and local bridges that pretend to be vnets?
;)

>   };
>  
>   if ($have_sdn && $param->{type} eq 'any_bridge') {
>   $vnets = PVE::Network::SDN::get_local_vnets(); # returns 
> already access-filtered
> - $vnet_cfg = PVE::Network::SDN::Vnets::config();
>   }
>  
>   for my $k (sort keys $ifaces->%*) {
>   my $type = $ifaces->{$k}->{type};
>   my $match = $tfilter eq $type || ($tfilter =~ 
> /^any(_local)?_bridge$/ && ($type eq 'bridge' || $type eq 'OVSBridge'));
> -
>   delete $ifaces->{$k} if !($match && $can_access_vnet->($k));
>   }
>  
> -- 
> 2.30.2
> 
> 





Re: [pve-devel] [PATCH pve-manager 4/4] api2: network: check vlan permissions for local bridges

2023-06-02 Thread Fabian Grünbichler
On May 26, 2023 9:27 am, Alexandre Derumier wrote:
> We need to display the bridge is the user have a permission
> on any vlan on the bridge.
> 
> to avoid to check permissions on 4096 vlans for each bridge
> (could be slow with a lot of bridges),
> we first list vlans where acls are defined.
> 
> (4000 check took 60ms on 10year xeon server, should be enough
> for any network where the total number of vlans is limited)

some sort of spec here for how the ACL looks would be nice to have
(while it's possible to reverse-engineer it from the code, it's always
nicer to have the expectation explicit as well).

> 
> Signed-off-by: Alexandre Derumier 
> ---
>  PVE/API2/Network.pm | 20 +++-
>  1 file changed, 19 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/API2/Network.pm b/PVE/API2/Network.pm
> index ba3b3e0e..39f17d14 100644
> --- a/PVE/API2/Network.pm
> +++ b/PVE/API2/Network.pm
> @@ -240,17 +240,35 @@ __PACKAGE__->register_method({
>  
>   if (my $tfilter = $param->{type}) {
>   my $vnets;
> + my $bridges_vlans_acl = {};
>   #check access for local bridges
>   my $can_access_vnet = sub {
> + my $bridge = $_[0];
>   return 1 if $authuser eq 'root@pam';
>   return 1 if $rpcenv->check_any($authuser, "/sdn/zones/local", 
> ['SDN.Audit', 'SDN.Allocate'], 1);
> - return 1 if $rpcenv->check_any($authuser, "/sdn/vnets/$_[0]", 
> ['SDN.Audit', 'SDN.Allocate'], 1);
> + return 1 if $rpcenv->check($authuser, "/sdn/vnets/$bridge", 
> ['SDN.Audit'], 1);

why does this drop the Allocate? we usually have the "more privileged"
privilege in addition to Audit (if there is one).

> + my $bridge_vlan = $bridges_vlans_acl->{$bridge};
> + for my $tag (sort keys %$bridge_vlan) {
> + return 1 if $rpcenv->check($authuser, 
> "/sdn/vnets/$bridge.$tag", ['SDN.Audit'], 1);

wouldn't $bridge/$tag be more natural? it would allow propagation from a
bridge to all tags using the usual propagate flag as well..

this could also live in pve-access-control as a special helper, then we
wouldn't need to do this dance here (it would be the first
iterate_acl_tree call site outside of pve-access-control!).

something like this in PVE::RPCEnvironment:

sub check_sdn_vlan(.., $bridge, $priv) {
  .. iterate over all vlans and check while iterating, returning early for 
first one with access
}

basically:

my $bridge = PVE::AccessControl::find_acl_tree_node($cfg->{acl_root}, 
"/sdn/vnets/$bridge");
if ($bridge) {
  for my $vlan (keys $bridge->{children}->%*) {
return 1 if $self->check_any(...);
  }
  return 1 if # check propagate on bridge itself
}
return 0;

> + }
>   };
>  
>   if ($have_sdn && $param->{type} eq 'any_bridge') {
>   $vnets = PVE::Network::SDN::get_local_vnets(); # returns 
> already access-filtered
>   }
>  
> + #find all vlans where we have specific acls
> + if ($tfilter =~ /^any(_local)?_bridge$/) {
> + my $cfg = $rpcenv->{user_cfg};
> + my $vnets_acl_root = 
> $cfg->{acl_root}->{children}->{sdn}->{children}->{vnets};
> + PVE::AccessControl::iterate_acl_tree("/", $vnets_acl_root, sub {
> + my ($path, $node) = @_;
> + if ($path =~ /\/(.*)\.(\d+)$/) {
> + $bridges_vlans_acl->{$1}->{$2} = 1;
> + }
> + });
> + }

because this iterates over *all* ACLs, only to then return upon the
first match above in $can_access_vnet..

it should also be

iterate_acl_tree("/sdn/vnets", ..) or "/sdn/vnets/$bridge" if vlan tags
live as children of the bridge (path and the node should match!). there
also is "find_acl_tree_node" so you don't need to make assumptions about
how the nodes are laid out.

I do wonder whether something that supports ranges would be more
appropriate rather than encoding this in ACLs directly though.. could
always be added later on though (e.g., named vlan objects that define a
set of vlans).
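
To make the proposed layout concrete, here is a toy ACL tree in the `/sdn/vnets/<bridge>/<tag>` shape sketched above, showing why the per-bridge scan is cheap (JavaScript for illustration only; tree shape, names, and the `allowed` flag are all hypothetical stand-ins for the real pve-access-control structures):

```javascript
// VLAN tags live as children of their bridge node, so checking "does
// the user have access to any VLAN on this bridge?" only walks that
// bridge's children and returns early on the first match.
const aclRoot = {
    sdn: { children: { vnets: { children: {
        vmbr0: {
            allowed: false, // no access to the bridge itself
            children: {
                100: { allowed: true },  // access to VLAN 100 only
                200: { allowed: false },
            },
        },
    } } } },
};

function findNode(root, path) {
    let node = { children: root };
    for (const part of path.split('/').filter(Boolean)) {
        node = node.children && node.children[part];
        if (!node) return undefined;
    }
    return node;
}

function canAccessBridge(bridge) {
    const node = findNode(aclRoot, `/sdn/vnets/${bridge}`);
    if (!node) return false;
    if (node.allowed) return true;
    for (const tag of Object.keys(node.children || {})) {
        if (node.children[tag].allowed) return true; // early return
    }
    return false;
}

console.log(canAccessBridge('vmbr0')); // true (via VLAN 100)
console.log(canAccessBridge('vmbr1')); // false (no such node)
```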

> +
>   for my $k (sort keys $ifaces->%*) {
>   my $type = $ifaces->{$k}->{type};
>   my $match = $tfilter eq $type || ($tfilter =~ 
> /^any(_local)?_bridge$/ && ($type eq 'bridge' || $type eq 'OVSBridge'));
> -- 
> 2.30.2
> 
> 





Re: [pve-devel] [PATCH v3 container 5/7] migration: only migrate volumes used by the guest

2023-06-02 Thread Fiona Ebner
Am 01.06.23 um 15:53 schrieb Aaron Lauterer:
> @@ -256,42 +262,18 @@ sub phase1 {

(...)

>  
> +# then all pending volumes
> +if (defined $conf->{pending} && %{$conf->{pending}}) {

Same style nits I mentioned in the qemu-server patch last time ;)

> + PVE::LXC::Config->foreach_volume($conf->{pending}, $test_mp);
> +}
> +

Other than that, both container patches:

Reviewed-by: Fiona Ebner 






Re: [pve-devel] [PATCH qemu-server 1/1] api2: add check_bridge_access for create/update vm

2023-06-02 Thread Fabian Grünbichler
a few more places that come to my mind that might warrant further
thinking or discussion:
- restoring a backup
- cloning a VM

On May 26, 2023 9:33 am, Alexandre Derumier wrote:
> Signed-off-by: Alexandre Derumier 
> ---
>  PVE/API2/Qemu.pm | 37 -
>  1 file changed, 36 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 587bb22..ebef93f 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -46,6 +46,12 @@ use PVE::SSHInfo;
>  use PVE::Replication;
>  use PVE::StorageTunnel;
>  
> +my $have_sdn;
> +eval {
> +require PVE::Network::SDN;
> +$have_sdn = 1;
> +};
> +
>  BEGIN {
>  if (!$ENV{PVE_GENERATING_DOCS}) {
>   require PVE::HA::Env::PVE2;
> @@ -601,6 +607,33 @@ my $check_vm_create_usb_perm = sub {
>  return 1;
>  };
>  
> +my $check_bridge_access = sub {
> +my ($rpcenv, $authuser, $param) = @_;
> +
> +return 1 if $authuser eq 'root@pam';
> +
> +foreach my $opt (keys %{$param}) {
> + next if $opt !~ m/^net\d+$/;
> + my $net = PVE::QemuServer::parse_net($param->{$opt});
> + my $bridge = $net->{bridge};
> + my $tag = $net->{tag};
> + my $zone = 'local';
> +
> + if ($have_sdn) {
> + my $vnet_cfg = PVE::Network::SDN::Vnets::config();
> + if (defined(my $vnet = 
> PVE::Network::SDN::Vnets::sdn_vnets_config($vnet_cfg, $bridge, 1))) {
> + $zone = $vnet->{zone};
> + }
> + }
> + return 1 if $rpcenv->check_any($authuser, "/sdn/zones/$zone", 
> ['SDN.Audit', 'SDN.Allocate'], 1);

why is this $noerr? should be a hard fail AFAICT! ah, re-reading the
cover letter, this is intentional. might be worth a callout here in a
comment (and once this is finalized, in the docs ;))

> + return 1 if $tag && $rpcenv->check_any($authuser, 
> "/sdn/vnets/$bridge.$tag", ['SDN.Audit', 'SDN.Allocate'], 1);

same here, I think I'd prefer /sdn/vnets/$bridge/$tag if we go down that
route! then this would also be improved, since having access to the tag
also checks access to the bridge, and this could be turned into a hard
fail, which it should be!

what about trunking?

> +
> + $rpcenv->check_any($authuser, "/sdn/vnets/$bridge", ['SDN.Audit', 
> 'SDN.Allocate']);

for the whole block of checks here:

what are the semantics of Audit and Allocate here? do we need another
level between "sees bridge/vnet" and "can administer/reconfigure
bridge/vnet" that says "can use bridge/vnet"?

> +}
> +return 1;
> +};
> +
>  my $check_vm_modify_config_perm = sub {
>  my ($rpcenv, $authuser, $vmid, $pool, $key_list) = @_;
>  
> @@ -878,7 +911,7 @@ __PACKAGE__->register_method({
>  
>   &$check_vm_create_serial_perm($rpcenv, $authuser, $vmid, $pool, 
> $param);
>   &$check_vm_create_usb_perm($rpcenv, $authuser, $vmid, $pool, 
> $param);
> -
> + &$check_bridge_access($rpcenv, $authuser, $param);
>   &$check_cpu_model_access($rpcenv, $authuser, $param);
>  
>   $check_drive_param->($param, $storecfg);
> @@ -1578,6 +1611,8 @@ my $update_vm_api  = sub {
>  
>  &$check_storage_access($rpcenv, $authuser, $storecfg, $vmid, $param);
>  
> +&$check_bridge_access($rpcenv, $authuser, $param);
> +
>  my $updatefn =  sub {
>  
>   my $conf = PVE::QemuConfig->load_config($vmid);
> -- 
> 2.30.2
> 
> 





Re: [pve-devel] [PATCH qemu-server 0/1] api2: add check_bridge_access

2023-06-02 Thread Fabian Grünbichler
On May 26, 2023 9:33 am, Alexandre Derumier wrote:
> For proxmox 8, following the pve-manager patch serie
> https://lists.proxmox.com/pipermail/pve-devel/2023-May/056970.html
> 
> This patch serie add check of permissions for bridge/vnets access
> (currently only at vm create/update, I'm note sureif they are other
> places where it should be added)
> 
> if user have access to a zone, it have access to all vnets + vnet vlans
> if user have access to a vnet, it have access to the vnet + vnet vlans
> if user have access to a specific vnet+vlan, it have access to the vlan only

the last part could be solved more elegantly IMHO by making tags
children of vnets (and delegating propagation to the propagation bit of
the ACL), see comments on individual patches.

nit: if you send a single commit, no need for a cover letter - and then
please include this information in the commit message, as series cover
letters are not included once the patch is applied!

> 
> Alexandre Derumier (1):
>   api2: add check_bridge_access for create/update vm
> 
>  PVE/API2/Qemu.pm | 37 -
>  1 file changed, 36 insertions(+), 1 deletion(-)
> 
> -- 
> 2.30.2
> 
> 





Re: [pve-devel] [PATCH-SERIES v3 qemu-server/manager/common] add and set x86-64-v2 as default model for new vms and detect best cpumodel

2023-06-02 Thread DERUMIER, Alexandre
> > 
> > at minimum, it could be interesting to expose the flag in the gui,
> > for
> > users really needed intel-amd migration.
> > 
> 
> I'm not opposed to that. We could also add a short note to the docs
> that
> it might be worth a try to disable the flag if you need cross-vendor
> migration and the default model causes issues. IIRC, from some forum
> threads, other people did have success with cross-vendor migration
> just
> selecting kvm64 or Nehalem in the UI, i.e. even with the flag. Likely
> depends on concrete host CPU models/kernels whether it's an issue or
> not.
> 
OK, thanks. I'll look into adding it in the GUI, and also write docs.

I'll also add descriptions of the new CPU models to the docs, along with
other vendor models and revisions.




Re: [pve-devel] [PATCH v3 docs 7/7] storage: add hint to avoid storage aliasing

2023-06-02 Thread Fiona Ebner
Am 01.06.23 um 15:53 schrieb Aaron Lauterer:
> Signed-off-by: Aaron Lauterer 
> ---
> I am happy for suggestions on how to improve the phrasing if it is not
> clear enough.
> 
>  pvesm.adoc | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/pvesm.adoc b/pvesm.adoc
> index 6ade1a4..7e91c50 100644
> --- a/pvesm.adoc
> +++ b/pvesm.adoc
> @@ -174,6 +174,9 @@ zfspool: local-zfs
>   content images,rootdir
>  
>  
> +CAUTION: It is not advisable to have multiple storage configurations pointing
> +to the exact same underlying storage. Such an _aliased_ storage configuration
> +can lead to unexpected behavior.

s/not advisable/problematic/ would sound a bit stronger IMHO

Maybe give some rationale and mention that volume IDs are not unique
anymore with such a configuration, and that PVE expects uniqueness in
certain places, maybe giving a quick example. And we could also mention
that if the content types are different, it can be fine, but is still
not recommended.





[pve-devel] [PATCH container/manager/docs] Cloudinit support for LXC

2023-06-02 Thread Leo Nunner
This series introduces basic cloudinit support for containers. All in
all, it works quite similarly to VMs, with the caveat that we only allow
network configuration through the already existing systems, and not via
cloud-init.

pve-container:

Leo Nunner (4):
  cloudinit: introduce config parameters
  cloudinit: basic implementation
  cloudinit: add dump command to pct
  cloudinit: add function dumping options for docs

 src/PVE/API2/LXC.pm|  36 +++
 src/PVE/API2/LXC/Config.pm |   7 ++-
 src/PVE/CLI/pct.pm |   4 ++
 src/PVE/LXC.pm |   1 +
 src/PVE/LXC/Cloudinit.pm   | 125 +
 src/PVE/LXC/Config.pm  |  64 +++
 src/PVE/LXC/Makefile   |   1 +
 src/lxc-pve-prestart-hook  |   5 ++
 8 files changed, 242 insertions(+), 1 deletion(-)
 create mode 100644 src/PVE/LXC/Cloudinit.pm

pve-manager:

Leo Nunner (2):
  cloudinit: rename qemu cloudinit panel
  cloudinit: introduce panel for LXCs

 www/manager6/Makefile  |   1 +
 www/manager6/lxc/CloudInit.js  | 237 +
 www/manager6/lxc/Config.js |   6 +
 www/manager6/qemu/CloudInit.js |   4 +-
 www/manager6/qemu/Config.js|   2 +-
 5 files changed, 247 insertions(+), 3 deletions(-)
 create mode 100644 www/manager6/lxc/CloudInit.js

pve-docs:

Leo Nunner (2):
  pct: add script to generate cloudinit options
  pct: document cloudinit for LXC

 Makefile   |   1 +
 gen-pct-cloud-init-opts.pl |  16 ++
 pct-cloud-init.adoc| 114 +
 pct.adoc   |   4 ++
 4 files changed, 135 insertions(+)
 create mode 100755 gen-pct-cloud-init-opts.pl
 create mode 100644 pct-cloud-init.adoc

-- 
2.30.2






[pve-devel] [PATCH container 2/4] cloudinit: basic implementation

2023-06-02 Thread Leo Nunner
The code to generate the actual configuration works pretty much the same
as with the VM system. We generate an instance ID by hashing the user
configuration, causing cloud-init to run every time said configuration
changes.

Instead of creating a config drive, we write files directly into the
volume of the container. We create a folder at
'/var/lib/cloud/seed/nocloud-net' and write the files 'user-data',
'vendor-data' and 'meta-data'. Cloud-init looks at the instance ID
inside 'meta-data' to decide whether it should run (again) or not.

Custom files need to be located inside the snippets directory, and
override the default generated configuration files.
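As a rough illustration of the seed layout described above, the following hand-run sketch mimics what ends up inside the container (illustrative only — the real implementation writes through the container setup plugin; file names follow the cloud-init 'nocloud' convention, values are made up):

```shell
# Sketch: build a nocloud-net seed directory like the one the hook writes.
seed="$(mktemp -d)/var/lib/cloud/seed/nocloud-net"
mkdir -p "$seed"

user_data='#cloud-config
user: demo
chpasswd:
  expire: False
'

# The instance ID is a SHA-1 over the user data, so editing the
# configuration changes the ID and makes cloud-init run again.
instance_id=$(printf '%s' "$user_data" | sha1sum | awk '{print $1}')

printf '%s' "$user_data" > "$seed/user-data"
printf 'instance-id: %s\n' "$instance_id" > "$seed/meta-data"
: > "$seed/vendor-data"   # empty unless a custom vendor snippet is configured

cat "$seed/meta-data"
```

Re-running this with an unchanged `user_data` reproduces the same instance ID, which is exactly why cloud-init stays quiet until the configuration changes.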

Signed-off-by: Leo Nunner 
---
 src/PVE/LXC.pm|   1 +
 src/PVE/LXC/Cloudinit.pm  | 114 ++
 src/PVE/LXC/Makefile  |   1 +
 src/lxc-pve-prestart-hook |   5 ++
 4 files changed, 121 insertions(+)
 create mode 100644 src/PVE/LXC/Cloudinit.pm

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index fe1a5cf..f4391c8 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -39,6 +39,7 @@ use PVE::Tools qw(
 use PVE::Syscall qw(:fsmount);
 
 use PVE::LXC::CGroup;
+use PVE::LXC::Cloudinit;
 use PVE::LXC::Config;
 use PVE::LXC::Monitor;
 use PVE::LXC::Tools;
diff --git a/src/PVE/LXC/Cloudinit.pm b/src/PVE/LXC/Cloudinit.pm
new file mode 100644
index 000..3e8617b
--- /dev/null
+++ b/src/PVE/LXC/Cloudinit.pm
@@ -0,0 +1,114 @@
+package PVE::LXC::Cloudinit;
+
+use strict;
+use warnings;
+
+use Digest::SHA;
+use File::Path;
+
+use PVE::LXC;
+
+sub gen_cloudinit_metadata {
+my ($user) = @_;
+
+my $uuid_str = Digest::SHA::sha1_hex($user);
+return cloudinit_metadata($uuid_str);
+}
+
+sub cloudinit_metadata {
+my ($uuid) = @_;
+my $raw = "";
+
+$raw .= "instance-id: $uuid\n";
+
+return $raw;
+}
+
+sub cloudinit_userdata {
+my ($conf) = @_;
+
+my $content = "#cloud-config\n";
+
+my $username = $conf->{ciuser};
+my $password = $conf->{cipassword};
+
+$content .= "user: $username\n" if defined($username);
+$content .= "password: $password\n" if defined($password);
+
+if (defined(my $keys = $conf->{sshkeys})) {
+   $keys = URI::Escape::uri_unescape($keys);
+   $keys = [map { my $key = $_; chomp $key; $key } split(/\n/, $keys)];
+   $keys = [grep { /\S/ } @$keys];
+   $content .= "ssh_authorized_keys:\n";
+   foreach my $k (@$keys) {
+   $content .= "  - $k\n";
+   }
+}
+$content .= "chpasswd:\n";
+$content .= "  expire: False\n";
+
+if (!defined($username) || $username ne 'root') {
+   $content .= "users:\n";
+   $content .= "  - default\n";
+}
+
+$content .= "package_upgrade: true\n" if $conf->{ciupgrade};
+
+return $content;
+}
+
+sub read_cloudinit_snippets_file {
+my ($storage_conf, $volid) = @_;
+
+my ($full_path, undef, $type) = PVE::Storage::path($storage_conf, $volid);
+die "$volid is not in the snippets directory\n" if $type ne 'snippets';
+return PVE::Tools::file_get_contents($full_path, 1 * 1024 * 1024);
+}
+
+sub read_custom_cloudinit_files {
+my ($conf) = @_;
+
+my $cloudinit_conf = $conf->{cicustom};
+my $files = $cloudinit_conf ? PVE::JSONSchema::parse_property_string('pve-pct-cicustom', $cloudinit_conf) : {};
+
+my $user_volid = $files->{user};
+my $vendor_volid = $files->{vendor};
+
+my $storage_conf = PVE::Storage::config();
+
+my $user_data;
+if ($user_volid) {
+   $user_data = read_cloudinit_snippets_file($storage_conf, $user_volid);
+}
+
+my $vendor_data;
+if ($vendor_volid) {
+   $vendor_data = read_cloudinit_snippets_file($storage_conf, $vendor_volid);
+}
+
+return ($user_data, $vendor_data);
+}
+
+sub create_cloudinit_files {
+my ($conf, $setup) = @_;
+
+my $cloudinit_dir = "/var/lib/cloud/seed/nocloud-net";
+
+my ($user_data, $vendor_data) = read_custom_cloudinit_files($conf);
+$user_data = cloudinit_userdata($conf) if !defined($user_data);
+$vendor_data = '' if !defined($vendor_data);
+
+my $meta_data = gen_cloudinit_metadata($user_data);
+
+$setup->protected_call(sub {
+   my $plugin = $setup->{plugin};
+
+   $plugin->ct_make_path($cloudinit_dir);
+
+   $plugin->ct_file_set_contents("$cloudinit_dir/user-data", $user_data);
+   $plugin->ct_file_set_contents("$cloudinit_dir/vendor-data", $vendor_data);
+   $plugin->ct_file_set_contents("$cloudinit_dir/meta-data", $meta_data);
+});
+}
+
+1;
diff --git a/src/PVE/LXC/Makefile b/src/PVE/LXC/Makefile
index a190260..5d595ba 100644
--- a/src/PVE/LXC/Makefile
+++ b/src/PVE/LXC/Makefile
@@ -1,5 +1,6 @@
 SOURCES= \
CGroup.pm \
+   Cloudinit.pm \
Command.pm \
Config.pm \
Create.pm \
diff --git a/src/lxc-pve-prestart-hook b/src/lxc-pve-prestart-hook
index 3bdf7e4..e5932d2 100755
--- a/src/lxc-pve-prestart-hook
+++ b/src/lxc-pve-prestart-hook
@@ -12,6 +12,7 @@ use POSIX;
 use PVE::CGroup;

[pve-devel] [PATCH manager 2/2] cloudinit: introduce panel for LXCs

2023-06-02 Thread Leo Nunner
based on the already existing panel for VMs. Some things have been
changed: there is no network configuration, and a separate "enable"
option toggles cloud-init (similar to adding/removing a cloud-init
drive for VMs).

Signed-off-by: Leo Nunner 
---
 www/manager6/Makefile |   1 +
 www/manager6/lxc/CloudInit.js | 237 ++
 www/manager6/lxc/Config.js|   6 +
 3 files changed, 244 insertions(+)
 create mode 100644 www/manager6/lxc/CloudInit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 336355abf..05a2fd558 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -168,6 +168,7 @@ JSSRC=  
\
dc/UserTagAccessEdit.js \
dc/RegisteredTagsEdit.js\
lxc/CmdMenu.js  \
+   lxc/CloudInit.js\
lxc/Config.js   \
lxc/CreateWizard.js \
lxc/DNS.js  \
diff --git a/www/manager6/lxc/CloudInit.js b/www/manager6/lxc/CloudInit.js
new file mode 100644
index 0..11d5448de
--- /dev/null
+++ b/www/manager6/lxc/CloudInit.js
@@ -0,0 +1,237 @@
+Ext.define('PVE.lxc.CloudInit', {
+extend: 'Proxmox.grid.PendingObjectGrid',
+xtype: 'pveLxcCiPanel',
+
+tbar: [
+   {
+   xtype: 'proxmoxButton',
+   disabled: true,
+   dangerous: true,
+   confirmMsg: function(rec) {
+   let view = this.up('grid');
+   var warn = gettext('Are you sure you want to remove entry {0}');
+
+   var entry = rec.data.key;
+   var msg = Ext.String.format(warn, "'"
+   + view.renderKey(entry, {}, rec) + "'");
+
+   return msg;
+   },
+   enableFn: function(record) {
+   let view = this.up('grid');
+   var caps = Ext.state.Manager.get('GuiCap');
+   if (view.rows[record.data.key].never_delete ||
+   !caps.vms['VM.Config.Network']) {
+   return false;
+   }
+
+   if (record.data.key === 'cipassword' && !record.data.value) {
+   return false;
+   }
+   return true;
+   },
+   handler: function() {
+   let view = this.up('grid');
+   let records = view.getSelection();
+   if (!records || !records.length) {
+   return;
+   }
+
+   var id = records[0].data.key;
+
+   var params = {};
+   params.delete = id;
+   Proxmox.Utils.API2Request({
+   url: view.baseurl + '/config',
+   waitMsgTarget: view,
+   method: 'PUT',
+   params: params,
+   failure: function(response, opts) {
+   Ext.Msg.alert('Error', response.htmlStatus);
+   },
+   callback: function() {
+   view.reload();
+   },
+   });
+   },
+   text: gettext('Remove'),
+   },
+   {
+   xtype: 'proxmoxButton',
+   disabled: true,
+   enableFn: function(rec) {
+   let view = this.up('pveLxcCiPanel');
+   return !!view.rows[rec.data.key].editor;
+   },
+   handler: function() {
+   let view = this.up('grid');
+   view.run_editor();
+   },
+   text: gettext('Edit'),
+   },
+],
+
+border: false,
+
+renderKey: function(key, metaData, rec, rowIndex, colIndex, store) {
+   var me = this;
+   var rows = me.rows;
+   var rowdef = rows[key] || {};
+
+   var icon = "";
+   if (rowdef.iconCls) {
+   icon = ' ';
+   }
+   return icon + (rowdef.header || key);
+},
+
+listeners: {
+   activate: function() {
+   var me = this;
+   me.rstore.startUpdate();
+   },
+   itemdblclick: function() {
+   var me = this;
+   me.run_editor();
+   },
+},
+
+initComponent: function() {
+   var me = this;
+
+   var nodename = me.pveSelNode.data.node;
+   if (!nodename) {
+   throw "no node name specified";
+   }
+
+   var vmid = me.pveSelNode.data.vmid;
+   if (!vmid) {
+   throw "no VM ID specified";
+   }
+   var caps = Ext.state.Manager.get('GuiCap');
+   me.baseurl = '/api2/extjs/nodes/' + nodename + '/lxc/' + vmid;
+   me.url = me.baseurl + '/pending';
+   me.editorConfig.url = me.baseurl + '/config';
+   me.editorConfig.pveSelNode = me.pveSelNode;
+
+   let caps_ci = caps.vms['VM.Config.Cloudinit'] || caps.vms['VM.Config.Network'];
+   /* editor is string and object */
+   me.r

[pve-devel] [PATCH manager 1/2] cloudinit: rename qemu cloudinit panel

2023-06-02 Thread Leo Nunner
Signed-off-by: Leo Nunner 
---
 www/manager6/qemu/CloudInit.js | 4 ++--
 www/manager6/qemu/Config.js| 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/www/manager6/qemu/CloudInit.js b/www/manager6/qemu/CloudInit.js
index 77ff93d41..14117ff6b 100644
--- a/www/manager6/qemu/CloudInit.js
+++ b/www/manager6/qemu/CloudInit.js
@@ -1,6 +1,6 @@
 Ext.define('PVE.qemu.CloudInit', {
 extend: 'Proxmox.grid.PendingObjectGrid',
-xtype: 'pveCiPanel',
+xtype: 'pveQemuCiPanel',
 
 onlineHelp: 'qm_cloud_init',
 
@@ -66,7 +66,7 @@ Ext.define('PVE.qemu.CloudInit', {
xtype: 'proxmoxButton',
disabled: true,
enableFn: function(rec) {
-   let view = this.up('pveCiPanel');
+   let view = this.up('pveQemuCiPanel');
return !!view.rows[rec.data.key].editor;
},
handler: function() {
diff --git a/www/manager6/qemu/Config.js b/www/manager6/qemu/Config.js
index 94c540c59..03e1e6d8d 100644
--- a/www/manager6/qemu/Config.js
+++ b/www/manager6/qemu/Config.js
@@ -284,7 +284,7 @@ Ext.define('PVE.qemu.Config', {
title: 'Cloud-Init',
itemId: 'cloudinit',
iconCls: 'fa fa-cloud',
-   xtype: 'pveCiPanel',
+   xtype: 'pveQemuCiPanel',
},
{
title: gettext('Options'),
-- 
2.30.2






[pve-devel] [PATCH container 4/4] cloudinit: add function dumping options for docs

2023-06-02 Thread Leo Nunner
Adds a cloudinit_config_properties function which dumps the config
parameters from the cloudinit config description. This is called
automatically when generating the docs.

Signed-off-by: Leo Nunner 
---
 src/PVE/LXC/Config.pm | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index f8bfb95..49d6479 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -1244,6 +1244,9 @@ sub check_type {
 }
 }
 
+sub cloudinit_config_properties {
+return Storable::dclone($confdesc_cloudinit);
+}
 
 # add JSON properties for create and set function
 sub json_config_properties {
-- 
2.30.2






[pve-devel] [PATCH container 3/4] cloudinit: add dump command to pct

2023-06-02 Thread Leo Nunner
Introduce a 'pct cloudinit dump <vmid> <type>' command to dump the
generated cloudinit configuration for a section.

Signed-off-by: Leo Nunner 
---
 src/PVE/API2/LXC.pm  | 33 +
 src/PVE/CLI/pct.pm   |  4 
 src/PVE/LXC/Cloudinit.pm | 11 +++
 3 files changed, 48 insertions(+)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index e585509..2cae727 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -2963,4 +2963,37 @@ __PACKAGE__->register_method({
 
return { socket => $socket };
 }});
+
+__PACKAGE__->register_method({
+name => 'cloudinit_generated_config_dump',
+path => '{vmid}/cloudinit/dump',
+method => 'GET',
+proxyto => 'node',
+description => "Get automatically generated cloudinit config.",
+permissions => {
+   check => ['perm', '/vms/{vmid}', [ 'VM.Audit' ]],
+},
+parameters => {
+   additionalProperties => 0,
+   properties => {
+   node => get_standard_option('pve-node'),
+   vmid => get_standard_option('pve-vmid', { completion => \&PVE::LXC::complete_ctid }),
+   type => {
+   description => 'Config type.',
+   type => 'string',
+   enum => ['user', 'meta'],
+   },
+   },
+},
+returns => {
+   type => 'string',
+},
+code => sub {
+   my ($param) = @_;
+
+   my $conf = PVE::LXC::Config->load_config($param->{vmid});
+
+   return PVE::LXC::Cloudinit::dump_cloudinit_config($conf, $param->{type});
+}});
+
 1;
diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
index ff75d33..69f3560 100755
--- a/src/PVE/CLI/pct.pm
+++ b/src/PVE/CLI/pct.pm
@@ -1000,6 +1000,10 @@ our $cmddef = {
 rescan  => [ __PACKAGE__, 'rescan', []],
 cpusets => [ __PACKAGE__, 'cpusets', []],
 fstrim => [ __PACKAGE__, 'fstrim', ['vmid']],
+
+cloudinit => {
+   dump => [ "PVE::API2::LXC", 'cloudinit_generated_config_dump', ['vmid', 'type'], { node => $nodename }, sub { print "$_[0]\n"; }],
+},
 };
 
 1;
diff --git a/src/PVE/LXC/Cloudinit.pm b/src/PVE/LXC/Cloudinit.pm
index 3e8617b..b6fec2c 100644
--- a/src/PVE/LXC/Cloudinit.pm
+++ b/src/PVE/LXC/Cloudinit.pm
@@ -111,4 +111,15 @@ sub create_cloudinit_files {
 });
 }
 
+sub dump_cloudinit_config {
+my ($conf, $type) = @_;
+
+if ($type eq 'user') {
+   return cloudinit_userdata($conf);
+} else { # metadata config
+   my $user = cloudinit_userdata($conf);
+   return gen_cloudinit_metadata($user);
+}
+}
+
 1;
-- 
2.30.2






[pve-devel] [PATCH docs 1/2] pct: add script to generate cloudinit options

2023-06-02 Thread Leo Nunner
…the same way as it's already being done for VMs.

Signed-off-by: Leo Nunner 
---
 gen-pct-cloud-init-opts.pl | 16 
 1 file changed, 16 insertions(+)
 create mode 100755 gen-pct-cloud-init-opts.pl

diff --git a/gen-pct-cloud-init-opts.pl b/gen-pct-cloud-init-opts.pl
new file mode 100755
index 000..c22c4d1
--- /dev/null
+++ b/gen-pct-cloud-init-opts.pl
@@ -0,0 +1,16 @@
+#!/usr/bin/perl
+
+use lib '.';
+use strict;
+use warnings;
+use PVE::JSONSchema;
+use PVE::RESTHandler;
+use PVE::LXC::Config;
+
+my $prop = PVE::LXC::Config::cloudinit_config_properties();
+
+my $data = PVE::RESTHandler::dump_properties($prop, 'asciidoc', 'config');
+
+$data =~ s/cloud-init: //g;
+
+print $data;
-- 
2.30.2





[pve-devel] [PATCH container 1/4] cloudinit: introduce config parameters

2023-06-02 Thread Leo Nunner
Introduce configuration parameters for cloud-init. Like with VMs, it's
possible to specify:
- user
- password
- ssh keys
- enable/disable updates on first boot

It's also possible to pass through custom config files for the user and
vendor settings. We don't allow configuring the network through
cloud-init, since it will clash with whatever configuration we already
did for the container.
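Taken together, the new keys could appear in a container config roughly like this (an illustrative fragment with made-up values; key names and the urlencoded `sshkeys` format as defined in this patch — `cienable` and the custom vendor snippet are hypothetical choices, not defaults):

```
# /etc/pve/lxc/3000.conf (hypothetical excerpt)
cienable: 1
ciuser: demo
ciupgrade: 1
cicustom: vendor=local:snippets/vendor.yaml
sshkeys: ssh-ed25519%20AAAA...%20demo%40example
```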

Signed-off-by: Leo Nunner 
---
 src/PVE/API2/LXC.pm|  3 ++
 src/PVE/API2/LXC/Config.pm |  7 -
 src/PVE/LXC/Config.pm  | 61 ++
 3 files changed, 70 insertions(+), 1 deletion(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 50c9eaf..e585509 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -2492,6 +2492,9 @@ __PACKAGE__->register_method({
 
 	my $pending_delete_hash = PVE::LXC::Config->parse_pending_delete($conf->{pending}->{delete});
 
+   $conf->{cipassword} = '**' if defined($conf->{cipassword});
+   $conf->{pending}->{cipassword} = '**' if defined($conf->{pending}->{cipassword});
+
 	return PVE::GuestHelpers::config_with_pending_array($conf, $pending_delete_hash);
 }});
 
diff --git a/src/PVE/API2/LXC/Config.pm b/src/PVE/API2/LXC/Config.pm
index e6c0980..0ff4115 100644
--- a/src/PVE/API2/LXC/Config.pm
+++ b/src/PVE/API2/LXC/Config.pm
@@ -79,7 +79,7 @@ __PACKAGE__->register_method({
} else {
 	    $conf = PVE::LXC::Config->load_current_config($param->{vmid}, $param->{current});
}
-
+   $conf->{cipassword} = '**' if $conf->{cipassword};
return $conf;
 }});
 
@@ -148,6 +148,11 @@ __PACKAGE__->register_method({
$param->{cpuunits} = PVE::CGroup::clamp_cpu_shares($param->{cpuunits})
 	    if defined($param->{cpuunits}); # clamp value depending on cgroup version
 
+   if (defined(my $cipassword = $param->{cipassword})) {
+   $param->{cipassword} = PVE::Tools::encrypt_pw($cipassword)
+   if $cipassword !~ /^\$(?:[156]|2[ay])(\$.+){2}/;
+   }
+
my $code = sub {
 
my $conf = PVE::LXC::Config->load_config($vmid);
diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index ac9db94..f8bfb95 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -442,6 +442,63 @@ my $features_desc = {
 },
 };
 
+my $cicustom_fmt = {
+user => {
+   type => 'string',
+   optional => 1,
+   description => 'To pass a custom file containing all user data to the container via cloud-init.',
+   format => 'pve-volume-id',
+   format_description => 'volume',
+},
+vendor => {
+   type => 'string',
+   optional => 1,
+   description => 'To pass a custom file containing all vendor data to the container via cloud-init.',
+   format => 'pve-volume-id',
+   format_description => 'volume',
+},
+};
+PVE::JSONSchema::register_format('pve-pct-cicustom', $cicustom_fmt);
+
+my $confdesc_cloudinit = {
+cienable => {
+   optional => 1,
+   type => 'boolean',
+   description => "cloud-init: provide cloud-init configuration to container.",
+},
+ciuser => {
+   optional => 1,
+   type => 'string',
+   description => "cloud-init: User name to change ssh keys and password for instead of the"
+   ." image's configured default user.",
+},
+cipassword => {
+   optional => 1,
+   type => 'string',
+   description => 'cloud-init: Password to assign the user. Using this is generally not'
+   .' recommended. Use ssh keys instead. Also note that older cloud-init versions do not'
+   .' support hashed passwords.',
+},
+ciupgrade => {
+   optional => 1,
+   type => 'boolean',
+   description => 'cloud-init: do an automatic package update on boot.',
+},
+cicustom => {
+   optional => 1,
+   type => 'string',
+   description => 'cloud-init: Specify custom files to replace the automatically generated'
+   .' ones at start.',
+   format => 'pve-pct-cicustom',
+},
+sshkeys => {
+   optional => 1,
+   type => 'string',
+   format => 'urlencoded',
+   description => "cloud-init: Setup public SSH keys (one key per line, OpenSSH format).",
+},
+};
+
 my $confdesc = {
 lock => {
optional => 1,
@@ -614,6 +671,10 @@ my $confdesc = {
 },
 };
 
+foreach my $key (keys %$confdesc_cloudinit) {
+$confdesc->{$key} = $confdesc_cloudinit->{$key};
+}
+
 my $valid_lxc_conf_keys = {
 'lxc.apparmor.profile' => 1,
 'lxc.apparmor.allow_incomplete' => 1,
-- 
2.30.2






[pve-devel] [PATCH docs 2/2] pct: document cloudinit for LXC

2023-06-02 Thread Leo Nunner
adds documentation for Cloud-Init for containers. Most of it has been
taken from the VM documentation, since the configuration mostly works
the same. Added a script to extract the cloudinit parameters the same
way it's already done for VMs.

Signed-off-by: Leo Nunner 
---
 Makefile|   1 +
 pct-cloud-init.adoc | 114 
 pct.adoc|   4 ++
 3 files changed, 119 insertions(+)
 create mode 100644 pct-cloud-init.adoc

diff --git a/Makefile b/Makefile
index b8a0666..ec5319a 100644
--- a/Makefile
+++ b/Makefile
@@ -53,6 +53,7 @@ GEN_SCRIPTS=  \
gen-pct.conf.5-opts.pl  \
gen-pct-network-opts.pl \
gen-pct-mountpoint-opts.pl  \
+   gen-pct-cloud-init-opts.pl  \
gen-qm.conf.5-opts.pl   \
gen-cpu-models.conf.5-opts.pl   \
gen-qm-cloud-init-opts.pl   \
diff --git a/pct-cloud-init.adoc b/pct-cloud-init.adoc
new file mode 100644
index 000..1398e7b
--- /dev/null
+++ b/pct-cloud-init.adoc
@@ -0,0 +1,114 @@
+[[pct_cloud_init]]
+Cloud-Init Support
+--
+ifdef::wiki[]
+:pve-toplevel:
+endif::wiki[]
+
+{pve} supports the Cloud-init 'nocloud' format for LXC.
+
+{pve} writes the Cloud-init configuration directly into the container.
+When the 'cienable' option is set to true, the configuration is updated
+directly before before every boot. Each configuration is identified by
+an 'instance id', which Cloud-Init uses to decide whether it should run
+again or not.
+
+Preparing Cloud-Init Templates
+~~
+
+The first step is to prepare your container. Any template will suffice.
+Simply install the Cloud-Init packages inside the CT that you want to
+prepare. On Debian/Ubuntu based systems this is as simple as:
+
+
+apt install cloud-init
+
+
+WARNING: This command is *not* intended to be executed on the {pve} host, but
+only inside the container.
+
+A simple preparation for a cloud-init capable container could look like this:
+
+
+# download an image
+pveam download local ubuntu-22.10-standard_22.10-1_amd64.tar.zst
+
+# create a new container
+pct create 9000 local:vztmpl/ubuntu-22.10-standard_22.10-1_amd64.tar.zst \
+--storage local-lvm --memory 512 \
+--net0 name=eth0,bridge=vmbr0,ip=dhcp,type=veth
+
+
+Now, the package can be installed inside the container:
+
+
+# install the needed packages
+pct start 9000
+pct exec 9000 apt update
+pct exec 9000 apt install cloud-init
+pct stop 9000
+
+
+Finally, it can be helpful to turn the container into a template, to be able
+to quickly create clones whenever needed.
+
+
+pct template 9000
+
+
+Deploying Cloud-Init Templates
+~~
+
+A template can easily be deployed by cloning:
+
+
+pct clone 9000 3000 --hostname ubuntu
+
+
+Cloud-Init can now be enabled, and will run automatically on the next boot.
+
+
+pct set 3000 --cienable=1
+
+
+Custom Cloud-Init Configuration
+~~~
+
+The Cloud-Init integration also allows custom config files to be used instead
+of the automatically generated configs. This is done via the `cicustom`
+option on the command line:
+
+
+pct set 9000 --cicustom "user=<volume>,vendor=<volume>"
+
+
+The custom config files have to be on a storage that supports snippets and have
+to be available on all nodes the container is going to be migrated to. Otherwise the
+container won't be able to start.
+For example:
+
+
+pct set 9000 --cicustom "user=local:snippets/userconfig.yaml"
+
+
+In contrast to the options for VMs, containers only support custom 'user'
+and 'vendor' scripts, but not 'network'. Network configuration is done through
+the already existing facilities integrated into {pve}. They can all be specified
+together or mixed and matched however needed.
+The automatically generated config will be used for any section that doesn't have a
+custom config file specified.
+
+The generated config can be dumped to serve as a base for custom configs:
+
+
+pct cloudinit dump 9000 user
+
+
+The same command exists for `meta`.
+
+
+Cloud-Init specific Options
+~~~
+
+include::pct-cloud-init-opts.adoc[]
+
diff --git a/pct.adoc b/pct.adoc
index fdeb6dd..7cfb09e 100644
--- a/pct.adoc
+++ b/pct.adoc
@@ -622,6 +622,10 @@ It will be called during various phases of the guests lifetime.  For an example
 and documentation see the example script under
 `/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
 
+ifndef::wiki[]
+include::pct-cloud-init.adoc[]
+endif::wiki[]
+
 Security Considerations
 ---
 
-- 
2.30.2






Re: [pve-devel] [PATCH qemu-server 1/1] api2: add check_bridge_access for create/update vm

2023-06-02 Thread DERUMIER, Alexandre
Le vendredi 02 juin 2023 à 13:43 +0200, Fabian Grünbichler a écrit :
> a few more places that come to my mind that might warrant further
> thinking or discussion:
> - restoring a backup
Doesn't it also use create_vm?

__PACKAGE__->register_method({
name => 'create_vm',
path => '',
method => 'POST',
description => "Create or restore a virtual machine.",


> - cloning a VM

For cloning, we can't change the target bridge, so if the user has
access to the clone, isn't that enough?



> 
> On May 26, 2023 9:33 am, Alexandre Derumier wrote:
> > Signed-off-by: Alexandre Derumier 
> > ---
> >  PVE/API2/Qemu.pm | 37 -
> >  1 file changed, 36 insertions(+), 1 deletion(-)
> > 
> > diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> > index 587bb22..ebef93f 100644
> > --- a/PVE/API2/Qemu.pm
> > +++ b/PVE/API2/Qemu.pm
> > @@ -46,6 +46,12 @@ use PVE::SSHInfo;
> >  use PVE::Replication;
> >  use PVE::StorageTunnel;
> >  
> > +my $have_sdn;
> > +eval {
> > +    require PVE::Network::SDN;
> > +    $have_sdn = 1;
> > +};
> > +
> >  BEGIN {
> >  if (!$ENV{PVE_GENERATING_DOCS}) {
> > require PVE::HA::Env::PVE2;
> > @@ -601,6 +607,33 @@ my $check_vm_create_usb_perm = sub {
> >  return 1;
> >  };
> >  
> > +my $check_bridge_access = sub {
> > +    my ($rpcenv, $authuser, $param) = @_;
> > +
> > +    return 1 if $authuser eq 'root@pam';
> > +
> > +    foreach my $opt (keys %{$param}) {
> > +   next if $opt !~ m/^net\d+$/;
> > +   my $net = PVE::QemuServer::parse_net($param->{$opt});
> > +   my $bridge = $net->{bridge};
> > +   my $tag = $net->{tag};
> > +   my $zone = 'local';
> > +
> > +   if ($have_sdn) {
> > +   my $vnet_cfg = PVE::Network::SDN::Vnets::config();
> > +   if (defined(my $vnet =
> > PVE::Network::SDN::Vnets::sdn_vnets_config($vnet_cfg, $bridge, 1)))
> > {
> > +   $zone = $vnet->{zone};
> > +   }
> > +   }
> > +   return 1 if $rpcenv->check_any($authuser,
> > "/sdn/zones/$zone", ['SDN.Audit', 'SDN.Allocate'], 1);
> 
> why is this $noerr? should be a hard fail AFAICT! Ah, re-reading the
> cover letter, this is intentional. might be worth a callout here in a
> comment (and once this is finalized, in the docs ;))
Yes, because we need to check later if we have permissions on the
bridge/tag.

> 
> > +   return 1 if $tag && $rpcenv->check_any($authuser,
> > "/sdn/vnets/$bridge.$tag", ['SDN.Audit', 'SDN.Allocate'], 1);
> 
> same here, I think I'd prefer /sdn/vnets/$bridge/$tag if we go down
> that
> route! then this would also be improved, since having access to the
> tag
> also checks access to the bridge, and this could be turned into a
> hard
> fail, which it should be!
> 
I had tried it this way; I don't remember exactly, but I think I had a
problem somewhere. I'll retry it, as it's cleaner indeed.


> what about trunking?
> 
Not implemented indeed, but I think we could compare the trunk VLAN
list to the VLAN path permissions.

Or maybe we just want the user to have full bridge permissions to
define any VLANs in the trunk?


> > +
> > +   $rpcenv->check_any($authuser, "/sdn/vnets/$bridge",
> > ['SDN.Audit', 'SDN.Allocate']);
> 
> for the whole block of checks here:
> 
> what are the semantics of Audit and Allocate here? do we need another
> level between "sees bridge/vnet" and "can administer/reconfigure
> bridge/vnet" that says "can use bridge/vnet"?

Ah, the SDN.Allocate on /sdn/vnets indeed doesn't make sense
(it's only on /sdn/zones, where you allocate the vnets).

It should be SDN.Audit only.




> 
> > +    }
> > +    return 1;
> > +};
> > +
> >  my $check_vm_modify_config_perm = sub {
> >  my ($rpcenv, $authuser, $vmid, $pool, $key_list) = @_;
> >  
> > @@ -878,7 +911,7 @@ __PACKAGE__->register_method({
> >  
> >     &$check_vm_create_serial_perm($rpcenv, $authuser,
> > $vmid, $pool, $param);
> >     &$check_vm_create_usb_perm($rpcenv, $authuser, $vmid,
> > $pool, $param);
> > -
> > +   &$check_bridge_access($rpcenv, $authuser, $param);
> >     &$check_cpu_model_access($rpcenv, $authuser, $param);
> >  
> >     $check_drive_param->($param, $storecfg);
> > @@ -1578,6 +1611,8 @@ my $update_vm_api  = sub {
> >  
> >  &$check_storage_access($rpcenv, $authuser, $storecfg, $vmid,
> > $param);
> >  
> > +    &$check_bridge_access($rpcenv, $authuser, $param);
> > +
> >  my $updatefn =  sub {
> >  
> > my $conf = PVE::QemuConfig->load_config($vmid);
> > -- 
> > 2.30.2
> > 
> > 

[pve-devel] [PATCH pve-manager v3 1/1] fix #4551: ui: use gettext on hardcoded byte units

2023-06-02 Thread Noel Ullreich
Since some languages translate byte units like 'GiB' or write them in their
own script, this patch wraps units in the `gettext` function.

While most occurrences of byte strings can be translated within the
`format_size` function in `proxmox-widget-toolkit/src/Utils.js`, this patch
catches those instances that are not translated.
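The change itself is just a template literal around two independently translatable fragments. A minimal stand-in sketch (the `gettext` here is a dummy identity function, not the widget toolkit's real implementation, which would look both pieces up in a translation catalog):

```javascript
// Identity stand-in for the toolkit's gettext(); a real catalog would
// translate the label and the unit separately, e.g. 'GiB' into a
// localized script.
const gettext = (s) => s;

// Before: gettext('DB size') + ' (GiB)'  -- the unit was never translated.
// After: both the label and the unit go through gettext.
const fieldLabel = `${gettext('DB size')} (${gettext('GiB')})`;
console.log(fieldLabel); // prints "DB size (GiB)"
```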

Signed-off-by: Noel Ullreich 
---
changes from v1:
* improved commit messages
* put hardcoded byte unit strings in a template string

changes from v2:
* rebased the patch

 www/manager6/ceph/OSD.js | 4 ++--
 www/manager6/form/DiskStorageSelector.js | 2 +-
 www/manager6/lxc/MPResize.js | 2 +-
 www/manager6/qemu/HDResize.js| 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/www/manager6/ceph/OSD.js b/www/manager6/ceph/OSD.js
index 69d5061f..d2caafa4 100644
--- a/www/manager6/ceph/OSD.js
+++ b/www/manager6/ceph/OSD.js
@@ -83,7 +83,7 @@ Ext.define('PVE.CephCreateOsd', {
{
xtype: 'numberfield',
name: 'db_dev_size',
-   fieldLabel: gettext('DB size') + ' (GiB)',
+   fieldLabel: `${gettext('DB size')} (${gettext('GiB')})`,
minValue: 1,
maxValue: 128*1024,
decimalPrecision: 2,
@@ -137,7 +137,7 @@ Ext.define('PVE.CephCreateOsd', {
{
xtype: 'numberfield',
name: 'wal_dev_size',
-   fieldLabel: gettext('WAL size') + ' (GiB)',
+   fieldLabel: `${gettext('WAL size')} (${gettext('GiB')})`,
minValue: 0.5,
maxValue: 128*1024,
decimalPrecision: 2,
diff --git a/www/manager6/form/DiskStorageSelector.js b/www/manager6/form/DiskStorageSelector.js
index abd46deb..d408b815 100644
--- a/www/manager6/form/DiskStorageSelector.js
+++ b/www/manager6/form/DiskStorageSelector.js
@@ -148,7 +148,7 @@ Ext.define('PVE.form.DiskStorageSelector', {
itemId: 'disksize',
reference: 'disksize',
name: 'disksize',
-   fieldLabel: gettext('Disk size') + ' (GiB)',
+   fieldLabel: `${gettext('Disk size')} (${gettext('GiB')})`,
hidden: me.hideSize,
disabled: me.hideSize,
minValue: 0.001,
diff --git a/www/manager6/lxc/MPResize.js b/www/manager6/lxc/MPResize.js
index 881c037b..d560b788 100644
--- a/www/manager6/lxc/MPResize.js
+++ b/www/manager6/lxc/MPResize.js
@@ -52,7 +52,7 @@ Ext.define('PVE.window.MPResize', {
maxValue: 128*1024,
decimalPrecision: 3,
value: '0',
-   fieldLabel: gettext('Size Increment') + ' (GiB)',
+   fieldLabel: `${gettext('Size Increment')} (${gettext('GiB')})`,
allowBlank: false,
});
 
diff --git a/www/manager6/qemu/HDResize.js b/www/manager6/qemu/HDResize.js
index f9c7290d..29ff253b 100644
--- a/www/manager6/qemu/HDResize.js
+++ b/www/manager6/qemu/HDResize.js
@@ -49,7 +49,7 @@ Ext.define('PVE.window.HDResize', {
maxValue: 128*1024,
decimalPrecision: 3,
value: '0',
-   fieldLabel: gettext('Size Increment') + ' (GiB)',
+   fieldLabel: `${gettext('Size Increment')} (${gettext('GiB')})`,
allowBlank: false,
});
 
-- 
2.30.2
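As a side note, the effect of wrapping the unit itself in `gettext` can be sketched outside of ExtJS with a minimal stand-in catalog; the lookup table and its Japanese entries below are purely hypothetical, just to show the mechanism:

```python
# Minimal stand-in for ExtJS' gettext(): a per-locale catalog lookup.
# The catalog entries here are hypothetical illustrations.
catalog = {"DB size": "DBサイズ", "GiB": "ギビバイト"}

def gettext(msgid):
    # Fall back to the msgid itself when no translation exists
    return catalog.get(msgid, msgid)

# Before the patch: the unit is hardcoded and never reaches the catalog
label_old = gettext("DB size") + " (GiB)"

# After the patch: the unit is run through gettext as well
label_new = f"{gettext('DB size')} ({gettext('GiB')})"
```

With the old concatenation, only "DB size" gets translated; with the template-string form, the unit is translatable too.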



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH pve-manager 2/4] add permissions management for "local" network zone

2023-06-02 Thread DERUMIER, Alexandre
On Friday, 2 June 2023 at 13:39 +0200, Fabian Grünbichler wrote:
> On May 26, 2023 9:27 am, Alexandre Derumier wrote:
> > add a default virtual zone called 'local' in the resource tree,
> > and handle permissions like a true sdn zone
> > 
> > Signed-off-by: Alexandre Derumier 
> > ---
> >  PVE/API2/Cluster.pm | 12 
> >  PVE/API2/Network.pm |  5 +++--
> >  www/manager6/sdn/ZoneContentView.js | 27
> > ++-
> >  3 files changed, 41 insertions(+), 3 deletions(-)
> > 
> > diff --git a/PVE/API2/Cluster.pm b/PVE/API2/Cluster.pm
> > index 2e942368..e8f5e06e 100644
> > --- a/PVE/API2/Cluster.pm
> > +++ b/PVE/API2/Cluster.pm
> > @@ -474,6 +474,18 @@ __PACKAGE__->register_method({
> >     }
> > }
> >  
> > +   #add default "local" network zone
> > +   foreach my $node (@$nodelist) {
> > +   my $local_sdn = {
> > +   id => "sdn/$node/local",
> > +   sdn => 'local',
> > +   node => $node,
> > +   type => 'sdn',
> > +   status => 'ok',
> > +   };
> > +   push @$res, $local_sdn;
> > +   }
> > +
> 
> should this also be guarded by the type check like below? is there
> anything that ensures that a 'local' zone doesn't already exist as a
> regular SDN-managed zone?

I was thinking more of forbidding the "local" name in the SDN code in
another patch; as SDN is still in beta anyway, users could still rename
zones manually in the config.

If not, users will not be able to manage local bridge permissions.


> 
> > if ($have_sdn) {
> >     if (!$param->{type} || $param->{type} eq 'sdn') {
> >  
> > diff --git a/PVE/API2/Network.pm b/PVE/API2/Network.pm
> > index a43579fa..b3faba1a 100644
> > --- a/PVE/API2/Network.pm
> > +++ b/PVE/API2/Network.pm
> > @@ -209,7 +209,7 @@ __PACKAGE__->register_method({
> >     type => {
> > description => "Only list specific interface
> > types.",
> > type => 'string',
> > -   enum => [ @$network_type_enum, 'any_bridge' ],
> > +   enum => [ @$network_type_enum, 'any_bridge',
> > 'any_local_bridge' ],
> > optional => 1,
> >     },
> > },
> > @@ -254,7 +254,8 @@ __PACKAGE__->register_method({
> >  
> >     for my $k (sort keys $ifaces->%*) {
> > my $type = $ifaces->{$k}->{type};
> > -   my $match = $tfilter eq $type || ($tfilter eq
> > 'any_bridge' && ($type eq 'bridge' || $type eq 'OVSBridge'));
> > +   my $match = $tfilter eq $type || ($tfilter =~
> > /^any(_local)?_bridge$/ && ($type eq 'bridge' || $type eq
> > 'OVSBridge'));
> 
> this line is getting a bit long, maybe it could be reformatted or
> refactored?

yes sure.

> 
> > +
> 
> nit: this blank newline is introduced here and removed in the next
> patch ;)
> 
> > delete $ifaces->{$k} if !($match &&
> > $can_access_vnet->($k));
> >     }
> >  
> > diff --git a/www/manager6/sdn/ZoneContentView.js
> > b/www/manager6/sdn/ZoneContentView.js
> > index 08fa9d81..1a994fc9 100644
> > --- a/www/manager6/sdn/ZoneContentView.js
> > +++ b/www/manager6/sdn/ZoneContentView.js
> > @@ -26,6 +26,9 @@ Ext.define('PVE.sdn.ZoneContentView', {
> > }
> >  
> > var baseurl = "/nodes/" + me.nodename + "/sdn/zones/" +
> > me.zone + "/content";
> > +   if (me.zone === 'local') {
> > +   baseurl = "/nodes/" + me.nodename +
> > "/network?type=any_local_bridge";
> > +   }
> > var store = Ext.create('Ext.data.Store', {
> >     model: 'pve-sdnzone-content',
> >     groupField: 'content',
> > @@ -95,7 +98,29 @@ Ext.define('PVE.sdn.ZoneContentView', {
> >  Ext.define('pve-sdnzone-content', {
> > extend: 'Ext.data.Model',
> > fields: [
> > -   'vnet', 'status', 'statusmsg',
> > +   {
> > +   name: 'iface',
> > +   convert: function(value, record) {
> > +   //map local vmbr to vnet
> > +   if (record.data.iface) {
> > +   record.data.vnet = record.data.iface;
> > +   }
> > +   return value;
> > +   },
> > +   },
> > +   {
> > +   name: 'comments',
> > +   convert: function(value, record) {
> > +   //map local vmbr comments to vnet alias
> > +   if (record.data.comments) {
> > +   record.data.alias = record.data.comments;
> > +   }
> > +   return value;
> > +   },
> > +   },
> > +   'vnet',
> > +   'status',
> > +   'statusmsg',
> >     {
> > name: 'text',
> > convert: function(value, record) {
> > -- 
> > 2.30.2
> > 
> > 

Re: [pve-devel] [PATCH pve-manager 4/4] api2: network: check vlan permissions for local bridges

2023-06-02 Thread DERUMIER, Alexandre
On Friday, 2 June 2023 at 13:39 +0200, Fabian Grünbichler wrote:
> On May 26, 2023 9:27 am, Alexandre Derumier wrote:
> > We need to display the bridge if the user has a permission
> > on any vlan on the bridge.
> > 
> > To avoid checking permissions on 4096 vlans for each bridge
> > (which could be slow with a lot of bridges),
> > we first list the vlans where acls are defined.
> > 
> > (4000 checks took 60ms on a 10-year-old xeon server, which should be
> > fast enough for any network where the total number of vlans is limited)
> 
> some sort of spec here for how the ACL looks would be nice to have
> (while it's possible to reverse it from the code, it's always nicer to
> have the expectation explicit as well).
> 
> > 
> > Signed-off-by: Alexandre Derumier 
> > ---
> >  PVE/API2/Network.pm | 20 +++-
> >  1 file changed, 19 insertions(+), 1 deletion(-)
> > 
> > diff --git a/PVE/API2/Network.pm b/PVE/API2/Network.pm
> > index ba3b3e0e..39f17d14 100644
> > --- a/PVE/API2/Network.pm
> > +++ b/PVE/API2/Network.pm
> > @@ -240,17 +240,35 @@ __PACKAGE__->register_method({
> >  
> > if (my $tfilter = $param->{type}) {
> >     my $vnets;
> > +   my $bridges_vlans_acl = {};
> >     #check access for local bridges
> >     my $can_access_vnet = sub {
> > +   my $bridge = $_[0];
> > return 1 if $authuser eq 'root@pam';
> > return 1 if $rpcenv->check_any($authuser,
> > "/sdn/zones/local", ['SDN.Audit', 'SDN.Allocate'], 1);
> > -   return 1 if $rpcenv->check_any($authuser,
> > "/sdn/vnets/$_[0]", ['SDN.Audit', 'SDN.Allocate'], 1);
> > +   return 1 if $rpcenv->check($authuser,
> > "/sdn/vnets/$bridge", ['SDN.Audit'], 1);
> 
> why does this drop the Allocate? we usually have the "more privileged"
> privilege in addition to Audit (if there is one).

I think Allocate maybe makes sense on the zone (as what we allocate are
vnets).
(it should be removed on /sdn/vnets/$_[0]).


But I'm not sure: if a user adds a simple ACL like "/" + propagate with
SDN.Allocate only, do we want to give them access everywhere?


> 
> > +   my $bridge_vlan = $bridges_vlans_acl->{$bridge};
> > +   for my $tag (sort keys %$bridge_vlan) {
> > +   return 1 if $rpcenv->check($authuser,
> > "/sdn/vnets/$bridge.$tag", ['SDN.Audit'], 1);
> 
> wouldn't $bridge/$tag be more natural? it would allow propagation
> from a
> bridge to all tags using the usual propagate flag as well..
> 
> this could also live in pve-access-control as a special helper, then we
> wouldn't need to do this dance here (it would be the first
> iterate_acl_tree call site outside of pve-access-control!).
> 
> something like this in PVE::RPCEnvironment:
> 
> sub check_sdn_vlan(.., $bridge, $priv) {
>   .. iterate over all vlans and check while iterating, returning
> early for first one with access
> }
> 
> basically:
> 
> my $bridge = PVE::AccessControl::find_acl_tree_node($cfg->{acl_root},
> "/sdn/vnets/$bridge");
> if ($bridge) {
>   for my $vlan (keys $bridge->{children}->%$) {
>     return 1 if $self->check_any(...);
>   }
>   return 1 if # check propagate on bridge itself
> }
> return 0;
> 
> > +   }
> >     };
> >  
> >     if ($have_sdn && $param->{type} eq 'any_bridge') {
> > $vnets = PVE::Network::SDN::get_local_vnets(); #
> > returns already access-filtered
> >     }
> >  
> > +   #find all vlans where we have specific acls
> > +   if ($tfilter =~ /^any(_local)?_bridge$/) {
> > +   my $cfg = $rpcenv->{user_cfg};
> > +   my $vnets_acl_root = $cfg->{acl_root}->{children}-
> > >{sdn}->{children}->{vnets};
> > +   PVE::AccessControl::iterate_acl_tree("/",
> > $vnets_acl_root, sub {
> > +   my ($path, $node) = @_;
> > +   if ($path =~ /\/(.*)\.(\d+)$/) {
> > +   $bridges_vlans_acl->{$1}->{$2} = 1;
> > +   }
> > +   });
> > +   }
> 
> because this iterates over *all* ACLs, only to then return upon the
> first match above in $can_access_vnet..
> 
> it should also be
> 
> iterate_acl_tree("/sdn/vnets", ..) or "/sdn/vnets/$bridge" if vlan
> tags
> live as children of the bridge (path and the node should match!).
> there
> also is "find_acl_tree_node" so you don't need to make assumptions
> about
> how the nodes are laid out.
> 

Thanks ! I was really banging my head to implement this properly &&
fast ^_^


> I do wonder whether something that supports ranges would be more
> appropriate rather than encoding this in ACLs directly though.. could
> always be added later on though (e.g., named vlan objects that define a
> set of vlans).

yes, I was thinking the same, to avoid too many ACLs when you need
access to many vlans.

> 
> > +
> >     for my $k (sort keys $ifaces->%*) {
> > my $type = $ifaces->{$k}->{type};
> > my
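The pre-listing idea discussed in this thread (index once which VLAN tags actually carry ACL entries, then only probe those per bridge instead of all 4096 tags) can be sketched roughly as below; the ACL paths, bridge names, and the `has_priv` callback are hypothetical Python stand-ins for the Perl originals:

```python
import re

# Hypothetical ACL paths as they might appear under /sdn/vnets/;
# real entries come from pve-access-control's ACL tree.
acl_paths = [
    "/sdn/vnets/vmbr0.100",
    "/sdn/vnets/vmbr0.200",
    "/sdn/vnets/vmbr1.10",
]

# First pass: index which bridge/tag pairs carry ACLs at all, so we
# never have to probe all 4096 tags per bridge afterwards.
bridges_vlans_acl = {}
for path in acl_paths:
    m = re.search(r"/([^/.]+)\.(\d+)$", path)
    if m:
        bridges_vlans_acl.setdefault(m.group(1), set()).add(int(m.group(2)))

def can_access_vnet(bridge, has_priv):
    """has_priv(path) stands in for the $rpcenv->check(...) call."""
    if has_priv(f"/sdn/vnets/{bridge}"):
        return True
    # Only the tags that actually carry ACL entries get checked.
    for tag in sorted(bridges_vlans_acl.get(bridge, ())):
        if has_priv(f"/sdn/vnets/{bridge}.{tag}"):
            return True
    return False
```

This keeps the per-bridge cost proportional to the number of VLAN ACLs that exist, not to the 4096 possible tags.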

Re: [pve-devel] [PATCH-SERIES v3 qemu-server/manager/common] add and set x86-64-v2 as default model for new vms and detect best cpumodel

2023-06-02 Thread Aaron Lauterer

For the record, I tested on:

AMD Ryzen 9 7900X 12-Core Processor
AMD EPYC 7302P 16-Core Processor (Rome)
AMD EPYC 7313 16-Core Processor (Milan)

VMs were all with 2 or 4 cores, q35 and UEFI boot.

>> qm set  -args '-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+sep,+lahf_lm,+popcnt,+sse4.1,+sse4.2,+ssse3'


all VMs installed nicely without a problem

>> qm set  -args '-cpu Nehalem,enforce,+aes,-svm,-vmx,+kvm_pv_eoi,+kvm_pv_unhalt,vendor="GenuineIntel"'


VMs got stuck during the Debian installer at some point, some were able to 
switch to another TTY, but I don't think that is deterministic.


On the Ryzen 7900X machine, I even saw a crash and kernel trace once when 
starting the installer. It didn't show up again after a reset and cold boot of 
the VM :-/



On 6/1/23 18:00, Fiona Ebner wrote:

On 01.06.23 at 14:02, Eneko Lacunza wrote:

Hi,

We have Ryzen 1700, 2600X, 3700 and 5950X machines here, I can test on
them if that helps (please detail tests to perform).

Thanks



Hi,
thank you for the offer. It would be interesting to see if you have any
issues with the following:


qm set  -args '-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+sep,+lahf_lm,+popcnt,+sse4.1,+sse4.2,+ssse3'


If you like you can also test


qm set  -args '-cpu Nehalem,enforce,+aes,-svm,-vmx,+kvm_pv_eoi,+kvm_pv_unhalt,vendor="GenuineIntel"'


After testing use


qm set  --delete args


to get rid of the modification again.

Make sure to stop/start the VM fresh after each modification.

As for what to test, installing Debian 11 would be nice just for
comparison, but other than that, just do what you like, shouldn't really
matter too much :)

Best Regards,
Fiona





Re: [pve-devel] [PATCH widget-toolkit 3/3] fix #3892: NetworkEdit: add bridge vids field for bridge_vids

2023-06-02 Thread Aaron Lauterer




On 6/1/23 16:24, Thomas Lamprecht wrote:

Don't we reuse that on PBS/PMG too, and if so, is it working there?

The commit message isn't exactly telling... ;-)



on PMG we only allow creating bonds in the GUI. On PBS we allow bonds and
bridges. Though that makes me wonder what the use case for a bridge on PBS is.


What about safeguarding the vids to only be added/used if we are on PVE?
Something like `if (PVE) {`?



Am 13/04/2023 um 17:10 schrieb Aaron Lauterer:

Signed-off-by: Aaron Lauterer 
---
  src/node/NetworkEdit.js | 16 
  1 file changed, 16 insertions(+)

diff --git a/src/node/NetworkEdit.js b/src/node/NetworkEdit.js
index bb9add3..e4ab158 100644
--- a/src/node/NetworkEdit.js
+++ b/src/node/NetworkEdit.js
@@ -57,11 +57,26 @@ Ext.define('Proxmox.node.NetworkEdit', {
}
  
  	if (me.iftype === 'bridge') {

+   let vids = Ext.create('Ext.form.field.Text', {
+   fieldLabel: gettext('Bridge VIDS'),
+   name: 'bridge_vids',
+   emptyText: '2-4094',
+   disabled: true,
+   autoEl: {
+   tag: 'div',
'data-qtip': gettext('Space-separated list of VLANs and ranges, for example: 2 4 100-200'),
+   },
+   });
column2.push({
xtype: 'proxmoxcheckbox',
fieldLabel: gettext('VLAN aware'),
name: 'bridge_vlan_aware',
deleteEmpty: !me.isCreate,
+   listeners: {
+   change: function(f, newVal) {
+   vids.setDisabled(!newVal);
+   },
+   },
});
column2.push({
xtype: 'textfield',
@@ -72,6 +87,7 @@ Ext.define('Proxmox.node.NetworkEdit', {
'data-qtip': gettext('Space-separated list of interfaces, for example: enp0s0 enp1s0'),
},
});
+   advancedColumn2.push(vids);
} else if (me.iftype === 'OVSBridge') {
column2.push({
xtype: 'textfield',








[pve-devel] [PATCH cluster] pvecm: fix cluster join over ssh with newer rsync

2023-06-02 Thread Dominik Csapak
since rsync 3.2.4, the syntax for giving multiple files in one parameter
no longer works, so instead add both files explicitly

this fixes the cluster join over ssh on bookworm

Signed-off-by: Dominik Csapak 
---
 src/PVE/CLI/pvecm.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/PVE/CLI/pvecm.pm b/src/PVE/CLI/pvecm.pm
index b0b5931..d816a0c 100755
--- a/src/PVE/CLI/pvecm.pm
+++ b/src/PVE/CLI/pvecm.pm
@@ -438,7 +438,7 @@ __PACKAGE__->register_method ({
eval {
print "copy corosync auth key\n";
$cmd = ['rsync', '--rsh=ssh -l root -o BatchMode=yes', '-lpgoq',
-   "[$host]:$authfile $clusterconf", $tmpdir];
+   "[$host]:$authfile", "[$host]:$clusterconf", $tmpdir];
 
system(@$cmd) == 0 || die "can't rsync data from host 
'$host'\n";
 
-- 
2.30.2
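For illustration, the difference in the resulting argument vector looks roughly like this; the host, destination directory, and file paths below are hypothetical stand-ins following the `$authfile`/`$clusterconf` naming from the patch:

```python
host = "10.0.0.1"                      # hypothetical node address
authfile = "/etc/corosync/authkey"     # hypothetical stand-in path
clusterconf = "/etc/pve/corosync.conf" # hypothetical stand-in path
dst = "/tmp/join"

# rsync < 3.2.4: two remote paths packed into a single argument,
# split on whitespace by rsync itself
old_cmd = ["rsync", "--rsh=ssh -l root -o BatchMode=yes", "-lpgoq",
           f"[{host}]:{authfile} {clusterconf}", dst]

# rsync >= 3.2.4: each remote file passed as its own argument
new_cmd = ["rsync", "--rsh=ssh -l root -o BatchMode=yes", "-lpgoq",
           f"[{host}]:{authfile}", f"[{host}]:{clusterconf}", dst]
```

Newer rsync stopped word-splitting the remote argument, so the single packed argument is treated as one (nonexistent) filename, which is why each file now needs its own `[host]:path` argument.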






Re: [pve-devel] [PATCH-SERIES v3 qemu-server/manager/common] add and set x86-64-v2 as default model for new vms and detect best cpumodel

2023-06-02 Thread DERUMIER, Alexandre
Thanks Aaron.

I'll try to reproduce on my side on EPYC Milan. 
AMD EPYC 7543 32-Core Processor microcode patch_level=0x0a0011ce
(I just tried again with q35, but I'm still not able to reproduce)


can you share your full vmid.conf ?  (numa ?  vga ?..).

do you use the debian graphical install or the console install ?

host kernel version ? (I'm still testing on 5.15 / qemu 7.4)

my last patch version uses qemu64 as base without specifying the vendor,
so I think it should be OK, but I would like to find out what happens
exactly :)




On Friday, 2 June 2023 at 14:41 +0200, Aaron Lauterer wrote:
> For the record, I tested on:
> 
> AMD Ryzen 9 7900X 12-Core Processor
> AMD EPYC 7302P 16-Core Processor (Rome)
> AMD EPYC 7313 16-Core Processor (Milan)
> 
> VMs were all with 2 or 4 cores, q35 and UEFI boot.
> 
>  >> qm set  -args '-cpu 
> kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+sep,+lahf_lm,+popcnt,+sse4.
> 1,+sse4.2,+ssse3'
> 
> all VMs installed nicely without a problem
> 
>  >> qm set  -args '-cpu 
> Nehalem,enforce,+aes,-svm,-
> vmx,+kvm_pv_eoi,+kvm_pv_unhalt,vendor="GenuineIntel"'
> 
> VMs got stuck during the Debian installer at some point, some were
> able to 
> switch to another TTY, but I don't think that is deterministic.
> 
> On the Ryzen 7900X machine, I even saw a crash and kernel trace once
> when 
> starting the installer. It didn't show up again after a reset and
> cold boot of 
> the VM :-/
> 
> 
> On 6/1/23 18:00, Fiona Ebner wrote:
> > On 01.06.23 at 14:02, Eneko Lacunza wrote:
> > > Hi,
> > > 
> > > We have Ryzen 1700, 2600X, 3700 and 5950X machines here, I can
> > > test on
> > > them if that helps (please detail tests to perform).
> > > 
> > > Thanks
> > > 
> > 
> > Hi,
> > thank you for the offer. It would be interesting to see if you have
> > any
> > issues with the following:
> > 
> > > qm set  -args '-cpu
> > > kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+sep,+lahf_lm,+popcnt,+s
> > > se4.1,+sse4.2,+ssse3'
> > 
> > If you like you can also test
> > 
> > > qm set  -args '-cpu Nehalem,enforce,+aes,-svm,-
> > > vmx,+kvm_pv_eoi,+kvm_pv_unhalt,vendor="GenuineIntel"'
> > 
> > After testing use
> > 
> > > qm set  --delete args
> > 
> > to get rid of the modification again.
> > 
> > Make sure to stop/start the VM fresh after each modification.
> > 
> > As for what to test, installing Debian 11 would be nice just for
> > comparison, but other than that, just do what you like, shouldn't
> > really
> > matter too much :)
> > 
> > Best Regards,
> > Fiona
> > 
> > 
> 



Re: [pve-devel] [PATCH pve-manager 2/4] add permissions management for "local" network zone

2023-06-02 Thread DERUMIER, Alexandre
On Friday, 2 June 2023 at 12:21 +, DERUMIER, Alexandre wrote:
> > should this also be guarded by the type check like below? is there
> > anything that ensures that a 'local' zone doesn't already exist as a
> > regular SDN-managed zone?
> 
> I was thinking more of forbidding the "local" name in the SDN code in
> another patch; as SDN is still in beta anyway, users could still
> rename zones manually in the config.
> 
> if not, users will not be able to manage local bridge permissions.

Thinking about this, the SDN zone names are limited to 8 characters,

so I could simply use "localnetwork" or "localbridges" as the name
(no real zone could ever carry a name that long).




[pve-devel] [PATCH] pve7to8: ceph version check: ignore commit hash

2023-06-02 Thread Aaron Lauterer
The commit hash of the Ceph version might be different between major
releases. For example:
ceph version 17.2.6 (810db68029296377607028a6c6da1ec06f5a2b27) quincy (stable)
ceph version 17.2.6 (995dec2cdae920da21db2d455e55efbc339bde24) quincy (stable)

This can lead to unnecessary warnings of multiple detected versions.
Therefore, extract the version, e.g. 'ceph version 17.2.6', and the
commit hash. Use the simplified version for the version checks and show
an info line when the commit is different instead of a warning.
If the commit hashes are the only difference, we are likely in the
middle of the upgrade.

Signed-off-by: Aaron Lauterer 
---
 PVE/CLI/pve7to8.pm | 25 +++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/PVE/CLI/pve7to8.pm b/PVE/CLI/pve7to8.pm
index 06f4e2bb..cc2bd643 100644
--- a/PVE/CLI/pve7to8.pm
+++ b/PVE/CLI/pve7to8.pm
@@ -500,9 +500,24 @@ sub check_ceph {
{ 'key' => 'osd', 'name' => 'OSD' },
];
 
+   my $ceph_versions_simple = {};
+   my $ceph_versions_commits = {};
+   for my $type (keys %$ceph_versions) {
+   for my $full_version (keys $ceph_versions->{$type}->%*) {
+   if ($full_version =~ m/^(.*) \((.*)\).*\(.*\)$/) {
+   # String is in the form of
+   # ceph version 17.2.6 (810db68029296377607028a6c6da1ec06f5a2b27) quincy (stable)
+   # only check the first part, e.g. 'ceph version 17.2.6', the commit hash can
+   # be different
+   $ceph_versions_simple->{$type}->{$1} = 1;
+   $ceph_versions_commits->{$type}->{$2} = 1;
+   }
+   }
+   }
+
foreach my $service (@$services) {
my ($name, $key) = $service->@{'name', 'key'};
-   if (my $service_versions = $ceph_versions->{$key}) {
+   if (my $service_versions = $ceph_versions_simple->{$key}) {
if (keys %$service_versions == 0) {
log_skip("no running instances detected for daemon type $name.");
} elsif (keys %$service_versions == 1) {
@@ -513,6 +528,12 @@ sub check_ceph {
} else {
log_skip("unable to determine versions of running Ceph $name instances.");
}
+   if (my $service_commits = $ceph_versions_commits->{$key}) {
+   if (keys %$service_commits > 1) {
+   log_info("multiple version commits detected for daemon type $name. ".
+   "Are you in the middle of the upgrade?");
+   }
+   }
}
 
my $overall_versions = $ceph_versions->{overall};
@@ -521,7 +542,7 @@ sub check_ceph {
} elsif (keys %$overall_versions == 1) {
log_pass("single running overall version detected for all Ceph daemon types.");
$noout_wanted = 0; # off post-upgrade, on pre-upgrade
-   } else {
+   } elsif (keys $ceph_versions_simple->{overall}->%* != 1) {
log_warn("overall version mismatch detected, check 'ceph versions' output for details!");
}
 }
-- 
2.30.2
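The version/commit split done by the new regex can be exercised in isolation; the pattern is the one from the patch and the sample line is taken verbatim from the commit message:

```python
import re

line = "ceph version 17.2.6 (810db68029296377607028a6c6da1ec06f5a2b27) quincy (stable)"

# Same pattern as in the patch: the first group captures the plain
# version string, the second group captures the commit hash.
m = re.match(r"^(.*) \((.*)\).*\(.*\)$", line)
version, commit = m.group(1), m.group(2)
```

Backtracking makes the greedy groups settle on the first parenthesized field for the hash, leaving the trailing `quincy (stable)` to the unanchored tail of the pattern.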






Re: [pve-devel] [PATCH-SERIES v3 qemu-server/manager/common] add and set x86-64-v2 as default model for new vms and detect best cpumodel

2023-06-02 Thread Aaron Lauterer



On 6/2/23 16:15, DERUMIER, Alexandre wrote:

Thanks Aaron.

I'll try to reproduce on my side on EPYC Milan.
AMD EPYC 7543 32-Core Processor microcode patch_level=0x0a0011ce
(I just tried again with q35, but I'm still not able to reproduce)


can you share your full vmid.conf ?  (numa ?  vga ?..).



On the Epyc Systems:

args: -cpu Nehalem,enforce,+aes,-svm,-vmx,+kvm_pv_eoi,+kvm_pv_unhalt,vendor="GenuineIntel"

bios: ovmf
boot: order=scsi0;ide2;net0
cores: 2
cpu: host,flags=+aes
efidisk0: local-storage:vm-104-disk-0,efitype=4m,size=1M
ide2: iso-images:iso/debian-11.7.0-amd64-netinst.iso,media=cdrom,size=389M
machine: q35
memory: 2048
meta: creation-qemu=7.2.0,ctime=1685631249
name: tmp-debtest
net0: virtio=26:A2:08:6B:00:8F,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-storage:vm-104-disk-1,discard=on,iothread=1,size=8G
scsihw: virtio-scsi-single
smbios1: uuid=ef95de37-c93f-4987-820d-4f969913b03d
sockets: 1
vmgenid: cb42b08c-9448-4821-b1e6-d018150de807


On the Ryzen system:

args: -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+sep,+lahf_lm,+popcnt,+sse4.1,+sse4.2,+ssse3

bios: ovmf
boot: order=scsi0;ide2;net0
cores: 4
cpu: kvm64
efidisk0: local-zfs:vm-115-disk-1,efitype=4m,size=1M
ide2: iso:iso/debian-11.6.0-amd64-netinst.iso,media=cdrom,size=388M
machine: q35
memory: 2048
meta: creation-qemu=7.2.0,ctime=1685537448
name: debtest
net0: virtio=AA:AA:7A:3A:F5:94,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-zfs:vm-115-disk-0,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=05785fd0-69b7-4df7-a96e-962e030cdbca
sockets: 1
vmgenid: 869bd693-3560-4cd1-a5be-f61ec158acef


The CPU args are different, depending on what I tested last tested.


do you use debian graphical install or console install ?


the cli installer



host kernel version ? (I'm still testing on 5.15 / qemu 7.4)


On the Epyc Systems:
$ pveversion -v
proxmox-ve: 7.4-1 (running kernel: 6.2.6-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-6.2: 7.3-8
pve-kernel-5.15: 7.3-3
pve-kernel-6.2.6-1-pve: 6.2.6-1
pve-kernel-5.15.102-1-pve: 5.15.102-1
pve-kernel-5.15.5-1-pve: 5.15.5-1
(…)
qemu-server: 7.4-3

On the Ryzen System:
$ pveversion -v
proxmox-ve: 7.4-1 (running kernel: 6.2.11-2-pve)
pve-manager: 7.4-4 (running version: 7.4-4/4a8501a8)
pve-kernel-6.2: 7.4-3
pve-kernel-5.15: 7.4-3
pve-kernel-6.2.11-2-pve: 6.2.11-2
pve-kernel-6.2.11-1-pve: 6.2.11-1
pve-kernel-5.15.107-2-pve: 5.15.107-2
(…)
qemu-server: 7.4-3



my last patch version uses qemu64 as base without specifying the vendor,
so I think it should be OK, but I would like to find out what happens
exactly :)








Re: [pve-devel] [PATCH-SERIES v3 qemu-server/manager/common] add and set x86-64-v2 as default model for new vms and detect best cpumodel

2023-06-02 Thread DERUMIER, Alexandre
Thanks!

I'll try to upgrade to the 6.2 kernel, as this is the only difference

On Friday, 2 June 2023 at 18:09 +0200, Aaron Lauterer wrote:
> 
> 
> On 6/2/23 16:15, DERUMIER, Alexandre wrote:
> > Thanks Aaron.
> > 
> > I'll try to reproduce on my side on EPYC Milan.
> > AMD EPYC 7543 32-Core Processor microcode patch_level=0x0a0011ce
> > (I just tried again with q35, but I'm still not able to reproduce)
> > 
> > 
> > can you share your full vmid.conf ?  (numa ?  vga ?..).
> 
> 
> On the Epyc Systems:
> 
> args: -cpu 
> Nehalem,enforce,+aes,-svm,-
> vmx,+kvm_pv_eoi,+kvm_pv_unhalt,vendor="GenuineIntel"
> bios: ovmf
> boot: order=scsi0;ide2;net0
> cores: 2
> cpu: host,flags=+aes
> efidisk0: local-storage:vm-104-disk-0,efitype=4m,size=1M
> ide2: iso-images:iso/debian-11.7.0-amd64-
> netinst.iso,media=cdrom,size=389M
> machine: q35
> memory: 2048
> meta: creation-qemu=7.2.0,ctime=1685631249
> name: tmp-debtest
> net0: virtio=26:A2:08:6B:00:8F,bridge=vmbr0,firewall=1
> numa: 0
> ostype: l26
> scsi0: local-storage:vm-104-disk-1,discard=on,iothread=1,size=8G
> scsihw: virtio-scsi-single
> smbios1: uuid=ef95de37-c93f-4987-820d-4f969913b03d
> sockets: 1
> vmgenid: cb42b08c-9448-4821-b1e6-d018150de807
> 
> 
> On the Ryzen system:
> 
> args: -cpu 
> kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+sep,+lahf_lm,+popcnt,+sse4.
> 1,+sse4.2,+ssse3
> bios: ovmf
> boot: order=scsi0;ide2;net0
> cores: 4
> cpu: kvm64
> efidisk0: local-zfs:vm-115-disk-1,efitype=4m,size=1M
> ide2: iso:iso/debian-11.6.0-amd64-netinst.iso,media=cdrom,size=388M
> machine: q35
> memory: 2048
> meta: creation-qemu=7.2.0,ctime=1685537448
> name: debtest
> net0: virtio=AA:AA:7A:3A:F5:94,bridge=vmbr0,firewall=1
> numa: 0
> ostype: l26
> scsi0: local-zfs:vm-115-disk-0,iothread=1,size=32G
> scsihw: virtio-scsi-single
> smbios1: uuid=05785fd0-69b7-4df7-a96e-962e030cdbca
> sockets: 1
> vmgenid: 869bd693-3560-4cd1-a5be-f61ec158acef
> 
> 
> The CPU args are different, depending on what I tested last tested.
> > 
> > do you use debian graphical install or console install ?
> 
> the cli installer
> 
> > 
> > host kernel version ? (I'm still testing on 5.15 / qemu 7.4)
> 
> On the Epyc Systems:
> $ pveversion -v
> proxmox-ve: 7.4-1 (running kernel: 6.2.6-1-pve)
> pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
> pve-kernel-6.2: 7.3-8
> pve-kernel-5.15: 7.3-3
> pve-kernel-6.2.6-1-pve: 6.2.6-1
> pve-kernel-5.15.102-1-pve: 5.15.102-1
> pve-kernel-5.15.5-1-pve: 5.15.5-1
> (…)
> qemu-server: 7.4-3
> 
> On the Ryzen System:
> $ pveversion -v
> proxmox-ve: 7.4-1 (running kernel: 6.2.11-2-pve)
> pve-manager: 7.4-4 (running version: 7.4-4/4a8501a8)
> pve-kernel-6.2: 7.4-3
> pve-kernel-5.15: 7.4-3
> pve-kernel-6.2.11-2-pve: 6.2.11-2
> pve-kernel-6.2.11-1-pve: 6.2.11-1
> pve-kernel-5.15.107-2-pve: 5.15.107-2
> (…)
> qemu-server: 7.4-3
> 
> > 
> > my last patch version uses qemu64 as base without specifying the
> > vendor,
> > so I think it should be OK, but I would like to find out what
> > happens exactly
> > :)
> > 
> > 
> > 
> 



[pve-devel] little PATCH proxmox-i18n for ja.po (for 3.0.0)

2023-06-02 Thread ribbon
little Japanese translation update for 3.0.0

--- ja.po   2023-06-03 11:10:29.097009127 +0900
+++ jan.po  2023-06-03 11:10:08.708856465 +0900
@@ -2104,9 +2104,8 @@
 msgstr "データDevs"
 
 #: pve-manager/www/manager6/ceph/FS.js:159
-#, fuzzy
 msgid "Data Pool"
-msgstr "メディア Pool"
+msgstr "データPool"
 
 #: pmg-gui/js/ClamAVDatabase.js:10
 msgid "Database Mirror"
@@ -3399,9 +3398,8 @@
 msgstr "Extra ID"
 
 #: pmg-gui/js/SpamDetectorOptions.js:22
-#, fuzzy
 msgid "Extract Text from Attachments"
-msgstr "添付がありません"
+msgstr "添付からテキストを抽出"
 
 #: proxmox-backup/www/window/RemoteEdit.js:50
 msgid "FQDN or IP-address"
@@ -5305,7 +5303,7 @@
 #: pve-manager/www/manager6/ceph/FS.js:164
 #, fuzzy
 msgid "Metadata Pool"
-msgstr "メディア Pool"
+msgstr "メタデータPool"
 
 #: pve-manager/www/manager6/ceph/FS.js:175
 msgid "Metadata Servers"
@@ -10038,9 +10036,8 @@
 
 #: proxmox-backup/www/config/SyncView.js:245
 #: proxmox-backup/www/window/SyncJobEdit.js:236
-#, fuzzy
 msgid "Transfer Last"
-msgstr "転送"
+msgstr "最後の転送"
 
 #: proxmox-backup/www/datastore/Summary.js:258
 msgid "Transfer Rate (bytes/second)"
@@ -11388,7 +11385,6 @@
 msgstr "ゾーン"
 
 #: proxmox-backup/www/window/SyncJobEdit.js:239
-#, fuzzy
 msgid "all"
 msgstr "全部"
 
@@ -11689,14 +11685,12 @@
 msgstr "unsafe"
 
 #: pve-manager/www/manager6/ceph/OSD.js:76
-#, fuzzy
 msgid "use OSD disk"
-msgstr "ディスクにサスペンド"
+msgstr "OSDディスクを使用"
 
 #: pve-manager/www/manager6/ceph/OSD.js:130
-#, fuzzy
 msgid "use OSD/DB disk"
-msgstr "ディスクにサスペンド"
+msgstr "OSD/DB ディスクを使用"
 
 #: pve-manager/www/manager6/lxc/DNS.js:38
 #: pve-manager/www/manager6/lxc/DNS.js:46
