Hi Pedro,

No problem, I’ve been really busy recently too 😊. I definitely recommend 
setting up filtering rules for the edk2 mailing list(s) because there is a LOT 
of volume. Personally, I set up my filter rules so that PATCH emails go into a 
separate folder, which makes the list much more readable.

For the Raspberry Pi… yeah, it is more of a refactoring activity. Some of the 
other board ports are more interesting. We already went over the x86 board 
porting, which I think would be an interesting and unique project; the problem 
is finding a suitable board… and most x86 boards are expensive. The Qemu 
OpenBoard port also has a bit more meat to it, I think: since it involves 
figuring out how to merge ArmVirtPkg and OvmfPkg in a way that makes sense, it 
probably involves building some new advanced features. I think it is possible 
to spice up the Raspberry Pi project a bit, though. Maybe add support for the 
Raspberry Pi Zero? Right now we only support the Raspberry Pi 3 & 4.

The ext2/4 disk driver would certainly also be interesting! My understanding is 
that ext4 is pretty much the same filesystem as ext2; they just added some 
features on top of ext2. I took a quick peek at your boilerplate and it looks 
like a good start. The one thing you will have to be VERY careful about is 
keeping TianoCore GPL-free: you must not read a single line of Linux kernel 
source code. You can read the kernel’s ext2/3/4 documentation, but do NOT read 
the kernel’s source code. I think that should be doable, since FreeBSD 12 
(circa 2018) now has full read/write support for ext4 in its ext2 driver, and 
FreeBSD has had a GPL-free ext2 driver for quite some time (FreeBSD 9, circa 
2012). You are welcome to read (and even use) the FreeBSD source code (as long 
as it is newer than 2012, of course). In your proposal, I would recommend you 
set an achievable target. Maybe promise read-only as a baseline success 
criterion and make write support a stretch goal. I strongly prefer that 
everyone ends up with a successful project.
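
Just to make the read-only-first scoping concrete, here is a rough sketch of 
the kind of superblock check your driver’s mount path could do. To be clear: 
this is my own illustration, based purely on the public ext2/ext4 on-disk 
format documentation (not your repo, not any kernel code), and the names 
(Ext2CheckSuperblock, SupportedIncompat, etc.) are hypothetical.

#include <Uefi.h>
#include <Protocol/DiskIo.h>

//
// On-disk constants from the public ext2/ext4 layout documentation.
//
#define EXT2_SUPERBLOCK_OFFSET  1024
#define EXT2_MAGIC              0xEF53

#pragma pack(1)
typedef struct {
  UINT32  InodesCount;     // offset 0
  UINT32  BlocksCountLo;   // offset 4
  UINT8   Reserved1[48];   // fields this check does not need
  UINT16  Magic;           // offset 56: must be 0xEF53
  UINT8   Reserved2[34];
  UINT32  FeatureCompat;   // offset 92
  UINT32  FeatureIncompat; // offset 96: must be understood even for read-only
  UINT32  FeatureRoCompat; // offset 100: may be ignored if mounting read-only
} EXT2_SUPERBLOCK_HEAD;
#pragma pack()

EFI_STATUS
Ext2CheckSuperblock (
  IN EFI_DISK_IO_PROTOCOL  *DiskIo,
  IN UINT32                MediaId,
  IN UINT32                SupportedIncompat  // INCOMPAT bits we implement
  )
{
  EXT2_SUPERBLOCK_HEAD  Sb;
  EFI_STATUS            Status;

  Status = DiskIo->ReadDisk (DiskIo, MediaId, EXT2_SUPERBLOCK_OFFSET,
                             sizeof (Sb), &Sb);
  if (EFI_ERROR (Status)) {
    return Status;
  }

  if (Sb.Magic != EXT2_MAGIC) {
    return EFI_UNSUPPORTED;  // not an ext2/3/4 volume
  }

  //
  // Any INCOMPAT bit we do not implement means we must refuse the volume
  // entirely; unknown RO_COMPAT bits are safe to ignore as long as we
  // never write. That asymmetry is what makes read-only a natural
  // baseline and write support a stretch goal.
  //
  if ((Sb.FeatureIncompat & ~SupportedIncompat) != 0) {
    return EFI_UNSUPPORTED;
  }

  return EFI_SUCCESS;
}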

I do think this project would be useful. If full write support is achieved 
GPL-free, this driver could be integrated into the firmware distributed in OEM 
motherboards. This would make it really easy for us to boot Linux kernels 
directly from the UEFI boot manager, eliminating the need for GRUB and a 
separate EFI system partition for loading Linux.

Our disk I/O subsystem does not need to reach the performance level of an 
operating system. We usually only need to load a few MBs from disk before we 
hand off to the OS kernel. In general, UEFI is designed to favor simplicity 
over performance. The biggest bottleneck is the fact that UEFI uses exclusively 
polling I/O and does not have any interrupt handlers except for the timer 
interrupt. My guess is that a disk cache is probably overkill, because the 
filesystem driver is unlikely to be the bottleneck.
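
To illustrate what I mean by polling I/O: every read a filesystem driver issues 
through EFI_BLOCK_IO_PROTOCOL is a synchronous call that does not return until 
the transfer is done; underneath, the bus driver spins on the controller’s 
completion status instead of taking an interrupt. A simplified sketch 
(ReadSectorsPolled is a hypothetical wrapper, not actual FatPkg code):

#include <Uefi.h>
#include <Protocol/BlockIo.h>

EFI_STATUS
ReadSectorsPolled (
  IN  EFI_BLOCK_IO_PROTOCOL  *BlockIo,
  IN  EFI_LBA                Lba,
  IN  UINTN                  BufferSize,  // must be a multiple of the block size
  OUT VOID                   *Buffer
  )
{
  //
  // Blocking call: the CPU busy-waits inside the driver stack until the
  // data has landed in Buffer. There is no completion callback, so the
  // filesystem driver above cannot overlap I/O with other work.
  //
  return BlockIo->ReadBlocks (BlockIo, BlockIo->Media->MediaId,
                              Lba, BufferSize, Buffer);
}

(The UEFI spec does define EFI_BLOCK_IO2_PROTOCOL with token-based completion, 
but the common boot path is the synchronous one above.)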

For comparison, currently our only filesystem driver is the FAT32 driver, and 
it is definitely fast enough for our purposes. On an Intel Tiger Lake platform 
with the EFI System Partition stored on an NVMe SSD, it takes 60 ms to 
initialize the NVMe driver, enumerate the partition table, and mount the FAT32 
filesystem. Of that 60 ms, only 1.4 ms is directly consumed by the FAT32 
driver; the rest is initialization of the lower-level drivers. After 
initialization is complete, it takes the FAT32 driver 3.5 ms to load 
bootmgfw.efi from disk into memory. To put that into perspective, after 
bootmgfw.efi is loaded into memory it takes the DXE core 13.5 ms to verify that 
the signature on the file is valid and was issued by a code-signing certificate 
in the UEFI Secure Boot trusted certificate list, and to process the PE/COFF 
relocation fixups needed to execute the file.
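
If you want to reproduce numbers like these for your own driver, EDK II already 
ships the PerformanceLib macros for timestamping regions of code, and the 
records can be dumped from the UEFI Shell with the DP command. A minimal sketch 
(EXT2_VOLUME and Ext2MountVolume are hypothetical names):

#include <Library/PerformanceLib.h>

EFI_STATUS
Ext2MountWithTiming (
  IN EXT2_VOLUME  *Volume  // hypothetical volume context
  )
{
  EFI_STATUS  Status;

  PERF_START (NULL, "Ext2Mount", NULL, 0);  // begin a measured region
  Status = Ext2MountVolume (Volume);        // hypothetical mount routine
  PERF_END (NULL, "Ext2Mount", NULL, 0);    // end it; DP shows the delta

  return Status;
}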

Thinking about performance, bootmgfw.efi (the Windows boot manager) is a little 
over 1.5 MB, so that works out to a disk I/O subsystem throughput of roughly 
450 MB/s (about 1.57 MB / 3.5 ms). While this is not the ~1000 MB/s you would 
expect from a high-performance NVMe driver, it is still not bad considering it 
is being done using polling I/O, and the EDK II NVMe driver doesn’t build the 
largest possible scatter-gather list that the NVMe controller can handle.

If you look at all of this at a high level, we ended up spending 64 ms on disk 
I/O, while the entire Tiger Lake UEFI firmware takes 1.74 s to run. That means 
only ~4% of the system boot time was spent on disk I/O, so right now the 
filesystem drivers are a small drop in the bucket of the overall boot time. As 
long as your driver reads and writes at the same speed as FatPkg or better, 
there would be no concerns.

Hope that helps,
Nate

From: Pedro Falcato <pedro.falc...@gmail.com>
Sent: Tuesday, March 23, 2021 7:17 AM
To: Desimone, Nathaniel L <nathaniel.l.desim...@intel.com>; devel@edk2.groups.io
Subject: Re: [edk2-devel] GSoC 2021 (MinPlatform, Ext2, ACPICA, etc)

Hi Nate!

Sorry for taking so long to get back to you, I've been a bit busy and I'm not 
too used to so many emails in my inbox :)

So, if I'm getting this right, essentially what is proposed in the MinPlatform 
board ports is to refactor the existing board code into an OpenBoardPkg that 
uses MinPlatform to reuse more generic code? I was thinking about getting a 
Raspberry Pi and doing the MinPlatform port for that, although honestly I'm not 
too inclined for that option anymore.

Honestly, I'm looking more towards the ext2/4 drivers now. I've been poking a 
little at the build system and how the driver model is supposed to work and I 
think I more or less got the idea. Here's the link to my ext2 test repo, if 
you're curious: https://github.com/heatd/edk2-ext. Note that it doesn't quite 
do anything right now; it's missing all sorts of features and testing, and 
what's there is mostly driver model and build system boilerplate that was 
pieced together by looking at the UEFI spec and the FatPkg code, plus some of 
my own ext2 headers.

With regards to ext4, yes, it sounds like the better option at the moment, 
although I'm not terribly familiar with it. I do have some questions though:

  1.  What are the standards for filesystem driver performance? Is a page/disk 
cache a necessity for the driver? I would assume the FS driver has some 
substantial footprint in the overall boot time.
  2.  Is the read-only behaviour still the target?

Thanks,
Pedro Falcato

