I have no idea what kind of hardware you have access to, so I asked Gemini.

What about a proxy server? 
The good news is that for the purpose of acting as a "middle-man" for a simple 
download, most of these solutions are very lightweight and can run on minimal 
hardware. You're not trying to serve millions of users or cache terabytes of 
data.
Here's a breakdown of minimal hardware for each, assuming you're running a 
lightweight Linux distribution (like Debian Minimal, Alpine Linux, or a minimal 
Ubuntu Server install) on the intermediary machine:
General Minimal Linux Server Baseline (for all options):
 * CPU: Single-core, 1 GHz or faster (e.g., an older Atom, Celeron, or even a 
Raspberry Pi Zero/1).
 * RAM: 512 MB to 1 GB. While some official docs suggest more, a stripped-down 
Linux install plus the proxy software can function with 512MB for simple tasks. 
1GB provides more comfort.
 * Storage: 4-8 GB of disk space (SSD is preferred for speed and longevity, but 
an HDD works for minimal installs). This is enough for the OS and the proxy 
software, plus some temporary download space.
 * Network: A working Ethernet port or Wi-Fi adapter.
1. HTTP/HTTPS Proxy Servers
 * Squid:
   * CPU: Single-core, 1 GHz or faster.
   * RAM: 512 MB to 1 GB is often sufficient for basic forwarding. If you 
enable significant caching, you'd want more RAM (e.g., 32 MB per GB of disk 
cache). For your download scenario, caching won't be heavily used.
   * Storage: 10 GB recommended if you enable caching, but for simple 
forwarding without much caching, 4-8 GB for the OS and Squid itself is fine.
   * Notes: Squid is powerful but can be resource-intensive if configured for 
high-volume caching. For a simple proxy, it's surprisingly light.
 * Nginx (as a Reverse Proxy):
   * CPU: Single-core, 1 GHz or faster.
   * RAM: 512 MB to 1 GB. Nginx is known for its efficiency and low memory 
footprint, especially when not serving a massive number of concurrent 
connections or complex configurations.
   * Storage: 4-8 GB.
   * Notes: Excellent choice for performance and flexibility.
 * TinyProxy:
   * CPU: Very low CPU usage, scales linearly with connections. Any basic 
single-core CPU is more than enough.
   * RAM: Extremely low. Around 2 MB footprint with glibc. 256 MB or even less 
for the system + TinyProxy could potentially work, but 512 MB is a safer 
minimum for the OS.
   * Storage: Very minimal, 4-8 GB is plenty for the OS and the TinyProxy 
binaries.
   * Notes: Designed for minimal resources, ideal if you want the absolute 
lightest proxy.
 * HAProxy:
   * CPU: 1 CPU core for light workloads.
   * RAM: 2 GB is often cited as a minimum for HAProxy Enterprise, but
standard HAProxy on a minimal Linux install, acting as a simple
single-download proxy, could get by with 512 MB to 1 GB for the OS and
HAProxy combined.
   * Storage: Minimal disk for OS and logs, 4-8 GB.
   * Notes: Typically used for high-performance load balancing; might be 
overkill for a simple one-off download proxy but certainly capable.
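Each of the four can be configured for this job in a handful of lines. The
sketches below are illustrative minimums, not tested configs; the LAN range
192.168.1.0/24, the ports, and the upstream host downloads.example.com are
placeholder assumptions to adjust for your network.

Squid as a plain forward proxy with caching disabled:

```
# /etc/squid/squid.conf
http_port 3128
acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all
cache deny all
```

Nginx relaying plain-HTTP requests from the old machine to an HTTPS upstream:

```
# /etc/nginx/conf.d/bridge.conf
server {
    listen 8080;
    location / {
        proxy_pass https://downloads.example.com;
        proxy_set_header Host downloads.example.com;
        proxy_ssl_server_name on;
    }
}
```

TinyProxy, which needs little more than a port and an allow rule:

```
# /etc/tinyproxy/tinyproxy.conf
Port 8888
Allow 192.168.1.0/24
MaxClients 5
```

HAProxy speaking TLS toward the backend so the old client only needs plain
HTTP:

```
# /etc/haproxy/haproxy.cfg
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend plain_http
    bind *:8080
    default_backend upstream

backend upstream
    server origin downloads.example.com:443 ssl verify none
```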
2. Wget/Curl on an Intermediary Server:
 * CPU: Any basic single-core CPU (e.g., an ARM-based single-board computer 
like a Raspberry Pi Zero/1/2/3/4, or an old desktop/laptop CPU).
 * RAM: 256 MB (for very minimal Linux + basic commands) to 512 MB.
 * Storage: 4-8 GB (enough for the OS and the downloaded file temporarily).
 * Notes: This is the most "manual" approach, but requires the least dedicated 
resources as it's just running command-line tools. You could even use an 
existing desktop/laptop if you have one lying around.
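The intermediary's job here is essentially two commands. A sketch (the HTTPS
URL is hypothetical; the runnable lines at the end repeat the same curl
invocation shape against a local file:// URL purely so the example works
offline):

```shell
# On the intermediary, fetch the file; both tools can resume a partial
# download (-C - for curl, -c for wget). Hypothetical URL and paths:
#   curl -L -C - -o /srv/share/file.zip https://example.com/file.zip
#   wget -c -P /srv/share https://example.com/file.zip
# Then hand the file to the old machine by any means it supports (plain
# HTTP, FTP, or sneakernet).

# Offline demonstration of the same curl invocation shape:
echo "demo" > /tmp/src.txt
curl -s -L -o /tmp/dst.txt file:///tmp/src.txt
```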
3. Cloud Storage with HTTP Access:
 * Hardware: You don't need any dedicated hardware for this! You use your 
existing computer to upload to the cloud, and your old system just needs basic 
network access to the internet to download from the cloud's HTTP link. This is 
perhaps the "minimal hardware" solution in terms of what you need to set up.
4. Self-Hosted "Download Bridge" Script (e.g., Python Flask):
 * CPU: Any single-core CPU capable of running Linux and Python (e.g., a 
Raspberry Pi Zero W, an old PC).
 * RAM: 512 MB to 1 GB. Python applications, especially Flask, are generally 
lightweight for simple tasks.
 * Storage: 4-8 GB.
 * Notes: This is flexible and can be very resource-efficient for simple 
requests. The Flask app itself uses very little memory when idle.
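A minimal sketch of such a bridge: it listens on plain HTTP, fetches the
requested path from a (normally HTTPS) upstream, and relays the bytes. The
suggestion above names Flask; this version uses only the Python standard
library so it runs with no extra packages, but the structure (one GET route
that proxies the upstream URL) is the same. UPSTREAM is a hypothetical
placeholder.

```python
# Stdlib "download bridge" sketch. UPSTREAM is a hypothetical placeholder;
# point it at the real site you need files from.
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

UPSTREAM = "https://example.com"  # hypothetical upstream site

def make_handler(upstream):
    class Bridge(BaseHTTPRequestHandler):
        def do_GET(self):
            # Fetch the requested path from the upstream and relay it.
            try:
                with urllib.request.urlopen(upstream + self.path) as resp:
                    body = resp.read()
            except Exception as exc:
                self.send_error(502, str(exc))
                return
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # keep the console quiet
            pass

    return Bridge

def serve(upstream=UPSTREAM, port=8080):
    """Blocks forever, serving plain HTTP on the given port."""
    ThreadingHTTPServer(("", port), make_handler(upstream)).serve_forever()
```

Run `serve()` on the modern machine; the FreeDOS box then fetches
`http://<bridge-ip>:8080/<path>` over plain HTTP with whatever client it has.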
In summary, for most of these solutions, especially for minimal usage (a few 
downloads, not serving heavy traffic), you could typically get away with:
 * A Raspberry Pi 1, 2, 3, or even Zero W.
 * An old, low-power desktop PC or laptop.
 * A virtual machine (VM) with minimal allocated resources (e.g., 1 vCPU,
512 MB to 1 GB RAM).
The key is to use a lightweight operating system without a graphical user 
interface (like a server-focused Linux distribution) to keep the base system's 
resource consumption low.


Gemini said: 
You've hit on a common dilemma for retro-computing enthusiasts and users of 
FreeDOS! You're right, the challenge usually stems from a combination of 
"purism" (wanting to keep everything native to the old hardware/OS) and 
sometimes genuine "no money" situations, where buying a new system, even a 
Raspberry Pi, might be a hurdle for some.
The fact that they're doing email means they do have some form of networking, 
likely using:
 * Packet Drivers: These are low-level network interface card (NIC) drivers for 
DOS, like the Crynwr Packet Drivers. They provide the basic interface for 
TCP/IP.
 * DOS TCP/IP Stacks: Over the packet driver, they're running a DOS-specific 
TCP/IP stack. The most common modern one for FreeDOS is mTCP by Michael B. 
Brutman, which is highly regarded for its stability and modern features (for 
DOS). Older ones like WatTCP also exist.
 * DOS Email Clients: They're using DOS-native email clients that are 
configured to work with these TCP/IP stacks. Some very old ones exist, but more 
recently, projects like FlMail (mentioned in your search results) have emerged 
that specifically try to support SSL/TLS for SMTP and POP3/IMAP.
The HTTPS Download Problem for FreeDOS/DOS on 386/486: The TLS Barrier
Even with an excellent DOS TCP/IP stack like mTCP, the crucial hurdle for 
downloading from HTTPS sites is TLS (Transport Layer Security). Modern websites 
primarily use TLS 1.2 or TLS 1.3 for security.
 * Ancient OpenSSL/TLS Libraries: The problem isn't just the browser or wget 
itself, but the underlying cryptographic libraries it uses. To support TLS 
1.2/1.3, you need relatively modern OpenSSL versions (or equivalent).
 * CPU Requirements for Modern Crypto: Even if you could compile a somewhat 
modern OpenSSL for a 386/486, the cryptographic operations for TLS 1.2/1.3 are 
computationally intensive. A 386 or 486 would be incredibly slow, potentially 
taking minutes or even hours to establish a single secure connection, let alone 
download a large file. Modern CPUs have hardware acceleration for these 
operations that 386/486s completely lack.
 * Memory Footprint: The code and data structures for modern TLS are far larger 
than what typical 386/486 machines (with their limited RAM, e.g., 4MB-16MB) can 
comfortably handle.
Why Email Might Work When Web Browsing/Downloading Fails:
 * Email TLS is often less "chatty": While email clients like FlMail do claim
TLS support, the connection negotiation may be simpler and less demanding than
a full web browser constantly negotiating new connections and rendering
complex pages.
 * Server Compatibility: Some email servers might still support older, less 
secure TLS versions (like TLS 1.0 or 1.1) for specific legacy clients, whereas 
most general web servers aggressively disable these for browser connections.
 * Limited Features: An email client just needs to send/receive text-based 
messages, not parse complex HTML, CSS, JavaScript, and dynamic content that 
modern web browsers require.
What the FreeDOS Community Actually Does (and why your "middle-man" idea is 
still the practical answer):
From lurking on those lists, you'll find that for HTTPS web access or file
downloads, the overwhelming majority of FreeDOS users resort to one of the
following, which aligns with my previous recommendations:
 * "Sneakernet" / Manual Download: They download the file on a modern computer 
(Linux, Windows, Mac), put it on a USB stick, CD-R, or shared network drive, 
and then transfer it to the FreeDOS machine. This is the most common and 
reliable method.
 * Local HTTP Server on a Modern Machine: They download the file on a modern 
machine, then run a simple HTTP server (like Python's http.server or a quick 
Nginx config) on that modern machine. The FreeDOS machine then accesses the 
file via a plain HTTP request to the local IP address of the modern machine. 
This is the "middle-man" you described.
 * Very Old "Text-Mode" Browsers (with caveats): Some might try old text-mode 
browsers like Arachne or Links.
   * Arachne: A graphical DOS browser (with a text mode as well). It does
have some SSL/TLS support, but it's based on very old libraries and will
struggle or fail with most modern HTTPS sites (again, due to TLS version and
cipher-suite incompatibility).
   * Links: The text-mode browser Links (and its graphical fork, ELinks) is 
still actively developed for modern systems, but the DOS port would be an older 
version, and its SSL/TLS capabilities would depend heavily on the underlying 
cryptographic libraries available and compilable for DOS. Again, TLS 1.2/1.3 is 
the blocker.
 * Specialized Tools/Scripts: Very occasionally, someone might try to build a 
custom tool, but it's typically a one-off effort that hits the same 
TLS/CPU/memory walls.
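The "local HTTP server on a modern machine" option above really is a
one-liner: run `python3 -m http.server 8000` in the download directory and
leave it running. The sketch below wraps the same command in `timeout` plus a
localhost fetch only so it is self-contained; the port and file names are
arbitrary, and the commented HTGET line stands in for the FreeDOS side.

```shell
DIR=/tmp/bridge-demo
mkdir -p "$DIR"
echo "retro payload" > "$DIR/FILE.TXT"

# Serve the directory over plain HTTP (timeout is only for this demo;
# normally you would leave the server running):
(cd "$DIR" && timeout 3 python3 -m http.server 8321 >/dev/null 2>&1) &
sleep 1

# On the FreeDOS machine, mTCP's HTGET would fetch the same URL, e.g.:
#   HTGET http://<modern-machine-ip>:8321/FILE.TXT
# Here curl against localhost stands in for it:
curl -s -o /tmp/FETCHED.TXT http://127.0.0.1:8321/FILE.TXT
```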
Conclusion for 386/486 and HTTPS:
It's not realistically possible for a 386 or 486 to act as the HTTPS-capable 
"middle-man" for a modern HTTPS download, nor can it directly access most 
modern HTTPS sites. The technical limitations of CPU architecture, memory, and 
the extreme age of the available software and cryptographic libraries make it a 
non-starter.
The "middle-man" must be a system capable of running modern software and 
libraries (even a Raspberry Pi Zero W from 2017 is vastly more capable in this
regard than a 486). The FreeDOS machine then communicates with this modern 
"middle-man" using the protocols it can handle, typically HTTP or FTP.


On Wed, Jul 2, 2025, at 2:25 PM, sunbeamcar--- via Freedos-user wrote:
> Two bad things about new web sites.
> one they use HTML 5 which is like Chinese to old browsers
> 2nd certificates expire and new ones are not available for
> old browsers.
>   The web is rigged and I don't use it as much.
> For good searches duckduck is a lot better than Google.
>
>
>
> cheers
> DS
>
>
> _______________________________________________
> Freedos-user mailing list
> Freedos-user@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/freedos-user

