Replies: 24 comments 18 replies
-
Note that Win32s has no console subsystem, no support for threads, and executes in a shared address space (requiring a relocatable executable).
-
There are hacks to mimic Win32 threads on Win32s, but they are ugly. To begin with, Windows 3.1 is a single-core operating system that uses cooperative multitasking, which is itself ugly. This problem is shared with running Rust on bare-metal embedded systems, where there is also nothing external to provide threading and usually no hardware support for parallel execution.

There is a difference between concurrency and actual parallel execution: the latter requires hardware that runs more than one task at a time, such as hardware threads or multiple cores. The hardware Windows 3.1 can run on, like many embedded systems, has a single core and no hardware threads, so there is no parallelism unless you actually add multiple processors to a system. There were 386 systems with multiple processors, but they all ran some variant of Unix or another workstation/server operating system. Windows 3.1 and 9x never had support for anything like that, while Windows NT could run on the dual-Pentium systems that shipped in some workstations. So technically Windows 9x has fake threads with slow preemptive multitasking, while Windows NT (up to and including Windows 11) supports multiple processors and multiple cores, meaning real threads.

To achieve concurrency, it is best to rely either on a non-threading concurrency model or on some kind of virtual threads inside Rust. Even on a modern system it is unwise to start more OS threads than there are cores or hardware threads. On PC hardware, "hardware threads" means Hyper-Threading, where one physical core shows up as two logical cores. So if you have 4 real cores with Hyper-Threading, you should never have more than 8 active threads in the entire operating system, because exceeding that has a significant negative impact on OS performance; in many scenarios even running 8 threads instead of 4 hurts. That is why many scientific applications like MATLAB run best with Hyper-Threading disabled and one thread per physical core.
A bit of a rant, but the point is that on Windows 3.1 and Windows 9x you should not have more than one thread. Use other concurrency, like async/await, which plays well on a single-core processor. The solution is to make starting a new thread fail. Simple as that. There is no reason to have threads on Windows 3.1 or Windows 9x.
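The "no threads, async only" approach can be made concrete in Rust: a future can be driven to completion on a single OS thread by a hand-written executor. Here is a minimal sketch using only std; the `block_on` name and the busy-poll loop are illustrative, not how a production executor would park (on Win16 you would pump the message loop instead):

```rust
// Minimal single-threaded executor: polls a future to completion on the
// current thread, with no additional OS threads involved.
use std::future::Future;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// A waker that does nothing; good enough for a busy-polling sketch.
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            // A real executor would sleep until woken; busy-polling keeps
            // the sketch short.
            Poll::Pending => std::hint::spin_loop(),
        }
    }
}

fn main() {
    let answer = block_on(async { 21 * 2 });
    println!("{answer}");
}
```

The point is that all the concurrency machinery lives inside the process; nothing here requires the OS to know what a thread is.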
-
As for a console subsystem... just make a window and put text in it; we don't need a full ANSI terminal window. Sure, if we are going to run a series of console applications we may need a simple command interpreter, but that is not difficult to implement.
-
As someone who lived and programmed through the Windows 9x to single-core Windows XP era, I'm calling [citation needed] on your blanket statement to steer away from threading with only one CPU core. Yes, it's inefficient to have more than cores+1 CPU-bound threads (hence the classic advice to feed
As someone who is now a retro-hobbyist, I'll say that one of the most noticeable ways Microsoft leap-frogged Apple with Windows 9x is that, even on Mac OS 9.2.2 on a 933MHz Power Mac G4 Quicksilver 2002, you can feel the UI struggling to remain responsive while something like the official Mac OS Finder file-copy dialog is running, thanks to the lack of preemption. By contrast, the UI remains snappy under Windows 98SE on my 133MHz Pentium 1. Is implementing preemption for Windows 3.1 overkill? Sure... but threading was one of the big things that made Windows 9x great. EDIT: TL;DR: Threading isn't just about CPU-bound parallelism. It's also akin to the "VLIW vs. runtime branch prediction" conundrum, where it turns out to be more difficult in practice than you expect to reliably and effectively pick yield points in I/O-bound tasks to ensure the UI remains responsive.
-
*nod* That's how Open Watcom C/C++ does it if you ask it to build a Win16 app with console APIs.
-
I am talking from my own experience. What I am saying is that running more than one ACTIVE thread/process at a time is less efficient than single-threaded concurrency. GCC does not have single-threaded concurrency; instead it often goes idle waiting for IO operations and the like. Unless we have a compiler that uses async/await or some other thread-less concurrency to compare with, we cannot use compilers to test my statement. There are compilers that are significantly faster than GCC, such as Turbo Pascal, but as that is proprietary code it is difficult to say why it is faster; I doubt it is by spawning threads.

I have written quite extensive single-threaded GUIs in C# using async/await and they do not struggle to stay responsive. The cooperative multitasking is handled every time an async function calls another async function, because that always pushes control back to dotnet's task scheduler. It is true that both Windows 3.1 and MacOS had stability problems due to tasks going inactive without handing control back to the scheduler. Those are bugs, and while they can still occur with async/await frameworks and the like, they are a lot less frequent. It is similarly possible, and frequently happens, that a modern multi-threaded system grinds to a halt when software has bugs the OS cannot handle. That is why patterns that minimize bugs are important no matter how fancy your OS is.

Note that on these old operating systems, cooperative multitasking was implemented by having tasks return control through a specific call into the kernel. With modern cooperative multitasking (my experience is mostly with C#, so I will reference that, although I know Rust has similar mechanisms), any call from an async function to another async function returns control to the scheduler, and using a non-async function where an async one is available produces a compiler warning.
That being said, with cooperative multitasking you still need to know what you are doing and test your code appropriately. My point is that on non-NT Windows it is better to use async/await patterns than threading. The same goes for modern systems, unless you are actually spreading calculations across cores; then you need threads, but that is something non-NT Windows has no support for anyway.

Also note that there is a lot of embedded software with a design similar to Windows 3.1 and old MacOS that does not have the problems you pointed out with cooperative multitasking. The reason is testing, testing and more testing. With today's testing frameworks, the only way such a system becomes unresponsive is if we take shortcuts on writing good tests, or on running those tests regularly. We should remember that the cooperative desktop systems of the 1980s and their software were written over a short period of time, when practices like extensive testing were simply not seen as important. That is why even the software shipped with those operating systems had plenty of bugs with performance and stability implications that simply would not exist in, say, a robot control system of similar architecture and complexity.

What I am not claiming, and am not going to claim, is that writing well-performing cooperative concurrent code is as easy as just spawning threads. What I do claim is that once you have such code, and it contains no bugs that cause major stalls, it performs a lot better than spawning threads, except when you actually need to offload work to another hardware execution unit. Finally, as a tool for writing Windows 3.1 applications, Rust with async/await is going to be the easiest way to write a stable Windows 3.1 application. Just make the internal scheduler regularly hand control back to the OS.
-
That's basically my point. Telling people to just write correct cooperative multitasking on Windows 9x has a tone-deafness to it, akin to saying that you only need Rust if you're working in a team and solo programmers should just write correct C++.
-
Well, there is plenty of evidence that it is fully possible to do so. The difference between C++ and other languages is not about whether correct code is possible but about how much it costs. Writing correct cooperative multitasking using a modern framework like the one in C# (and probably the one in Rust) is going to cost a lot less than doing it with pretty much any compiler that was available for the discussed platforms in the era when they were popular. On the other hand, a lot of the cooperative multitasking in embedded systems is written in C and works very well. The main reason is simply that embedded developers spend a lot of time making sure their code is correct, which of course makes such projects expensive (when done right). Automated testing and frameworks help keep the cost down compared to something written before those tools were available, but you still need a significant budget for testing. That said, a lot of Windows 3.1 software was remarkably stable given that the only method for making it so was having a developer who wrote correct code. So no, I do not agree. There is too much code written in languages like C that uses cooperative multitasking and works perfectly well without freezing or crashing. And no matter what compilers and tools you use to improve the development process, no system is going to be fit for production use without appropriate testing.
-
Ah, I didn't know that. I actually have plans to port a console game to Win16 and was thinking about writing something like that, but if Watcom already has it I don't need to.
-
I don't know how rich an API it provides. I've only ever poked at it as a replacement for very basic use of CMD.EXE for output. |
-
You're still assuming that the users are willing to reinvent or use Rust versions of everything they need, which would otherwise be blocking-API'd if they're looking at the period dependencies. There's a reason tokio has
-
Not if those libraries contain bugs that cause the application to freeze, no.
-
No, everyone didn't. The embedded industry continued to use Win16 and many other cooperative solutions, and is still using them. Win16 itself is rare these days, as it's not available for licensing, but there are plenty of other solutions. Microsoft themselves reintroduced cooperative multitasking inside dotnet, where it's very common to use it. And before dotnet, Java had green threads, which are also cooperative multitasking, now reintroduced as virtual threads.
-
There are plenty of libraries from that era that are fully functional by today's standards. And plenty of shitty libraries being written right now. No matter when it's written, if it's crap I replace it. I've reverse engineered printer drivers for being crap, so a simple library isn't going to stop me. A far bigger problem with old libraries and drivers, though, is that they're not available for Arm and other non-x86 hardware. Then I have no option but to replace them.
-
Well, I tend to be good at both, and quite often I don't have to be, because there are high-quality open source libraries around. Premature optimization is when you optimize something that doesn't actually have a major impact on performance. Finding and fixing bugs that have a major impact on performance isn't premature optimization. My experience is that on average the simplest way to fix a badly behaving library is to replicate the behaviour I need. Sometimes that means reverse engineering code. Sometimes it means listening in on the communication between a driver and a device to reverse engineer the protocol while completely ignoring the flawed implementation. As for Python, I absolutely hate it. Mostly I stick to C# and C while considering learning Rust. C# is way better than Python for projects that are large enough to need structure, and C# code is often as fast as C code, so there is no need to implement something like numpy in C. C, on the other hand, works on every processor and system that's worth using.
-
Also, I don't really agree that premature optimization is bad. If you practice optimization it becomes a skill and you just automatically write fast code without spending extra time on it. Searching for performance bottlenecks in order to optimize in a second phase is what takes time. Sure, while you are building the skill, premature optimization is going to take more time than just coding without care. But when your skill reaches a certain level it's going to save you time. You will also create fewer bugs, because writing optimized code forces you to know what the code is doing.
-
Oh, and if you pull in Qt you can forget about memory safety, unless you rewrite it. It doesn't matter if your bindings themselves are safe when the library isn't.
-
Well, if you are unwilling to look at toolkits other than Qt, that will limit you. Rust has plenty of UIs implemented in Rust; C# has Avalonia. When trying to maximize platform coverage, though, something game-related such as SDL may be a valid choice. Also, don't dismiss the value of writing console applications. A console application written in C will work on everything from a Commodore PET to a modern PC, and this discussion established that Watcom will cover Windows 11. Even better, you don't even need to compile for these platforms: with a serial cable or network adapter you can run a BBS, telnet or ssh server. I'm pretty sure that software for connecting to all three exists for Win16 (Trumpet, for example). It's difficult to find a retro computer without a terminal emulator and RS232.
-
Or here is another idea: if you must use Qt, write the frontend in C++ and the backend in any language that has TCP. The bonus is that should the frontend crash, it can just be restarted. And the two can run on different computers.
-
Well, you will also need to pick a platform. I made some games where I just wanted to target everything, and that enforced a combination of CC65, GCC and Watcom C. Going beyond text input simply wasn't reasonable given the very different graphics architectures; I would have had to do several different frontends for different platforms. With a narrower selection of target platforms, Godot is an excellent choice. I don't like the lack of control when running on top of a game engine, though, so I'm probably going to go with SDL. Although now I'm contemplating a BBS/telnet/ssh server, because I'm working on something text-based that needs a database backend. Let's say it's multiplayer but actually going to run on a single machine/server. Unfortunately there is nothing that easy to use for graphical applications unless you go for something browser-based. If we stick to modern platforms that can run either Firefox or Chromium with GPU capabilities, that's not a bad solution, and one that Godot supports, by the way. Although I'm leaning towards Blazor server-side rendering right now.
-
One of my currently idle projects is to replicate the experience of these old user interfaces in C# or something similar. A simulator rather than an emulator; something that a user of an old desktop environment would believe is real for quite some time. Perhaps with a plugin system for adding applications. This is rather inspired by Java LaF and the Windows 98 simulator for Android.
-
I think the future of the web is to use WebGPU and its descendants to render user interfaces. This is currently mainly used by games and emulators, but why not implement a desktop environment that runs in the browser and renders using WebGPU?
-
Well, to have any user experience the user must be able to run the application. Naturally very few use 8-bit computers like the C64, but I think including it is fun. For people on a limited budget I actually think the Raspberry Pi is far more widespread, and while it can do graphics, the earliest models are easily exhausted. Targeting so many platforms does limit what user experience can be achieved. I'll be limited in what I can implement, but look at what DOS BBS systems could do, and add RAD development with C# and Entity Framework...
-
I guess that depends a bit on electricity prices. While readline is useful for some use cases, its utility largely depends on what application you are writing. Unfortunately it is not ported to all the platforms I am currently targeting, so I can't really use it anyway. Some of my CLI applications are non-interactive and would have no use for it; others are entirely menu-driven and have no use for it either.

When I switched to flat screens it was more about space. And weight; moving those things around was not fun. Back then power was cheap, so I didn't mind using such equipment as a source of heat, but there was just a point where I had to clean out the CRT TVs and computer monitors. I usually keep things running, which is another reason I want low-power equipment. I don't really have space for the hardware that I want, but in the future I may start collecting some of the good stuff, especially if more things like the Orpheus II come out. New retro hardware is not the old hardware, but it is not an emulator either. As I am more interested in running new software, it is not much of a problem to me that some old games will not run, though I think eventually most of them will. Some will also run because they have been reverse engineered and ported to modern systems; Diablo, for example, is I think fully reverse engineered and ported to other platforms. In other cases we will more likely have clones that are close enough to the original game. FreePascal/Lazarus is one of those "close enough" clones, though it obviously has some work left before it is as good as Turbo Pascal/Delphi. I don't think it is as good as OpenWatcom on 32-bit, and if it supports 16-bit it is probably not good. FreePascal for legacy Windows versions is being improved upon, although we will probably never see official support. Command.com works for simple stuff, but you are right that it gets annoying when more complex things have to be done.

I've played around with ChatGPT since it was first released, but it is not until the newest o3 that it has become somewhat useful. Translating source code between languages, or fitting it to weird compilers like cc65, is rather useful, but it fails when projects become larger. It speeds up development, but you certainly can never rely on it.

It is not just about system requirements. Doing X11 over SSH adds a lot of latency because the protocol is very badly designed, and it makes whatever you are rendering a lot slower. It is not necessarily that you run out of hardware capacity; more often X11 does a lot of round trips and stalls in wait loops. VNC is better, but only slightly.
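To illustrate why round trips rather than bandwidth dominate a chatty protocol over a remote link, here is a back-of-envelope calculation; the RTT and per-redraw request count are illustrative assumptions, not measurements of real X11 traffic:

```rust
// If every request waits for its reply before the next one is sent, latency
// multiplies by the number of synchronous round trips, regardless of how
// fast the hardware on either end is.
fn main() {
    let rtt_ms = 30.0; // assumed WAN round-trip time in milliseconds
    let roundtrips = 200.0; // assumed synchronous requests per redraw
    let total_ms = rtt_ms * roundtrips;
    println!("{total_ms} ms per redraw"); // 6000 ms: latency-bound, not bandwidth-bound
}
```

Batching or pipelining requests (which is roughly what better remote-display protocols do) attacks the `roundtrips` factor rather than the link speed.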