One to One, Many to One, and Many to Many
Christopher Rodriguez
Colorado State University Global
22SC-CSC300-1: Operating Systems and Architecture
Dr. Jonathan Vanover

One to One, Many to One, and Many to Many

This paper seeks to recommend an upgrade path for a computer laboratory. Its current setup consists of individual workstations running one of three operating systems (IBM OS/2, Microsoft Windows NT, or Microsoft Windows 2000), each using a one-to-one model of multi-threading; the proposed setup utilizes current-day operating systems that support that same one-to-one model, which will better suit the use-case of a laboratory in an academic setting.

This paper presents the case that the best path forward is to purchase a single powerful server running a virtual machine manager that will allow students to run various distributions of the GNU/Linux operating system in containerized, isolated virtual environments, and that the one-to-one model of multi-threading is both the most future-proof and the best economic decision for this use-case.

Initial Consideration of Obsolescence

Before outlining the entirety of this proposal, it is worth offering a word of caution to the operator of this laboratory: regardless of the reaction to this recommendation, please upgrade to some sort of operating system that is actively maintained and supported. IBM's OS/2 has been considered by IBM a target to migrate away from for over a decade (“Migration Station,” 2010), Windows NT was supplanted by Windows 2000 at the turn of the millennium (“Microsoft Renames Windows NT 5.0 Product Line to Windows 2000,” 2014), and Windows 2000 has not had active support since 2010 (“Microsoft Product Lifecycle Search,” 2013). From a maintainability perspective, it is vital to keep systems updated with current security patches. It will also make the job of actually maintaining the lab much easier in the long run, as there will be more people familiar with the operating system available, including a corporate help-desk that will be able to offer assistance, if needed.

Multi-threading Models

With the advent of multi-threaded computation, there came both user threads—which run above and independently of the kernel—and kernel threads—which are managed directly by the underlying operating system (Silberschatz et al., 2018, p. 166). However, all of this work must ultimately run on a CPU, so user threads must eventually be mapped onto kernel threads in order to be scheduled for execution. There are three common models for this mapping.

Many to One

In the “Many to One” model of multi-threading, multiple user threads are all mapped to a single kernel thread. This has the benefit of allowing all thread management to happen in user space, as opposed to needing the kernel to manage threads. This efficiency caused the Many to One model to develop a niche in the single-core CPU era. However, it comes with two major drawbacks: all user-space threads halt if one of them makes a blocking system call on the single kernel thread, and multi-core processors cannot be utilized to their fullest extent, because only one user thread can access the kernel at a time (Silberschatz et al., 2018, Section 4.3.1).

One to One

The model currently in use in the lab for this project, a “One to One” relationship creates a kernel thread for each user thread, effectively ensuring each thread in user-space can always access the kernel, even when another thread is blocked on a system call. This addresses both of the downsides of the Many to One model, but comes with another cost, albeit one rapidly becoming less severe: the One to One model creates the most kernel threads, as there will always be exactly as many kernel threads as user threads. This used to be much more problematic when processing resources were more limited, and so gave rise to the final model (Silberschatz et al., 2018, Section 4.3.2).

Many to Many

In the “Many to Many” model, user threads are multiplexed across a possibly smaller number of kernel threads, depending on the capabilities of the machine and the needs of the application. This sounds more complex than the two preceding examples because it is: There are a lot of variables in this process, and developers need to be aware of the relationships they can create on the machines their code will run on. However, this complexity solves all three of the above problems, as there will (theoretically) always be a reasonable number of kernel threads, and user threads are mapped to them based on need and availability, dynamically over time (Silberschatz et al., 2018, Section 4.3.3).
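The multiplexing at the heart of the Many to Many model can be sketched by analogy with a thread pool, in which many short-lived tasks (standing in for user threads) are mapped onto a small, fixed number of worker threads (standing in for kernel threads). This is an illustration of the mapping only, not an actual M:N scheduler:

```python
from concurrent.futures import ThreadPoolExecutor
import threading

def task(n):
    # Record which worker thread ended up serving this "user-level" task.
    return (n, threading.get_ident())

# Twelve tasks ("user threads") multiplexed onto three workers
# ("kernel threads"); the pool decides the mapping dynamically.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(task, range(12)))

distinct_workers = {ident for _, ident in results}
print(f"{len(results)} tasks ran on {len(distinct_workers)} worker threads")
```

Note that the number of distinct workers never exceeds three, no matter how many tasks are submitted, which is precisely the property the Many to Many model exploits.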

Staying One to One

Though alternatives exist, the “One to One” model is the clear winner from a pragmatic standpoint. First and foremost, in 2022 its drawback is nearly trivial: There are enough computing resources in the average machine, be it virtual or physical, to not have to worry about accidentally creating too many kernel threads. And secondly, the most common end-user operating systems—GNU/Linux, Microsoft Windows, and Mac OS/X—all use a One to One model (Silberschatz et al., 2018, p. 167).

Moving to one of the other models effectively means becoming nonstandard, and that will only make things more complicated for the end users.
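On the One to One systems named above, this mapping can be observed directly. The following sketch (assuming CPython 3.8 or later on GNU/Linux) shows each user-level thread being backed by its own kernel thread with a distinct native thread ID:

```python
import threading

N = 4
barrier = threading.Barrier(N)
native_ids = []

def worker():
    # get_native_id() reports the kernel thread ID backing this user
    # thread (Python 3.8+). The barrier keeps all four threads alive
    # simultaneously, so the reported IDs are guaranteed to be distinct.
    native_ids.append(threading.get_native_id())
    barrier.wait()

threads = [threading.Thread(target=worker) for _ in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{N} user threads -> {len(set(native_ids))} kernel threads")
```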

Operating System

Knowing that we are staying with a One to One model of multi-threading, the next question naturally is which operating system to choose for the server, and which operating systems to provide to the students.

For the server, there is an argument that it does not really matter what operating system is installed, past a certain point of usability (Moss, 2008, p. 92). Indeed, since it will be largely serving virtual machines running various operating systems anyway, perhaps from a user standpoint it does not matter: a user might use any program inside of any operating system, whenever they like (Larkin, 2006).

However, from a system administration point of view, the clear choice for a recommendation is GNU/Linux.

Security by Obscurity

From the standpoint of security, there is at first little doubt that GNU/Linux and Microsoft Windows are fairly equal: both are well-known operating systems with large user bases that receive regular updates and have many people working on them. However, GNU/Linux edges out Microsoft Windows in two crucial areas: its generality and its obfuscatability.

There is a common misconception that it’s possible to increase security by using lesser-known systems, sometimes referred to using the phrase “Security through Obscurity”, and there is a common axiom called Kerckhoffs’ Principle that asserts its falsehood. As shown by Pavlovic (2011), Kerckhoffs’ Principle is only broken once the model of security shifts from one of imperfect information to one of incomplete information (p. 134): That is, security is about generalization, misdirection, and obfuscation more than about ensuring an attacker’s unfamiliarity with the workings of a specific system.

If generalization is important, it can be said that any POSIX-compliant operating system might be secure—and even more so GNU/Linux, which follows a standardized directory structure at its core, on top of which many services might add to or run (LSB Workgroup, 2015, p. 1). And as for obfuscation: any technophile who has engaged in “Distro Hopping”—moving from one distribution of GNU/Linux to another, for the experience and novelty—can attest that each distribution behaves slightly differently, especially considering the variety of setups available for GNU/Linux and the speed with which software in its ecosystem is updated.

If GNU/Linux is the subversive bazaar, then Windows is the classic cathedral in comparison: proprietary, slow to change, and “official” in a way that makes its security flaws inobscure.


That said, all existing data will need to be ported over to a different filesystem, as NTFS—the filesystem in which all of the data is presumably stored currently—still lacks perfect support in GNU/Linux, with various warnings about data corruption still whispered around its recommended use (“NTFS-3G,” 2022). From an end-user perspective, however, filesystem choice is largely opaque, unless less common options are chosen. A strong recommendation would be EXT4, the latest iteration of the extended filesystem, which can handle very large filesystems (Team, 2022).

Large filesystems will be very important, as this system is primarily going to be serving users entire virtual machines.


A cluster of servers—or, for smaller applications such as this project, a single server acting as the sole host—running clusters of virtual machines provides a few widely-accepted benefits over running on real hardware: new desktops or servers can be provisioned in seconds; those same virtual machines can easily be moved to new hardware as it becomes necessary or desirable; students or professors can use the system without needing assistance or approval from anyone, even to provision an entirely new machine; and none of the above costs any more money than the original setup and running costs for the system as a whole (Yuan et al., 2013, p. 77).

Virtual Machines

When outlining the thought process behind developing Clojure, Hickey (2015) claimed that virtual machines—and not operating systems running on bare-metal hardware—were the platforms of the future. And when defining an environment to work in, Bellum and Linvega (2022) chose to implement a small virtual machine for ease of implementation and portability. It is clear that virtual machines offer some notable benefits over operating systems on bare-metal hardware.

Modern computers use an enormous number of layers of abstraction to provide the familiar experience users expect across constantly varying hardware. As referenced by Munroe (2009), even something as deceptively simple as watching a video of a cat involves so many layers it is hard to count them. Virtual machines abstract most of this away, providing a consistent experience regardless of hardware, and allowing specifics—like peripherals and storage drives—to be changed at will. This provides a safe and customizable environment for any sort of work, as noted by Schocken (2009) at the end of their study: modular components surrounding a virtual machine allow them to be reused and reworked for subsets of the original goal (p. 209).

The Hypervisor

Virtual Machines need a Hypervisor to provision them.

For this, as the system is already running GNU/Linux, the simplest recommendation is the libvirt virtualization API, which supports multiple varieties of virtualization, including QEMU, Xen, LXC, and KVM (“Libvirt Virtualization API,” 2022). This system will not be using Microsoft's Virtual Machine Manager, as it is not using Windows as the host OS.
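For illustration, a student VM under libvirt would be described by a domain definition along these lines; the machine name, sizes, and image path here are hypothetical placeholders:

```xml
<domain type="kvm">
  <name>student-vm-01</name>
  <memory unit="GiB">4</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch="x86_64">hvm</type>
  </os>
  <devices>
    <disk type="file" device="disk">
      <driver name="qemu" type="raw"/>
      <source file="/var/lib/libvirt/images/student-vm-01.img"/>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>
```

Once registered with the host (for example, via `virsh define`), such a domain can be started, stopped, and moved without touching physical hardware.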

The typical downside mentioned when talking about Hypervisors is processing overhead (Dai et al., 2013, p. 111). As this is a computing laboratory and not a business environment, this is of little concern here: Quotas can be put in place to ensure no one person monopolizes the resources, if it becomes necessary.


But what hardware is best to store the virtual machines and user data? After all, a system like this would be of little use if everything were lost when the machine shut down.

Hardware Recommendations

One upside to using GNU/Linux is the breadth of compatible hardware available: long gone are the days of wondering if a machine will run a common variety of GNU/Linux. Therefore, the only considerations needed for hardware are cost and power. In particular, the KVM module of the Linux kernel allows any CPU with virtualization capabilities to be used directly for this purpose.

Therefore, any strong server capable of virtualization and of running GNU/Linux will do.

Virtualized Storage

According to Yuan et al. (2013), the shift toward virtualizing laboratories and data centers in both business and academia has largely been motivated by a desire to lower costs in general, but has had the unintended consequence of making such installations much more environmentally friendly as well (pp. 76–77). It is little wonder then that the recommendation would be to utilize virtualized storage as well, as the costs and impact on the environment will be made much lower through this practice.

With this in mind and the use of the EXT4 filesystem as mentioned above, storage can easily be expanded and contracted per user as needed, and stored as files on a massive partition of the hard drive. These can be backed up offline on a regular schedule.
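As a sketch of how such per-user storage might be provisioned (the image name and size here are arbitrary assumptions), a raw disk image can be created as a sparse file, so that it consumes real space on the host only as the virtual machine actually writes data:

```python
import os
import tempfile

def create_sparse_image(path, size_bytes):
    """Create a sparse raw disk image: the file reports size_bytes of
    capacity, but consumes almost no space until blocks are written."""
    with open(path, "wb") as f:
        f.truncate(size_bytes)

tmpdir = tempfile.mkdtemp()
image = os.path.join(tmpdir, "student-vm.img")
create_sparse_image(image, 20 * 1024**3)  # a 20 GiB virtual disk

st = os.stat(image)
print(st.st_size)          # logical size: 21474836480 bytes
print(st.st_blocks * 512)  # bytes actually allocated: near zero on EXT4
```

This is the mechanism that lets storage be "expanded and contracted per user": the logical size is a quota, while physical consumption grows only with real use.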


The concept of a “Redundant Array of Inexpensive Disks” (RAID) has existed for a long time. With the advent of the cloud, less focus has been placed on this kind of storage than in past years. However, this project has been entirely local up to this point, and to continue in that spirit, storage will be local as well.

When discussing the different varieties of RAID setup, there are generally a total of 7 RAID “levels” to consider—interestingly, the two at the extremes (RAID 0 and RAID 6) were not part of the original set of RAID organizations (Chen et al., 1994, p. 152).

It is wise to immediately remove three levels from consideration: RAID 0, which is non-redundant and therefore no safer than a raw disk array1, and both RAID 2 and RAID 4, which, while useful as conceptual stepping-stones, are strictly inferior to other choices: RAID 2 makes inefficient use of hardware with its disk-array-for-redundant-parity approach, and RAID 4 creates a bottleneck out of its single parity disk (Chen et al., 1994, pp. 153–154). Following this, removing RAID 1 and RAID 3 comes naturally, as neither is particularly suited to a virtual-machine-based laboratory system.

This leaves RAID 5 and RAID 6.

RAID 5 is useful specifically when performance is the highest priority: in RAID 5, both data and parity are stored across disks in uniformly sized blocks. The effective capacity of one disk is dedicated to “check” blocks, which can be read in order to restore blocks from a failed disk; because these are uniformly spread throughout all of the disks, no one disk is ever taxed too heavily by the system (Thomasian & Blaum, 2009, 7:8). However, RAID 5 is still only one-disk-fault-tolerant (1DFT): if two or more disks fail, data is lost. This is a very popular arrangement in real life, because it allows for both reliability and performance improvements: it is used in myriad cases, and might be thought of as the “default” RAID level.

RAID 6 is useful specifically when an increase in reliability is worth a small hit to performance: In RAID 6, everything is exactly the same as in RAID 5, except RAID 6 devotes the effective capacity of two disks to “check” blocks (Thomasian & Blaum, 2009, 7:8). It is therefore 2DFT. This difference makes it the clear choice of the basic RAID levels for data retention, and if feasible from a monetary standpoint, is likely the “best” general-use single RAID level.

If it is within budget, RAID 6 is definitely worth going for. Otherwise, stick with RAID 5.
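The trade-off between the two levels can be made concrete with a little arithmetic. The helper below is a simplified sketch (real arrays also involve controller, rebuild-time, and performance considerations); the six-disk, 4 TB figures are hypothetical:

```python
def raid_usable_capacity(level, disks, disk_size_tb):
    """Usable capacity and disk-fault tolerance for the striped RAID
    levels discussed above (simplified: capacity lost equals the
    effective size of the parity "check" blocks)."""
    if level == 0:    # pure striping, no redundancy: 0DFT
        return disks * disk_size_tb, 0
    if level == 5:    # one disk's effective capacity as parity: 1DFT
        return (disks - 1) * disk_size_tb, 1
    if level == 6:    # two disks' effective capacity as parity: 2DFT
        return (disks - 2) * disk_size_tb, 2
    raise ValueError("unsupported RAID level")

for level in (0, 5, 6):
    cap, dft = raid_usable_capacity(level, disks=6, disk_size_tb=4)
    print(f"RAID {level}: {cap} TB usable, survives {dft} disk failure(s)")
```

For this hypothetical array, moving from RAID 5 to RAID 6 costs 4 TB of usable space in exchange for surviving a second simultaneous disk failure, which is the budget question posed above in miniature.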

1 RAID 0 is actually significantly less safe than a single disk, as noted by Chen et al. (1994): an N-disk RAID 0 array is N times more likely to fail than a single disk—or, put another way, RAID 0 divides the longevity of your storage by the number of disks (p. 147)!

Network Security

A final point of consideration should be security: if a bad actor were to access the system and the files stored within it, they could delete everything, siphon data from the students, or even use the system as a base from which to attack others. We will protect against this by following the recommendations of Popa (2022), combining hardware and software solutions to ensure the system remains accessible to those who are meant to use it, and inaccessible to those who are not.


As Yampolskiy et al. (2021) asserted in their discussion of a security model for additive manufacturing, the traditional model of network security can be broken into three pieces: confidentiality, integrity, and availability. Confidentiality is broken when an outside party gains access to private information. Integrity is broken when private information is altered. And availability is broken when private information is prevented from being delivered to its recipient (p. 4).

Each of these must be considered individually, as any one of these breaches would undermine the integrity of the system, and therefore the work being done within it. However, with a proper application of hardware and software (and a modicum of user training) it is possible to ensure no data is visible to anyone but its intended recipients.

An Unsolved Problem

During their recent webinar entitled “Secure Computation in Practice”, Popa (2022) stated there was a gap between two separate, relatively secure areas in common networked computation: the two commonly protected areas are “in transit” (while uploading/downloading the data) and “at rest” (while storing the data). In between these, however, there is a period where the unencrypted data is “in use” (being transformed via computation) in the memory of the server (00:08:01).

One major attack vector that still has (seemingly) no good solution is to breach the security of a server responsible for performing calculations on sensitive data and view the data as it changes in memory. Through this sort of attack vector, around 533 million (533,000,000) user records were compromised from a single company's servers in 2021 alone (Popa, 2022, 00:05:52). Popa goes on to propose a combination of hardware enclaves—where there is effectively a gateway-encryption-module between the processor and memory—and public key cryptography—so multiple people can access and decrypt the encrypted data—that will allow the data to stay encrypted the entire time.

Public Key Cryptography

And on the software side, Public Key Cryptography will be the answer: If each user generates such a key on first login to the system, and all of their data is encrypted with this key, then it will remain private while not in use or while in transit. The only time the data would be accessible unencrypted is while it is in use—and while Popa (2022) has outlined a solution, that solution is still being worked on. For now, this less than perfect solution will have to suffice.
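The mechanics of public key encryption can be illustrated with a textbook-sized RSA example using Python's built-in modular arithmetic. The primes here are toy values for demonstration only; a real deployment must use a vetted cryptography library and keys of 2048 bits or more, never hand-rolled math:

```python
# Toy RSA key generation (illustration only -- insecure key sizes).
p, q = 61, 53                 # two (tiny) secret primes
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent: modular inverse of e

message = 42                  # a user's "data", encoded as an integer < n
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)

print(ciphertext, recovered)  # recovered == 42
```

This mirrors the proposed scheme in miniature: anyone holding the public key can encrypt a student's data at rest, but only the holder of the private key, generated at first login, can recover it.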


As has been outlined, this system will update the computer lab to modern standards, as well as provide a cost-effective and easy-to-maintain solution for many years to come. Using a Linux Host on a server running RAID 6 with a hardware enclave, a public key cryptography system, and virtualized storage that is regularly backed up to serve Linux and Windows VMs to those who need them without the need for new hardware provisioning is the clear best path forward for this laboratory.