Thus far we have examined a number of coordination issues that arise in concurrent processing. For the most part, these have been presented with the implicit assumption that the processors involved in a conflict, be they humans or machines, are independent entities, each focusing full attention on whatever process it is following. However, in reality there are more concurrent processes than there are processors to follow them. Therefore, by necessity, some processors have to be engaged in many processes at once. Unfortunately, processes call for actions, and processors are generally incapable of performing many actions at once -- thus, it would seem impossible for this necessity to be met.
The answer lies in the fact that the actions required by processes are, by definition, arranged in a sequence of steps. While multiple steps cannot be carried out simultaneously by the same processor, the processor can opt to alternate between processes, carrying out a limited number of steps in each process in each alternation. If a process is not completed during this time, then the processor will resume with the first incomplete step during the next alternation. In this way, a processor's available processing time is divided, or shared, amongst all of the processes for which it is responsible. And, even though the processor is engaged in many processes at once, it is still required to perform only one action at a time.
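This alternation can be sketched in a few lines of Python, with generators standing in for processes: each process suspends itself after every step by yielding, and a round-robin loop gives each process one step per turn. The process names and step counts here are illustrative assumptions.

```python
from collections import deque

trace = []  # records the order in which steps are performed

def process(name, steps):
    """A process is a sequence of steps; yielding suspends it."""
    for i in range(1, steps + 1):
        trace.append(f"{name}{i}")
        yield  # hand control back to the scheduler

def run(processes):
    """Alternate between processes, one step per turn, until all finish."""
    ready = deque(processes)
    while ready:
        proc = ready.popleft()
        try:
            next(proc)          # carry out the next incomplete step
            ready.append(proc)  # suspend the process and requeue it
        except StopIteration:
            pass                # process complete

run([process("A", 3), process("B", 2)])
print(trace)  # ['A1', 'B1', 'A2', 'B2', 'A3']
```

The two processes' steps come out interleaved, yet the processor (the `run` loop) only ever performs one action at a time.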
It may intuitively seem that a processor using such a time sharing system would not accomplish its processing any faster than it would if it followed each process to completion all at once (in other words, if it were only engaged in one process at a time). After all, the processor has to perform all of the same actions in either case -- the time sharing system merely shuffles them around a bit. However, it is important to remember that in between the steps of a process there can be delays. It is these delays that time sharing processors can exploit to gain an advantage.
For instance, imagine a person who has two processes to carry out: making a cake and washing the dishes. The process for making the cake consists of a number of preparation steps, which will take half an hour, and a baking step, which will take one and a half hours, requiring a total of two hours to complete. The process for washing the dishes is repetitive, with steps for scrubbing, rinsing and drying each item, requiring a total of a half hour (because the dishes haven't been washed in a long time). If this person were to make the cake and then wash the dishes, it would take a total of two and a half hours. However, this would not be a very efficient use of his time, because the baking step for the cake simply involves putting the cake in the oven and waiting for an hour and a half. Rather than staring at the oven for an hour and a half, it would be much smarter to put the cake-making process on hold and use the time to do something else, such as wash the dishes. Then, the cake-making process can be resumed later, starting with the next action in sequence, removing the cake from the oven. In this case, both processes will still be completed, but now thirty minutes faster due to the time sharing.
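The arithmetic of the example works out as follows, in minutes: with time sharing, the half hour of dish washing fits entirely inside the hour-and-a-half baking delay, so it adds nothing to the total.

```python
cake_prep, cake_bake, dishes = 30, 90, 30  # durations in minutes

one_at_a_time = (cake_prep + cake_bake) + dishes  # finish the cake, then wash
time_shared = cake_prep + max(cake_bake, dishes)  # wash while the cake bakes

print(one_at_a_time)  # 150 minutes: two and a half hours
print(time_shared)    # 120 minutes: two hours, thirty minutes saved
```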
So what does this have to do with computers? The processes which computers carry out are full of delays which can be used by their processors in this way. For example, most computer processes involve the use of slow devices, such as hard drives, network cards, and even the computer's user, who cannot provide input with mouse and keyboard nearly as fast as the computer can accept it. If the processor were to simply stop and wait for these things to finish before continuing, a significant amount of processing time would be wasted. Of course, there are some computer processes, particularly those involving intense calculations, that do not have many delays. To prevent one such process from tying up the processor, safeguards can be put in place that limit the total amount of continuous time given to any process. In this way, all of the processes are given fair and regular chances to proceed.
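Such a safeguard can be sketched by extending the round-robin idea with a quota: each turn, a process may perform at most `quantum` steps before it is suspended, so even a calculation-heavy process with no natural delays cannot tie up the processor. The generators and names below are illustrative stand-ins, not an actual operating system mechanism.

```python
from collections import deque

trace = []  # records the order in which steps are performed

def calculation(name, steps):
    """An illustrative calculation-heavy process with no natural delays."""
    for i in range(1, steps + 1):
        trace.append(f"{name}{i}")
        yield

def run(processes, quantum=2):
    """Round-robin, limiting continuous time to `quantum` steps per turn."""
    ready = deque(processes)
    while ready:
        proc = ready.popleft()
        try:
            for _ in range(quantum):
                next(proc)        # one more step of this process's turn
            ready.append(proc)    # quota reached: suspend and requeue
        except StopIteration:
            pass                  # process complete

run([calculation("X", 4), calculation("Y", 2)])
print(trace)  # ['X1', 'X2', 'Y1', 'Y2', 'X3', 'X4']
```

Process X would happily run all four of its steps at once, but the quota forces it to give way after two, so Y still gets its fair and regular chance.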
Computer time sharing was first explored in the early days of multi-user operating systems. The goal was to allow many users, on separate terminals, to use a computer simultaneously, allowing each user to start and stop processes which the processor would carry out in an alternating fashion. Today, almost all operating systems implement some form of time sharing, enabling even single users to run many processes at once, a feat known as multitasking. Because computer processors operate at much higher speeds than humans can perceive, time sharing computers appear to actually be doing many things simultaneously -- in fact, they are merely switching rapidly between processes, giving each a small amount of time before suspending it and moving on.
So what, then, of the coordination problems described in earlier sections? If potentially conflicting processes are actually being carried out by the same processor, is there any chance that they can still interfere with each other? The answer is yes, for a number of reasons. Because processes are usually suspended while lengthy operations are in progress, other processes which utilize the same resources that are tied up in those operations can cause problems if they are resumed at inopportune times. Therefore, it is still necessary for the processor to use appropriate coordination measures to prevent race conditions and enforce exclusive access to critical resources. Synchronization is also still an issue, because processes can still have ordering requirements. And with these issues necessarily comes the need for reliable arbitration and deadlock avoidance.
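A small sketch makes the danger concrete: two threads each read a shared balance, are suspended mid-operation, and then write back, losing one of the updates. A lock enforcing exclusive access to the critical resource prevents it. The deposit scenario and the `sleep()` that forces the inopportune switch are illustrative assumptions, not a real banking system.

```python
import threading
import time

balance = 0
lock = threading.Lock()

def deposit(amount, use_lock):
    global balance
    if use_lock:
        lock.acquire()
    current = balance            # read the shared resource
    time.sleep(0.05)             # suspended at an inopportune time
    balance = current + amount   # write back, possibly clobbering an update
    if use_lock:
        lock.release()

def run(use_lock):
    global balance
    balance = 0
    threads = [threading.Thread(target=deposit, args=(100, use_lock))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return balance

unsafe = run(use_lock=False)  # typically 100: one deposit is lost
safe = run(use_lock=True)     # 200: the lock serializes the critical section
print(unsafe, safe)
```

Note that the two threads never execute truly in parallel here; the race arises purely from one being resumed while the other is suspended mid-operation, exactly as described above.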
For people who are carrying out multiple processes with time sharing, coordination measures are often simply intuitive. For example, if one were baking a cake in the oven at 350 degrees, it would be obvious that, in order to broil a pot roast in the same oven, the cake would have to finish baking first, because the oven, as a resource, requires exclusive access. However, the processes being carried out by computers are generally programmed by different people with no knowledge of each other's intentions. Thus, it would be an impossible task to anticipate all possible interactions intuitively, and more pessimistic coordination is often required. Most time sharing operating systems have built-in coordination mechanisms (such as semaphores) to ensure that all processes are using compatible methods. However, the same attention must be given to the proper use of these mechanisms, just as in true concurrent processing, if coordination problems are to be avoided.
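The oven example can be restated with a semaphore, the kind of built-in mechanism just mentioned: a semaphore holding a single permit models the oven's requirement for exclusive access. The dish names and the brief sleep standing in for baking are illustrative assumptions.

```python
import threading
import time

oven = threading.Semaphore(1)  # one permit: one dish in the oven at a time
log = []

def use_oven(dish):
    with oven:                     # wait until the oven is free, then claim it
        log.append(f"{dish} in")
        time.sleep(0.01)           # baking (or broiling)
        log.append(f"{dish} out")  # permit is released when the block exits

dishes = [threading.Thread(target=use_oven, args=(name,))
          for name in ("cake", "pot roast")]
for t in dishes:
    t.start()
for t in dishes:
    t.join()

print(log)  # in/out pairs never interleave: each dish finishes alone
```

Whichever thread claims the permit first bakes to completion; the other waits, just as the intuition about the shared oven demands.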