Operating System Interview Questions

A list of the top frequently asked Operating System interview questions and answers is given below.

1) What is an operating system?

The operating system is a software program that enables the computer hardware to communicate and operate with the computer software. It is the most important part of a computer system; without it, the computer is just a box of hardware.


2) What is the main purpose of an operating system?

There are two main purposes of an operating system:

  • It is designed to make sure that a computer system performs well by managing its computational activities.
  • It provides an environment for the development and execution of programs.

3) What are the different operating systems?

  • Batch operating systems
  • Distributed operating systems
  • Timesharing operating systems
  • Multi-programmed operating systems
  • Real-time operating systems

4) What is a socket?

A socket is used to make a connection between two applications. The two endpoints of the connection are each called a socket.
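
As a rough illustration, the sketch below uses the POSIX sockets API to create one endpoint of a TCP connection and connect it to a server; the address 127.0.0.1 and port 8080 are placeholder values, and error handling is kept minimal.

```c
/* Minimal sketch of a TCP client socket using the POSIX sockets API.
   The address 127.0.0.1 and port 8080 are placeholder values. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* one endpoint of the connection */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in server;
    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_port   = htons(8080);
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    /* connect() joins this endpoint to the server's listening socket */
    if (connect(fd, (struct sockaddr *)&server, sizeof(server)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    const char *msg = "hello";
    write(fd, msg, strlen(msg));   /* data flows between the two socket endpoints */
    close(fd);
    return 0;
}
```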


5) What is a real-time system?

A real-time system is used when rigid time requirements have been placed on the operation of a processor. It has well-defined, fixed time constraints.


6) What is kernel?

The kernel is the core and most important part of a computer operating system; it provides basic services for all other parts of the OS.


7) What is monolithic kernel?

A monolithic kernel is a kernel in which all operating system code is contained in a single executable image.


8) What do you mean by a process?

An executing program is known as a process. There are two types of processes:

  • Operating System Processes
  • User Processes

9) What are the different states of a process?

The different states of a process are:

  • New Process
  • Running Process
  • Waiting Process
  • Ready Process
  • Terminated Process

10) What is the difference between micro kernel and macro kernel?

Micro kernel: A micro kernel is a kernel that runs only the minimal set of services (such as inter-process communication, basic scheduling, and low-level memory management) in kernel space. In a micro kernel operating system, all other services run as user-space processes.

Macro Kernel: A macro kernel is a combination of the micro kernel and monolithic kernel approaches.


11) What is the concept of reentrancy?

It is a very useful memory-saving technique used in multi-programmed time-sharing systems. It allows multiple users to share a single copy of a program during the same period.

It has two key aspects:

  • The program code cannot modify itself.
  • The local data for each user process must be stored separately.
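
A minimal sketch of the idea in C (the function names are purely illustrative): the reentrant version keeps all of its data in caller-supplied storage, while the non-reentrant version relies on a shared static buffer that concurrent users would overwrite.

```c
#include <stdio.h>
#include <stddef.h>

/* Non-reentrant: uses a shared static buffer, so two users (or threads)
   calling it during the same period would overwrite each other's result. */
static char *to_upper_shared(const char *s) {
    static char buf[64];
    size_t i;
    for (i = 0; s[i] != '\0' && i < sizeof(buf) - 1; i++)
        buf[i] = (s[i] >= 'a' && s[i] <= 'z') ? (char)(s[i] - 32) : s[i];
    buf[i] = '\0';
    return buf;                 /* shared state escapes the function */
}

/* Reentrant: the code never modifies itself or any shared data; all local
   data lives in caller-supplied storage, so one copy of the code can be
   shared safely by many users at the same time. */
static void to_upper_reentrant(const char *s, char *out, size_t n) {
    size_t i;
    for (i = 0; s[i] != '\0' && i < n - 1; i++)
        out[i] = (s[i] >= 'a' && s[i] <= 'z') ? (char)(s[i] - 32) : s[i];
    out[i] = '\0';
}

int main(void) {
    char private_buf[64];                       /* each caller's own storage */
    to_upper_reentrant("hello", private_buf, sizeof(private_buf));
    printf("%s %s\n", to_upper_shared("shared"), private_buf);
    return 0;
}
```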

12) What is the difference between process and program?

A program is a passive set of instructions stored on disk, whereas a process is a program in execution; in other words, a program while running or executing is known as a process.


13) What is the use of paging in operating system?

Paging is used to solve the external fragmentation problem in an operating system. It allows the physical address space of a process to be non-contiguous, so memory is allocated in fixed-size pages and the data a process needs can be made available quickly.


14) What is the concept of demand paging?

Demand paging specifies that a page of a process is loaded into main memory only when it is actually referenced, rather than in advance; if a referenced page is not in memory, a page fault occurs and the page is brought in from disk.


15) What is the advantage of a multiprocessor system?

As the number of processors increases, you get a considerable increase in throughput. A multiprocessor system is also cost effective because the processors can share resources, and overall reliability increases.


16) What is virtual memory?

Virtual memory is a very useful memory management technique that enables processes to execute even when they are not completely loaded into main memory. It is especially useful when an executing program cannot fit in physical memory.


17) What is thrashing?

Thrashing is a phenomenon in a virtual memory scheme in which the processor spends most of its time swapping pages rather than executing instructions.


18) What are the four necessary and sufficient conditions behind the deadlock?

These are the 4 conditions:

1) Mutual Exclusion Condition: It specifies that the resources involved are non-sharable.

2) Hold and Wait Condition: It specifies that there must be a process that is holding a resource already allocated to it while waiting for additional resources that are currently being held by other processes.

3) No Preemption Condition: Resources cannot be taken away from processes while they are being used.

4) Circular Wait Condition: It is closely related to the second condition. It specifies that the processes in the system form a circular list or chain in which each process in the chain is waiting for a resource held by the next process in the chain.


19) What is a thread?

A thread is a basic unit of CPU utilization. It consists of a thread ID, program counter, register set and a stack.
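
A minimal sketch of creating and joining threads with POSIX threads (pthreads); the worker function and its message are illustrative, and the program is compiled with -pthread.

```c
#include <stdio.h>
#include <pthread.h>

/* Each thread shares the process's code and data but has its own
   thread ID, program counter, register set and stack. */
static void *worker(void *arg) {
    printf("hello from thread %ld\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);   /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}
```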


20) What is FCFS?

FCFS stands for First Come, First Served. It is a type of scheduling algorithm. In this scheme, the process that requests the CPU first is allocated the CPU first. Its implementation is managed with a FIFO queue.
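
A small sketch of how FCFS waiting and turnaround times can be computed, assuming all processes arrive at time 0; the burst times are made-up sample values.

```c
#include <stdio.h>

/* FCFS: processes are served strictly in arrival order (a FIFO queue).
   Burst times are illustrative and all processes are assumed to arrive at t=0. */
int main(void) {
    int burst[] = {6, 8, 3};                 /* CPU bursts of P1, P2, P3 */
    int n = sizeof(burst) / sizeof(burst[0]);
    int wait = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int turnaround = wait + burst[i];    /* finish time relative to arrival */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, wait, turnaround);
        total_wait += wait;
        total_tat  += turnaround;
        wait += burst[i];                    /* next process waits for all before it */
    }
    printf("avg waiting=%.2f avg turnaround=%.2f\n",
           (double)total_wait / n, (double)total_tat / n);
    return 0;
}
```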


21) What is SMP?

SMP stands for Symmetric MultiProcessing. It is the most common type of multiple processor system. In SMP, each processor runs an identical copy of the operating system, and these copies communicate with one another when required.


22) What is RAID? What are the different RAID levels?

RAID stands for Redundant Array of Independent Disks. It stores data redundantly across multiple disks to improve fault tolerance and overall performance.

Following are the different RAID levels:

RAID 0 - Striped disk array without fault tolerance

RAID 1 - Mirroring and duplexing

RAID 2 - Memory-style error-correcting codes

RAID 3 - Bit-interleaved Parity

RAID 4 - Block-interleaved Parity

RAID 5 - Block-interleaved distributed Parity

RAID 6 - P+Q Redundancy


23) What is deadlock? Explain.

Deadlock is a situation in which two processes are each waiting for the other to complete before they can proceed. As a result, both processes hang.


24) Which are the necessary conditions to achieve a deadlock?

There are 4 necessary conditions to achieve a deadlock:

  • Mutual Exclusion: At least one resource must be held in a non-sharable mode. If any other process requests this resource, then that process must wait for the resource to be released.
  • Hold and Wait: A process must be simultaneously holding at least one resource and waiting for at least one resource that is currently being held by some other process.
  • No preemption: Once a process is holding a resource ( i.e. once its request has been granted ), then that resource cannot be taken away from that process until the process voluntarily releases it.
  • Circular Wait: A set of processes { P0, P1, P2, . . ., PN } must exist such that every P[ i ] is waiting for P[ ( i + 1 ) % ( N + 1 ) ].

Note: This condition implies the hold-and-wait condition, but it is easier to deal with the conditions if the four are considered separately.


25) What is Banker's algorithm?

Banker's algorithm is used to avoid deadlock; it is one of the deadlock-avoidance methods. It is named after the banking system, in which a bank never allocates its available cash in such a way that it can no longer satisfy the requirements of all of its customers.
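
A compact sketch of the safety check at the heart of Banker's algorithm; the Available, Max, and Allocation matrices below are small made-up sample values, not taken from the text.

```c
#include <stdio.h>
#include <stdbool.h>

#define P 3   /* number of processes (sample size) */
#define R 3   /* number of resource types (sample size) */

/* Safety algorithm: try to find an order in which every process can finish
   with the currently available resources plus what finishing processes
   release. If such an order exists, the state is safe. */
int main(void) {
    int avail[R]    = {3, 3, 2};                        /* sample data */
    int max[P][R]   = {{7, 4, 3}, {3, 2, 2}, {4, 2, 2}};
    int alloc[P][R] = {{0, 1, 0}, {2, 0, 0}, {3, 0, 2}};
    int need[P][R];
    bool finished[P] = {false};

    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];

    int done = 0;
    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > avail[j]) { can_run = false; break; }
            if (can_run) {
                for (int j = 0; j < R; j++)
                    avail[j] += alloc[i][j];   /* process finishes, releases resources */
                finished[i] = true;
                printf("P%d can finish\n", i);
                done++;
                progress = true;
            }
        }
    }
    printf(done == P ? "State is SAFE\n" : "State is UNSAFE\n");
    return 0;
}
```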


26) What is the difference between logical address space and physical address space?

Logical address space is the set of addresses generated by the CPU, whereas physical address space is the set of addresses seen by the memory unit.


27) What is fragmentation?

Fragmentation is a phenomenon of memory wastage. It reduces the capacity and performance because space is used inefficiently.


28) How many types of fragmentation occur in Operating System?

There are two types of fragmentation:

  • Internal fragmentation: It occurs in systems that use fixed-size allocation units.
  • External fragmentation: It occurs in systems that use variable-size allocation units.

29) What is spooling?

Spooling is a process in which data is temporarily gathered to be used and executed by a device, program, or system. It is commonly associated with printing: when different applications send output to the printer at the same time, spooling keeps all these jobs in a disk file and queues them for the printer.


30) What is the difference between internal commands and external commands?

Internal commands are a built-in part of the operating system, while external commands are separate executable files stored in their own folder or directory.


31) What is semaphore?

A semaphore is a protected variable or abstract data type that is used to control access to a shared resource. The value of the semaphore indicates the status of that common resource.

There are two types of semaphore:

  • Binary semaphores
  • Counting semaphores
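
As a small illustration, this sketch uses POSIX counting semaphores (sem_init / sem_wait / sem_post) to limit how many threads may use a shared resource at the same time; the thread count and the initial semaphore value are arbitrary example values, and the program is compiled with -pthread on Linux.

```c
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

/* A counting semaphore initialized to 2: at most two threads may hold
   the "resource" at the same time. Values here are arbitrary examples. */
static sem_t slots;

static void *worker(void *arg) {
    long id = (long)arg;
    sem_wait(&slots);                 /* P / wait: acquire a slot or block */
    printf("thread %ld is using the resource\n", id);
    sem_post(&slots);                 /* V / signal: release the slot */
    return NULL;
}

int main(void) {
    pthread_t t[4];
    sem_init(&slots, 0, 2);           /* counting semaphore, initial value 2 */
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}
```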

32) What is a binary Semaphore?

A binary semaphore takes only the values 0 and 1 and is used to implement mutual exclusion and to synchronize concurrent processes.


33) What is Belady's Anomaly?

Belady's Anomaly is also called the FIFO anomaly. Usually, increasing the number of frames allocated to a process's virtual memory makes the process execute faster, because fewer page faults occur. Sometimes the reverse happens: the execution time increases even when more frames are allocated to the process. This is Belady's Anomaly, and it occurs only for certain page reference patterns.
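
A short sketch that counts page faults under FIFO replacement; the reference string is a sample commonly used to demonstrate the anomaly, where 4 frames produce more faults than 3.

```c
#include <stdio.h>
#include <stdbool.h>

/* Count page faults for FIFO replacement with a given number of frames. */
static int fifo_faults(const int *ref, int n, int frames) {
    int mem[16], head = 0, used = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int j = 0; j < used; j++)
            if (mem[j] == ref[i]) { hit = true; break; }
        if (hit) continue;
        faults++;
        if (used < frames) {
            mem[used++] = ref[i];          /* free frame available */
        } else {
            mem[head] = ref[i];            /* evict the oldest page */
            head = (head + 1) % frames;
        }
    }
    return faults;
}

int main(void) {
    /* A reference string often used to show Belady's anomaly. */
    int ref[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof(ref) / sizeof(ref[0]);
    printf("3 frames: %d faults\n", fifo_faults(ref, n, 3));  /* 9 faults  */
    printf("4 frames: %d faults\n", fifo_faults(ref, n, 4));  /* 10 faults */
    return 0;
}
```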


34) What is starvation in Operating System?

Starvation is a resource management problem in which a waiting process does not get the resources it needs for a long time because those resources are being allocated to other processes.


35) What is aging in Operating System?

Aging is a technique used to avoid starvation in a resource scheduling system, typically by gradually increasing the priority of a process the longer it waits.


36) What are the advantages of multithreaded programming?

A list of advantages of multithreaded programming:

  • It enhances responsiveness to users.
  • It allows resource sharing within the process.
  • It is economical, because threads are cheaper to create and switch than processes.
  • It makes full use of multiprocessor architectures.

37) What is the difference between logical and physical address space?

Logical address specifies the address generated by the CPU, whereas physical address refers to the address seen by the memory unit.



38) What are overlays?

Overlays make it possible for a process to be larger than the amount of memory allocated to it. They ensure that only the instructions and data needed at any given time are kept in memory.


39) When does thrashing occur?

Thrashing is an instance of high paging activity. It occurs when the system spends more time paging than executing.


40) What is a Batch Operating System?

A Batch Operating System is a type of operating system that creates batches of jobs or processes for execution.

Each batch contains jobs or processes that are very similar in the procedure they follow. An operator groups comparable jobs or processes that have the same requirements into batches; the operator is in charge of this grouping task.

The Batch Operating System follows the First Come, First Served principle.


41) Does the Batch Operating System interact with the computer to process the needs of the jobs or processes it is given?

No, this is not the kind of operating system that interacts with the computer directly. That job is taken up by the operator present in the Batch Operating System.


42) What are the advantages of Batch Operating System?

Advantages of Batch Operating System:

  1. The idle time of the operating system (the time it is at rest) is very small.
  2. Very big tasks can be managed easily with the help of Batch Operating Systems.
  3. Many users can share a Batch Operating System.
  4. The processors of a batch system know how long a job will take once it is in the queue, even though estimating completion time in advance is difficult.

43) What are the disadvantages of Batch Operating System?

Disadvantages of Batch Operating System:

  1. If any work fails in the Batch Operating System, the other jobs will have to wait for an indeterminate period of time.
  2. Batch Operating Systems are very challenging to debug.
  3. Batch Operating Systems can be expensive at times.
  4. The computer operators who use Batch Operating Systems must be knowledgeable about batch systems.

44) Where is the Batch Operating System used in real life?

They are used in Payroll System and for generating Bank Statements.


45) What is the Function of Operating System?

The most important functions of Operating Systems are:

  1. File Management
  2. Job Management
  3. Process Management
  4. Device Management
  5. Memory Management

46) What are the Services provided by the Operating System?

The services provided by the Operating Systems are:

  1. Security to your computer
  2. Protects your computer from external threats
  3. File Management
  4. Program Execution
  5. Helps in Controlling Input Output Devices
  6. Useful in Program Creation
  7. Helpful in Error Detection
  8. Operating System helps in communicating between devices
  9. Analyzes the Performance of all devices

47) What is a System Call in Operating Systems?

Programs can communicate with the operating system by making a system call. When a computer application requests anything from the kernel of the operating system, it performs a system call. System calls use Application Programming Interfaces (APIs) to deliver operating system services to user programs.
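
As a small illustration in C: write() and getpid() are standard POSIX wrappers around system calls, while syscall(SYS_getpid) invokes a system call by number directly (the latter is Linux-specific).

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    /* write() is a thin library wrapper around the write system call */
    write(STDOUT_FILENO, "hello via system call\n", 22);

    /* getpid() asks the kernel for this process's ID */
    printf("pid from getpid(): %d\n", (int)getpid());

    /* On Linux, syscall() invokes a system call by its number directly */
    long pid = syscall(SYS_getpid);
    printf("pid from syscall(SYS_getpid): %ld\n", pid);
    return 0;
}
```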


48) What are the Types of System Calls in Operating Systems?

The System Calls in Operating Systems are:

  1. Communication
  2. Information Maintenance
  3. File Management
  4. Device Management
  5. Process Control

49) What are the functions which are present in the Process Control System Call?

The Functions present in Process Control System Calls are:

  1. Create
  2. Allocate
  3. Abort
  4. End
  5. Terminate
  6. Free Memory

50) What are the functions which are present in the File Management System Call?

The Functions present in File Management System Calls are:

  1. Create
  2. Open
  3. Read
  4. Close
  5. Delete

51) What is a Process in Operating Systems?

A process is essentially a program that is being executed on the operating system. A process must be carried out in a sequential manner.

The fundamental unit of work that has to be implemented in the system is called a process.

A process, being an active program, is the basis of all computing. Although closely related, a process is not the same as program code: a process is an "active" entity, whereas the program is often thought of as a "passive" entity.
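
A brief sketch of how a new process is created on a Unix-like system with fork(); the parent and child are two separate active processes running the same program code.

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();               /* create a new (child) process */
    if (pid == 0) {
        printf("child  pid=%d\n", (int)getpid());
    } else if (pid > 0) {
        printf("parent pid=%d, child=%d\n", (int)getpid(), (int)pid);
        wait(NULL);                   /* wait for the child to terminate */
    } else {
        perror("fork");
        return 1;
    }
    return 0;
}
```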


52) What are the types of Processes in Operating Systems?

The types of Operating System processes are:

  1. Operating System Process
  2. User Process

53) What is Process Control Block (PCB)?

A data structure called the Process Control Block holds details about the process associated with it. The term "process control block" can also be referred to as a "task control block," a "process table entry," and so on.

Because all bookkeeping for a process is done through its Process Control Block (PCB), the PCB is crucial for process management. It also reflects the current state of each process in the operating system.


54) What are the Data Items in Process Control Block?

The Data Items in Process Control Block are:

  1. Process State
  2. Process Number
  3. Program Counter
  4. Registers
  5. Memory Limits
  6. List of Open Files

55) What are the other fields used in the Process Control Block?

The other fields used in the Process Control Block are:

  1. Central Processing Unit (CPU) Scheduling Information
  2. Memory Management Information
  3. Accounting Information
  4. Input Output Status Information

56) What are the differences between Thread and Process?

Thread | Process
Threads are executed within the same process. | Processes are executed in separate memory spaces.
Threads are not independent of each other. | Processes are independent of each other.

57) What are the advantages of Threads in Operating Systems?

The advantages of Threads in Operating System are:

  1. Thread switching is much faster than process switching.
  2. Communication between threads is much easier.
  3. The throughput of the system is increased if the process is divided into multiple threads.
  4. When a thread in a multi-threaded process completes its execution, its output can be returned right away.

58) What are the disadvantages of Threads in Operating Systems?

The disadvantages of Threads in Operating System are:

  1. The code becomes more challenging to maintain and debug as there are more threads.
  2. The process of creating threads uses up system resources like memory and CPU.
  3. Because unhandled exceptions might cause the application to crash, we must manage them inside the worker method.

59) What are the types of Threads in Operating System?

The types of Threads in Operating System are:

  1. User Level Threads
  2. Kernel Level Threads

60) What is a User Level Thread?

User-level threads are implemented at the user level, so the kernel is unaware of them.

They are treated like single-threaded processes by the kernel. User-level threads are smaller and quicker to manage than kernel-level threads.

A user-level thread is represented by a small Process Control Block (PCB), a program counter (PC), and a stack.

User-level threads do not require kernel involvement for synchronization.


61) What are the advantages and disadvantages of User Level Threads?

The following are a few advantages of user-level threads:

  1. Creating user-level threads is quicker and simpler than creating kernel-level threads. They are also simpler to handle.
  2. Any operating system may be used to execute user-level threads.
  3. Thread switching in user-level threads does not need kernel mode privileges.

The following are a few drawbacks of user-level threads:

  1. Multiprocessing cannot be used effectively by multithreaded applications in user-level threads.
  2. If one user-level thread engages in a blocking action, the entire process is halted.

62) What is Kernel Level Thread?

Kernel Level Threads are the threads which are handled by the Operating System directly. The kernel controls both the process's threads and the context information for each one. As a result, kernel-level threads execute more slowly than user-level threads.


63) What are Kernel Level Threads Advantages and Disadvantages?

The following are some benefits of using kernel-level threads:

  1. Kernel-level threads allow the scheduling of many instances of the same process across several CPUs.
  2. The kernel functions also support multithreading.
  3. Another thread of the same process may be scheduled by the kernel if a kernel-level thread is stalled.

The following list includes several drawbacks of kernel-level threads:

  1. To pass control from one thread in a process to another, a mode switch to kernel mode is necessary.
  2. Compared to user-level threads, kernel-level threads take longer to create and maintain.

64) What is Process Scheduling In Operating Systems?

The task of the process manager that deals with removing the active process from the CPU and choosing a different process based on a certain strategy is known as process scheduling.

An integral component of a multiprogramming operating system is process scheduling. Such operating systems allow numerous processes to be loaded into executable memory at the same time, and the loaded processes share the CPU using time multiplexing.


65) What are types of Process Scheduling Techniques in Operating Systems?

The types of Process Scheduling Techniques in Operating Systems are:

  1. Pre Emptive Process Scheduling
  2. Non Pre Emptive Process Scheduling

66) What is Pre Emptive Process Scheduling in Operating Systems?

In Pre Emptive Process Scheduling, the OS allots the resources to a process for a predetermined period of time. During resource allocation, the process transitions from the running state to the ready state or from the waiting state to the ready state. This switching happens because the CPU may give precedence to other processes and replace the currently active process with a higher-priority one.


67) What is Non Pre Emptive Process Scheduling in Operating Systems?

In this case of Non Pre Emptive Process Scheduling, the resource cannot be withdrawn from a process before the process has finished running. When a running process finishes and transitions to the waiting state, resources are switched.


68) What is Context Switching?

Context switching is a technique the operating system uses to switch a process from one state to another so that it can carry out its intended function using the system's CPUs.

When the system performs a switch, it saves the state of the old running process in the form of registers and allots the CPU to the new process so it can carry out its operations.

The old process must wait in a ready queue while the new one runs in the system. The old process later resumes execution from the point at which it was interrupted.

Context switching enables an operating system to support numerous workloads at once without extra processors, by allowing several processes to share a single CPU.


69) What is the use of Dispatcher in Operating Systems?

After the scheduler completes process scheduling, a special program called the dispatcher comes into the picture. The dispatcher moves a process to the desired state or queue once the scheduler has finished its selection task. The dispatcher is the module that gives a process control of the CPU after the short-term scheduler has chosen it.


70) What is the Difference between Dispatcher and Scheduler?

Dispatcher | Scheduler
The dispatcher moves the selected process to the desired state or queue. | The scheduler selects the process that should be executed next.
The time taken by the dispatcher is known as dispatch latency. | The time taken by the scheduler is usually not counted separately.
The dispatcher is dependent on the scheduler. | The scheduler is not dependent on the dispatcher.
The dispatcher performs the context switch. | The scheduler only admits the process to the ready queue.

71) What is Process Synchronization in Operating Systems?

Process synchronization, often known as synchronization, is the method an operating system uses to manage processes that share the same memory space. By limiting the number of processes that may modify the shared memory at once via hardware or variables, it helps ensure the consistency of the data.


72) What are the Classical Problems of Process Synchronization?

The Classical Problems of Process Synchronization are:

  1. Bounded Buffer Problem (Producer-Consumer Problem)
  2. Dining Philosopher's Problem
  3. Readers and writers Problem
  4. Sleeping Barber Problem

73) What is Peterson's Solution?

Peterson's solution is a classic solution to the critical section problem. The critical section problem requires that no two processes or jobs alter or modify the value of a shared resource at the same time.
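
A classic two-contender sketch of Peterson's solution using shared flag and turn variables, shown here with pthreads standing in for processes; on modern hardware a production version would also need memory barriers, which this sketch omits.

```c
#include <stdio.h>
#include <pthread.h>

/* Peterson's solution for two contenders (0 and 1). */
static volatile int flag[2] = {0, 0};   /* flag[i]: contender i wants to enter */
static volatile int turn = 0;           /* whose turn it is to yield */
static int counter = 0;                 /* shared resource */

static void enter_region(int i) {
    int other = 1 - i;
    flag[i] = 1;                        /* announce interest */
    turn = other;                       /* give priority to the other contender */
    while (flag[other] && turn == other)
        ;                               /* busy-wait until it is safe to enter */
}

static void leave_region(int i) {
    flag[i] = 0;                        /* no longer interested */
}

static void *worker(void *arg) {
    int id = (int)(long)arg;
    for (int k = 0; k < 100000; k++) {
        enter_region(id);
        counter++;                      /* critical section: only one at a time */
        leave_region(id);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, (void *)0L);
    pthread_create(&b, NULL, worker, (void *)1L);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}
```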


74) What are the Operations in Semaphores?

The Operations in Semaphores are:

  1. wait() or P operation
  2. signal() or V operation
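
A rough sketch of what wait (P) and signal (V) do to a semaphore's value, written here as an illustrative user-space semaphore built from a pthread mutex and condition variable; this is not how an OS kernel implements semaphores internally, and the type and function names are made up.

```c
#include <pthread.h>

/* Illustrative semaphore built from a mutex and a condition variable,
   to show what wait (P) and signal (V) do to the counter. */
typedef struct {
    int value;
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
} sema_t;

void sema_init(sema_t *s, int initial) {
    s->value = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

/* P / wait(): block while the value is zero, then decrement it. */
void sema_wait(sema_t *s) {
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

/* V / signal(): increment the value and wake one waiting process, if any. */
void sema_signal(sema_t *s) {
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->nonzero);
    pthread_mutex_unlock(&s->lock);
}

int main(void) {
    sema_t s;
    sema_init(&s, 1);
    sema_wait(&s);     /* value goes 1 -> 0 */
    sema_signal(&s);   /* value goes 0 -> 1 */
    return 0;
}
```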

75) What is a Critical Section Problem?

The Critical Section is the section of a program that attempts to access shared resources.

Because no more than one process may execute in the critical section at a time, the operating system faces the problem of deciding which processes to admit to the critical section and which to keep out.


76) What are the methods of Handling Deadlocks?

The methods of handling deadlock are:

  1. Deadlock Prevention
  2. Deadlock Detection and Recovery
  3. Deadlock Avoidance
  4. Deadlock Ignorance

77) How can we avoid Deadlock?

We can avoid Deadlock by using Banker's Algorithm.


78) How can we detect and recover the Deadlock occurred in Operating System?

In this approach, the system is first allowed to enter a deadlock state; the deadlock is then detected, and recovery takes place.

We can recover from the deadlock state by terminating or aborting the deadlocked processes one at a time.

Process preemption, in which resources are taken away from some deadlocked processes, is another technique used for deadlock recovery.


79) What is paging in Operating Systems?

Paging is a storage mechanism used to retrieve processes from secondary memory into main memory.

The main memory is divided into small fixed-size blocks called frames, and a process's logical memory is divided into blocks of the same size called pages. When a process is brought into main memory, each of its pages is stored in one frame of memory.

It is important that pages and frames have equal sizes, which keeps the mapping simple and allows complete utilization of memory.


80) What is Address Translation in Paging?

Two distinct kinds of memory addresses are used in the paging process: logical addresses and physical addresses. The logical address is the address that the CPU generates for each page, while the physical address is the actual location of the frame in main memory where the page is placed. To translate a logical address into a physical address, we need a technique known as address translation, which is carried out using the page table.
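
A tiny numeric sketch of the translation, assuming a 4 KB page size and a made-up page table: the logical address is split into a page number and an offset, the page table maps the page number to a frame number, and the physical address is frame * page size + offset.

```c
#include <stdio.h>

#define PAGE_SIZE 4096                      /* assumed page size: 4 KB */

int main(void) {
    /* Illustrative page table: page_table[page] = frame number. */
    int page_table[] = {5, 2, 9, 1};

    unsigned logical = 8196;                /* sample logical address */
    unsigned page    = logical / PAGE_SIZE; /* = 2 */
    unsigned offset  = logical % PAGE_SIZE; /* = 4 */

    unsigned frame    = page_table[page];           /* page 2 -> frame 9 */
    unsigned physical = frame * PAGE_SIZE + offset; /* 9*4096 + 4 = 36868 */

    printf("logical %u -> page %u, offset %u -> physical %u\n",
           logical, page, offset, physical);
    return 0;
}
```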


81) What is a Translation Lookaside Buffer?

The Translation Lookaside Buffer (TLB) is a small, fast cache of recent page-table entries. Whenever a logical address is created by the Central Processing Unit (CPU), the page number is looked up in the TLB; along with the page number, the corresponding frame number is stored there, so the frame can be found without consulting the full page table.


82) What are Page Replacement Algorithms in Operating Systems?

The Page Replacement Algorithms in Operating Systems are:

  1. First In First Out
  2. Optimal
  3. Least Recently Used
  4. Most Recently Used

83) In which Page Replacement Algorithm does Belady's Anomaly occur?

Belady's Anomaly occurs in the First In First Out (FIFO) page replacement algorithm.


84) What are Process Scheduling Algorithms in Operating System?

The Process Scheduling Algorithms in Operating Systems are:

  1. First Come First Serve CPU Scheduling Algorithm
  2. Priority Scheduling CPU Scheduling Algorithm
  3. Shortest Job First CPU Scheduling Algorithm
  4. Round Robin CPU Scheduling Algorithm
  5. Longest Job First CPU Scheduling Algorithm
  6. Shortest Remaining Time First CPU Scheduling Algorithm
  7. Multiple Queue CPU Scheduling Algorithm

85) What is Round Robin CPU Scheduling Algorithm?

Round Robin is a CPU scheduling algorithm that cycles through the ready processes, assigning each one a fixed time slot (time quantum). It is essentially the First Come, First Served scheduling method made preemptive. The Round Robin algorithm is frequently used in time-sharing systems.
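
A compact sketch of Round Robin scheduling with a fixed time quantum, assuming all processes arrive at time 0; the burst times and the quantum are sample values.

```c
#include <stdio.h>

/* Round Robin: each process gets at most `quantum` units of CPU per turn,
   then goes to the back of the queue. Burst times are sample values and
   all processes are assumed to arrive at time 0. */
int main(void) {
    int remaining[] = {5, 3, 8};
    int n = sizeof(remaining) / sizeof(remaining[0]);
    int quantum = 2, time = 0, left = n;

    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;
            remaining[i] -= slice;
            printf("t=%2d: P%d ran %d unit(s), %d left\n",
                   time, i + 1, slice, remaining[i]);
            if (remaining[i] == 0) left--;   /* process has finished */
        }
    }
    return 0;
}
```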


86) What is Disk Scheduling in Operating Systems?

Operating systems use disk scheduling to plan when Input or Output requests for the disk will arrive. Input or Output scheduling is another name for disk scheduling.


87) What is the importance of Disk Scheduling in Operating Systems?

  1. The disk controller can serve only one Input or Output request at a time, even though several requests may arrive from different processes. The remaining requests must therefore be scheduled and made to wait in a waiting queue.
  2. The movement of the disk arm may increase if consecutive requests are placed far apart from one another.
  3. Since hard disks are among the slowest components of the computer system, they must be accessed efficiently.

88) What are the Disk Scheduling Algorithms used in Operating Systems?

The Disk Scheduling Algorithms used in Operating Systems are:

  1. First Come First Serve
  2. Shortest Seek Time First
  3. LOOK
  4. SCAN
  5. C SCAN
  6. C LOOK
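
As a rough illustration, the sketch below compares total disk-head movement for FCFS and SSTF on a sample request queue; the cylinder numbers and the initial head position (53) are made-up example values.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define N 6

/* Sample request queue (cylinder numbers) and initial head position. */
static const int requests[N] = {98, 183, 37, 122, 14, 65};
static const int start = 53;

static int fcfs_movement(void) {
    int head = start, total = 0;
    for (int i = 0; i < N; i++) {
        total += abs(requests[i] - head);    /* serve strictly in arrival order */
        head = requests[i];
    }
    return total;
}

static int sstf_movement(void) {
    int head = start, total = 0;
    bool served[N] = {false};
    for (int done = 0; done < N; done++) {
        int best = -1;
        for (int i = 0; i < N; i++)          /* pick the closest pending request */
            if (!served[i] && (best < 0 ||
                abs(requests[i] - head) < abs(requests[best] - head)))
                best = i;
        total += abs(requests[best] - head);
        head = requests[best];
        served[best] = true;
    }
    return total;
}

int main(void) {
    printf("FCFS total head movement: %d\n", fcfs_movement());
    printf("SSTF total head movement: %d\n", sstf_movement());
    return 0;
}
```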

89) What are Monitors in the context of Operating Systems?

Monitors are a programming-language feature that helps provide controlled access to shared data. A monitor is a collection of shared data structures, procedures, and synchronization between parallel procedure calls; a monitor is therefore sometimes referred to as a synchronization construct. Some of the languages that support monitors are Java, C#, Visual Basic, Ada, and Concurrent Euclid. Other processes cannot access a monitor's internal variables directly, but they can invoke its procedures.