Data Structure MCQs (401-500)
401. Hashing is used to:
A) Perform quick search and retrieval
B) Sort data
C) Traverse data sequentially
D) None
Answer: A
402. The function used to compute an address from a key in hashing is called:
A) Hash function
B) Mapping function
C) Index function
D) Search function
Answer: A
403. A good hash function should:
A) Minimize collisions
B) Produce same hash for all keys
C) Be complex
D) Use random numbers
Answer: A
404. In hashing, collision occurs when:
A) Two keys map to the same location
B) Same key maps to different locations
C) Table is full
D) None
Answer: A
405. Which method handles collisions by finding the next empty slot?
A) Linear probing
B) Chaining
C) Quadratic probing
D) None
Answer: A
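The probe sequence from question 405 can be sketched in Python; the table size 7 and the sample keys here are illustrative, not part of the question:

```python
def linear_probe_insert(table, key, m):
    """Insert key into an open-addressed table of size m via linear probing."""
    i = key % m                      # home slot: simple modulo hash
    for _ in range(m):
        if table[i] is None:         # next empty slot found -> place the key
            table[i] = key
            return i
        i = (i + 1) % m              # collision: step to the adjacent slot
    raise RuntimeError("table full")

table = [None] * 7
for k in (10, 17, 3):                # 10 % 7 == 17 % 7 == 3 % 7 == 3: collisions
    linear_probe_insert(table, k, 7)
# 10 lands in slot 3; 17 probes to slot 4; 3 probes past 3 and 4 into slot 5
```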
406. Quadratic probing is used to:
A) Reduce clustering
B) Increase collisions
C) Simplify searching
D) None
Answer: A
407. In double hashing, the second hash function must never evaluate to:
A) 0
B) 1
C) Key value
D) None
Answer: A
408. Load factor in a hash table is defined as:
A) n / m (number of keys / table size)
B) m / n
C) n × m
D) None
Answer: A
409. The average time complexity for search in a hash table is:
A) O(1)
B) O(n)
C) O(log n)
D) O(n²)
Answer: A
410. Chaining in hashing uses:
A) Linked lists for collision resolution
B) Arrays
C) Stacks
D) None
Answer: A
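A minimal sketch of the chaining scheme from question 410, using a Python list per bucket as a stand-in for a linked list (the class name and bucket count are illustrative):

```python
class ChainedHashTable:
    """Separate chaining: each bucket holds a chain of (key, value) pairs."""
    def __init__(self, m=8):
        self.m = m
        self.buckets = [[] for _ in range(m)]

    def insert(self, key, value):
        bucket = self.buckets[hash(key) % self.m]
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: update in place
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # colliding keys simply extend the chain

    def search(self, key):
        for k, v in self.buckets[hash(key) % self.m]:
            if k == key:
                return v
        return None

t = ChainedHashTable(m=4)
t.insert(1, 'a')
t.insert(5, 'b')    # 1 % 4 == 5 % 4 == 1: both keys chain in bucket 1
```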
411. Memory fragmentation occurs when:
A) Memory is used inefficiently
B) Memory is full
C) Stack overflows
D) None
Answer: A
412. In dynamic memory allocation, “free list” refers to:
A) List of available memory blocks
B) List of used blocks
C) Stack area
D) None
Answer: A
413. The memory allocation strategy that assigns the smallest free block large enough for the request is:
A) Best fit
B) Worst fit
C) First fit
D) Next fit
Answer: A
414. The “first fit” strategy allocates:
A) First available block that fits the request
B) Largest block
C) Smallest block
D) None
Answer: A
415. The “worst fit” strategy chooses:
A) The largest available block
B) The smallest block
C) First free block
D) None
Answer: A
416. Garbage collection is used to:
A) Reclaim memory no longer in use
B) Free up CPU
C) Compress data
D) None
Answer: A
417. Stack overflow occurs when:
A) Stack pointer exceeds stack limit
B) Stack pointer underflows
C) No recursion used
D) None
Answer: A
418. Memory leaks occur when:
A) Allocated memory is not freed
B) Stack overflows
C) Heap is full
D) None
Answer: A
419. The heap memory is used for:
A) Dynamic memory allocation
B) Function calls
C) Static variables
D) None
Answer: A
420. The stack memory is used for:
A) Function calls and local variables
B) Dynamic data
C) Static data
D) None
Answer: A
421. Recursive functions use:
A) Stack
B) Heap
C) Queue
D) None
Answer: A
422. Every recursive function can be written as:
A) Iterative function
B) Non-returning function
C) Static function
D) None
Answer: A
423. Tail recursion can be optimized into:
A) Iteration
B) Recursion
C) Stack operations
D) None
Answer: A
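Questions 422 and 423 can be illustrated with factorial: the tail-recursive form does nothing after the recursive call, so it mechanically rewrites into the loop below. (Note that CPython does not actually perform this optimization; the rewrite is shown by hand.)

```python
def fact_tail(n, acc=1):
    """Tail-recursive factorial: the recursive call is the last action taken."""
    if n <= 1:                       # base case terminates the recursion
        return acc
    return fact_tail(n - 1, acc * n)

def fact_iter(n):
    """The same computation as iteration -- what tail-call optimization yields."""
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc
```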
424. The base case in recursion:
A) Terminates recursion
B) Causes infinite recursion
C) Doubles the calls
D) None
Answer: A
425. The recursive function’s time complexity depends on:
A) Number of recursive calls
B) Number of loops
C) Function name
D) None
Answer: A
426. Mutual recursion occurs when:
A) Two functions call each other
B) Function calls itself directly
C) Function has no parameters
D) None
Answer: A
427. Recursion is preferred when:
A) The problem can be divided into similar subproblems
B) Problem is iterative
C) Loops are unavailable
D) None
Answer: A
428. The factorial of a number using recursion requires:
A) Stack frame for each call
B) Queue
C) Array
D) None
Answer: A
429. Infinite recursion leads to:
A) Stack overflow
B) Heap overflow
C) Memory leak
D) None
Answer: A
430. Recursion is not preferred when:
A) Stack size is limited
B) Problem is small
C) Base case exists
D) None
Answer: A
431. Computational complexity measures:
A) Resource usage by an algorithm
B) Programming difficulty
C) Number of variables
D) None
Answer: A
432. Space complexity refers to:
A) Amount of memory used
B) Execution time
C) CPU utilization
D) None
Answer: A
433. If an algorithm has time complexity O(n³), doubling the input size increases runtime by:
A) 8 times
B) 2 times
C) 4 times
D) None
Answer: A
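The arithmetic behind question 433 is a direct ratio:

```latex
% doubling the input of an O(n^3) algorithm:
\frac{T(2n)}{T(n)} = \frac{(2n)^3}{n^3} = \frac{8n^3}{n^3} = 8
```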
434. If T(n) = 3T(n/2) + n, time complexity is:
A) O(n^(log₂3))
B) O(n log n)
C) O(n²)
D) None
Answer: A
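The answer to question 434 follows from the master theorem (question 450) with $a = 3$, $b = 2$, $f(n) = n$:

```latex
% T(n) = a T(n/b) + f(n),  a = 3, b = 2, f(n) = n
n^{\log_b a} = n^{\log_2 3} \approx n^{1.585}
% f(n) = n = O(n^{\log_2 3 - \varepsilon}) for some \varepsilon > 0  (case 1)
\Rightarrow\; T(n) = \Theta\!\left(n^{\log_2 3}\right)
```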
435. The notation “O(1)” means:
A) Constant time
B) Linear time
C) Logarithmic time
D) None
Answer: A
436. If an algorithm has both best and worst cases as O(n), it is:
A) Linear
B) Quadratic
C) Logarithmic
D) None
Answer: A
437. For large n, the term that grows fastest among n, n log n, and n² is:
A) n²
B) n
C) n log n
D) None
Answer: A
438. Algorithm analysis helps to:
A) Compare efficiency
B) Debug errors
C) Reduce syntax errors
D) None
Answer: A
439. The best-case complexity of binary search is:
A) O(1)
B) O(log n)
C) O(n)
D) None
Answer: A
440. The worst-case complexity of binary search is:
A) O(log n)
B) O(1)
C) O(n)
D) None
Answer: A
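Questions 439 and 440 can both be read off one implementation: a hit at the first midpoint is the O(1) best case, and halving the range each step gives the O(log n) worst case. The sample array is illustrative:

```python
def binary_search(a, target):
    """Return the index of target in sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:     # best case: target sits at the first midpoint
            return mid
        if a[mid] < target:
            lo = mid + 1         # discard the lower half
        else:
            hi = mid - 1         # discard the upper half
    return -1

a = [2, 5, 8, 12, 16, 23, 38]
# a[3] == 12 is the first midpoint, so searching for 12 succeeds in one probe
```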
441. Time complexity of merge sort in all cases is:
A) O(n log n)
B) O(n²)
C) O(n)
D) None
Answer: A
442. The best-case complexity of insertion sort is:
A) O(n)
B) O(n²)
C) O(log n)
D) None
Answer: A
443. The worst-case complexity of quick sort is:
A) O(n²)
B) O(n log n)
C) O(log n)
D) None
Answer: A
444. The average-case complexity of quick sort is:
A) O(n log n)
B) O(n²)
C) O(n)
D) None
Answer: A
445. The space complexity of recursive quick sort is:
A) O(log n)
B) O(n)
C) O(1)
D) None
Answer: A
446. Merge sort is stable because:
A) Equal elements maintain their order
B) It uses recursion
C) It is in-place
D) None
Answer: A
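The stability claim in question 446 hinges on one comparison: merging with `<=` always takes from the left half on ties, so equal keys keep their input order. A minimal sketch (the key-tagged pairs are illustrative):

```python
def merge_sort(a, key=lambda x: x):
    """Stable merge sort returning a new sorted list."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid], key), merge_sort(a[mid:], key)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if key(left[i]) <= key(right[j]):   # "<=" preserves order of equal keys
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

pairs = [(2, 'a'), (1, 'b'), (2, 'c'), (1, 'd')]
sorted_pairs = merge_sort(pairs, key=lambda p: p[0])
# equal keys keep input order: (1,'b') before (1,'d'), (2,'a') before (2,'c')
```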
447. Heap sort is not stable because:
A) Order of equal elements may change
B) It uses extra space
C) It is recursive
D) None
Answer: A
448. Counting sort works only for:
A) Non-negative integer keys
B) Strings
C) Floats
D) None
Answer: A
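Question 448's restriction is visible in the code: counting sort indexes an array by the key itself, which is why keys must be non-negative integers in a known range. A minimal sketch:

```python
def counting_sort(a, k):
    """Sort non-negative integers in the range [0, k]; O(n + k) time."""
    count = [0] * (k + 1)
    for x in a:
        count[x] += 1                # the key itself is used as an array index
    out = []
    for value, c in enumerate(count):
        out.extend([value] * c)      # emit each key count[value] times
    return out
```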
449. Bucket sort is efficient when:
A) Input is uniformly distributed
B) Input is random
C) Input is sorted
D) None
Answer: A
450. The master theorem is applicable for:
A) Divide and conquer recurrences
B) Linear recursions
C) Hashing functions
D) None
Answer: A
451. The class of problems that can be solved in polynomial time is called:
A) P
B) NP
C) NP-hard
D) NP-complete
Answer: A
452. Problems for which a solution can be verified in polynomial time belong to:
A) NP
B) P
C) NP-hard
D) None
Answer: A
453. Every problem in P is also in:
A) NP
B) NP-hard
C) NP-complete
D) None
Answer: A
454. NP-complete problems are both:
A) NP and NP-hard
B) P and NP
C) P and NP-easy
D) None
Answer: A
455. If any NP-complete problem can be solved in polynomial time, then:
A) P = NP
B) P ≠ NP
C) NP = NP-hard
D) None
Answer: A
456. The traveling salesman problem (TSP) belongs to:
A) NP-hard
B) P
C) Linear problems
D) None
Answer: A
457. The Boolean satisfiability (SAT) problem is:
A) NP-complete
B) P
C) Linear
D) None
Answer: A
458. NP-hard problems are:
A) At least as hard as NP-complete problems
B) Easier than NP problems
C) Always solvable in polynomial time
D) None
Answer: A
459. The problem “sorting numbers” belongs to which class?
A) P
B) NP
C) NP-hard
D) None
Answer: A
460. The Hamiltonian cycle problem is:
A) NP-complete
B) P
C) Linear
D) None
Answer: A
461. Hash tables provide average time complexity for search as:
A) O(1)
B) O(log n)
C) O(n)
D) O(n²)
Answer: A
462. Perfect hashing ensures:
A) No collisions
B) Minimum collisions
C) Random collisions
D) None
Answer: A
463. Universal hashing minimizes collisions by:
A) Choosing hash function randomly from a family
B) Using the same hash function
C) Storing all data sequentially
D) None
Answer: A
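The "family" in question 463 can be made concrete with the classic Carter-Wegman construction ((a·k + b) mod p) mod m; the choice of p = 2³¹ − 1 (a Mersenne prime) here is illustrative, and p must exceed every key:

```python
import random

def make_universal_hash(m, p=2_147_483_647):
    """Draw one function at random from the family ((a*k + b) mod p) mod m."""
    a = random.randrange(1, p)       # a must be nonzero
    b = random.randrange(0, p)
    return lambda k: ((a * k + b) % p) % m

h = make_universal_hash(16)
# h maps any integer key in [0, p) to a slot in [0, 16)
```

Because a and b are drawn afresh each run, no fixed set of keys can be adversarially bad for every chosen function, which is the point of universal hashing.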
464. In open addressing, when a table is half full, the load factor is:
A) 0.5
B) 0.75
C) 1
D) None
Answer: A
465. Rehashing is done when:
A) Load factor exceeds threshold
B) Table becomes empty
C) Keys are unique
D) None
Answer: A
466. Separate chaining is preferred when:
A) Table size is small
B) Collisions are frequent
C) Keys are sorted
D) None
Answer: A
467. In a hash table of size m, with n keys, if n > m, then:
A) Collisions must occur
B) No collision occurs
C) Table is empty
D) None
Answer: A
468. The birthday paradox is used to estimate:
A) Collision probability in hashing
B) Time complexity
C) Memory usage
D) None
Answer: A
469. The modulo operation in hashing may cause clustering when:
A) Table size is a power of 2
B) Table size is prime
C) Table size is odd
D) None
Answer: A
470. To reduce collisions, the table size should be:
A) A prime number
B) A power of 2
C) An even number
D) None
Answer: A
471. A directed acyclic graph (DAG) can be used to represent:
A) Scheduling problems
B) Cyclic dependencies
C) Circular references
D) None
Answer: A
472. In Dijkstra’s algorithm, negative edge weights:
A) Are not allowed
B) Are allowed
C) Are ignored
D) None
Answer: A
473. Bellman-Ford algorithm works correctly even with:
A) Negative edge weights
B) Negative cycles
C) Loops
D) None
Answer: A
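The contrast in questions 472 and 473 can be sketched directly: Bellman-Ford relaxes every edge V − 1 times, which tolerates negative weights, and a final extra pass detects negative cycles (which neither algorithm can price). The sample graph is illustrative:

```python
def bellman_ford(n, edges, src):
    """Single-source shortest paths over edges [(u, v, w)]; allows negative w."""
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):               # relax every edge n-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                # one more pass exposes a negative cycle
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from source")
    return dist

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3)]   # negative edge, no negative cycle
# bellman_ford(3, edges, 0) -> [0, 4, 1]: the path 0->1->2 beats the direct edge
```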
474. The Floyd-Warshall algorithm computes:
A) All-pairs shortest paths
B) Single-source shortest path
C) Spanning tree
D) None
Answer: A
475. The adjacency matrix representation of a graph requires space:
A) O(V²)
B) O(V + E)
C) O(E log V)
D) None
Answer: A
476. The adjacency list representation of a graph requires space:
A) O(V + E)
B) O(V²)
C) O(V log V)
D) None
Answer: A
477. A graph with self-loops cannot be:
A) Simple
B) Connected
C) Directed
D) None
Answer: A
478. A graph with no edges is called:
A) Null graph
B) Empty graph
C) Isolated graph
D) None
Answer: A
479. A complete undirected graph with n vertices has:
A) n(n – 1) / 2 edges
B) n² edges
C) n edges
D) None
Answer: A
480. A tree with n vertices always has:
A) n – 1 edges
B) n edges
C) n + 1 edges
D) None
Answer: A
481. The greedy method works by:
A) Making locally optimal choices
B) Exploring all possibilities
C) Using recursion
D) None
Answer: A
482. Huffman coding is based on which algorithmic paradigm?
A) Greedy method
B) Dynamic programming
C) Divide and conquer
D) None
Answer: A
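The greedy step in question 482 is "always merge the two least frequent subtrees". A minimal sketch using a heap, tracking only code lengths rather than building the full tree (the frequency table is illustrative):

```python
import heapq

def huffman_code_lengths(freqs):
    """Greedy Huffman: repeatedly merge the two cheapest subtrees.
    freqs: dict symbol -> frequency. Returns dict symbol -> code length."""
    heap = [(f, i, [sym]) for i, (sym, f) in enumerate(freqs.items())]
    depth = {sym: 0 for sym in freqs}
    heapq.heapify(heap)
    counter = len(heap)                    # tie-breaker keeps tuples comparable
    while len(heap) > 1:
        f1, _, syms1 = heapq.heappop(heap) # greedy choice: two lowest frequencies
        f2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:
            depth[s] += 1                  # merged symbols sink one level deeper
        heapq.heappush(heap, (f1 + f2, counter, syms1 + syms2))
        counter += 1
    return depth

lengths = huffman_code_lengths({'a': 45, 'b': 13, 'c': 12, 'd': 5})
# the most frequent symbol gets the shortest code, the rarest the longest
```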
483. The fractional knapsack problem can be solved using:
A) Greedy method
B) Dynamic programming
C) Backtracking
D) None
Answer: A
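For question 483, the greedy rule is to take items in decreasing value-per-weight order, splitting the last one if needed; this is optimal precisely because fractions are allowed. The item set below is a standard illustrative example:

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight). Greedy by value/weight ratio."""
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        take = min(weight, capacity)   # take as much of the best ratio as fits
        total += value * take / weight
        capacity -= take
        if capacity == 0:
            break
    return total

# ratios 6, 5, 4 for (60,10), (100,20), (120,30); capacity 50:
# take all of the first two, then 20/30 of the last -> 60 + 100 + 80 = 240
```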
484. The 0/1 knapsack problem is solved using:
A) Dynamic programming
B) Greedy method
C) Divide and conquer
D) None
Answer: A
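Question 484's contrast with the fractional version can be made concrete: with no splitting, greedy-by-ratio fails and the standard O(n × capacity) dynamic program is used instead. A minimal sketch with the same illustrative items:

```python
def knapsack_01(values, weights, capacity):
    """0/1 knapsack via dynamic programming over remaining capacity."""
    dp = [0] * (capacity + 1)          # dp[c] = best value achievable with capacity c
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):   # reverse scan: each item used once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# same items as the fractional case, but indivisible: the best choice is the
# second and third items (100 + 120 = 220), not the greedy-by-ratio pick
```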
485. Dijkstra’s algorithm is an example of:
A) Greedy method
B) Dynamic programming
C) Recursion
D) None
Answer: A
486. The minimum spanning tree problem uses:
A) Greedy approach
B) Backtracking
C) Dynamic programming
D) None
Answer: A
487. Backtracking is suitable for:
A) Constraint satisfaction problems
B) Sorting
C) Searching
D) None
Answer: A
488. The N-Queens problem is solved using:
A) Backtracking
B) Dynamic programming
C) Greedy
D) None
Answer: A
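The N-Queens solver of question 488 also illustrates the pruning of question 490: attacked columns and diagonals are skipped before recursing, cutting whole subtrees from the search. A minimal counting sketch:

```python
def n_queens(n):
    """Count solutions by placing one queen per row, backtracking on conflict."""
    count = 0
    cols, diag1, diag2 = set(), set(), set()

    def place(row):
        nonlocal count
        if row == n:
            count += 1                 # all n rows filled: one complete solution
            return
        for col in range(n):
            if col in cols or row - col in diag1 or row + col in diag2:
                continue               # pruning: skip attacked squares entirely
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            place(row + 1)
            cols.discard(col); diag1.discard(row - col); diag2.discard(row + col)

    place(0)
    return count

# n_queens(8) -> 92, the classic answer for the 8x8 board
```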
489. The subset-sum problem can be solved using:
A) Backtracking or dynamic programming
B) Greedy
C) Hashing
D) None
Answer: A
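For question 489, here is the dynamic-programming route: track the set of reachable sums, which is O(n × target) for positive integers (the backtracking alternative would instead try include/exclude per element). The sample numbers are illustrative:

```python
def subset_sum(nums, target):
    """True if some subset of nums (positive ints) sums exactly to target."""
    reachable = {0}                      # the empty subset sums to 0
    for x in nums:
        # each element either joins an existing subset sum or is skipped
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

# subset_sum([3, 34, 4, 12, 5, 2], 9) -> True, via the subset {4, 5}
```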
490. Backtracking can be optimized using:
A) Pruning
B) Recursion
C) Loops
D) None
Answer: A
491. Branch and Bound is used for:
A) Optimization problems
B) Searching
C) Sorting
D) None
Answer: A
492. In Branch and Bound, the search space is represented as a:
A) State-space tree
B) Binary tree
C) Heap
D) None
Answer: A
493. In the 8-puzzle problem, the state-space tree is used in:
A) Branch and Bound
B) Recursion
C) Divide and conquer
D) None
Answer: A
494. The job sequencing with deadlines problem uses:
A) Greedy approach
B) Dynamic programming
C) Backtracking
D) None
Answer: A
495. The traveling salesman problem can be solved exactly, despite exponential worst-case time, using:
A) Branch and Bound
B) Greedy
C) Linear programming
D) None
Answer: A
496. The complexity of BFS traversal is:
A) O(V + E)
B) O(V²)
C) O(log V)
D) None
Answer: A
497. The complexity of DFS traversal is:
A) O(V + E)
B) O(V²)
C) O(V log V)
D) None
Answer: A
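The O(V + E) bound in questions 496 and 497 falls out of the code: each vertex is enqueued once and each adjacency list is scanned once. A minimal BFS sketch over an illustrative adjacency-list graph:

```python
from collections import deque

def bfs_order(adj, start):
    """Breadth-first traversal order over an adjacency-list graph.
    Each vertex enters the queue once and each edge is examined once: O(V + E)."""
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in visited:       # mark on enqueue so no vertex repeats
                visited.add(v)
                queue.append(v)
    return order

adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
# bfs_order(adj, 0) -> [0, 1, 2, 3]: level by level from the start vertex
```

DFS has the same O(V + E) bound for the same reason, with the queue replaced by a stack (or recursion).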
498. Graph coloring is an example of:
A) NP-complete problem
B) P problem
C) NP-hard
D) None
Answer: A
499. The clique problem belongs to:
A) NP-complete
B) P
C) NP-hard
D) None
Answer: A
500. The halting problem is:
A) Undecidable
B) NP
C) P
D) None
Answer: A