Data Structure MCQs (301-400)

301. In a graph, if every vertex is connected to every other vertex, it is called a:
A) Complete graph
B) Bipartite graph
C) Sparse graph
D) Tree
Answer: A

302. The number of edges in a complete undirected graph with n vertices is:
A) n(n – 1) / 2
B) n²
C) n(n + 1)
D) n
Answer: A
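The edge-count formula from Q302 can be checked with a small sketch: a complete graph K_n has one edge per unordered pair of distinct vertices.

```python
from itertools import combinations

def complete_graph_edges(n):
    """Edges of K_n: every unordered pair of distinct vertices."""
    return list(combinations(range(n), 2))

# The count matches n(n - 1) / 2 for any n.
n = 5
edges = complete_graph_edges(n)
print(len(edges))  # 10, which equals 5 * 4 // 2
```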

303. A connected acyclic graph is called a:
A) Tree
B) Forest
C) Cycle
D) Graph
Answer: A

304. A forest is a collection of:
A) Disjoint trees
B) Cyclic graphs
C) Complete graphs
D) Directed graphs
Answer: A

305. The number of edges in a tree with n vertices is:
A) n – 1
B) n + 1
C) n²
D) 2n
Answer: A

306. In a directed acyclic graph (DAG), which of the following is true?
A) No cycles exist
B) Every vertex has equal in-degree and out-degree
C) It has at least one cycle
D) None
Answer: A

307. A vertex with in-degree 0 in a DAG is called a:
A) Source vertex
B) Sink vertex
C) Root
D) Leaf
Answer: A

308. A vertex with out-degree 0 in a DAG is called a:
A) Sink vertex
B) Source vertex
C) Root
D) None
Answer: A

309. Topological sorting of a DAG is not possible if:
A) The graph has a cycle
B) The graph is disconnected
C) The graph is weighted
D) None
Answer: A

310. The minimum spanning tree of a connected graph is always:
A) Unique if all edge weights are distinct
B) Non-unique always
C) Always multiple
D) None
Answer: A


311. The dynamic programming approach works by:
A) Solving subproblems and storing their results
B) Ignoring overlapping subproblems
C) Recursive calls without memory
D) None
Answer: A

312. Dynamic programming is mainly used to avoid:
A) Recomputing subproblems
B) Using loops
C) Iterations
D) Recursion
Answer: A

313. Which algorithm uses dynamic programming?
A) Floyd–Warshall
B) Binary search
C) Quick sort
D) DFS
Answer: A

314. The time complexity of the Floyd–Warshall algorithm is:
A) O(V³)
B) O(V²)
C) O(V log V)
D) O(E log V)
Answer: A
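The three nested loops over vertices are where Floyd–Warshall's O(V³) bound in Q314 comes from. A minimal sketch with a hypothetical three-vertex graph:

```python
def floyd_warshall(dist):
    """All-pairs shortest paths via dynamic programming.

    dist is an n x n matrix: dist[i][j] is the edge weight,
    float('inf') where no edge exists, and 0 on the diagonal.
    Three nested loops over the vertices give O(V^3) time.
    """
    n = len(dist)
    d = [row[:] for row in dist]          # work on a copy
    for k in range(n):                    # allowed intermediate vertex
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = float('inf')
graph = [
    [0,   3,   INF],
    [INF, 0,   1],
    [2,   INF, 0],
]
print(floyd_warshall(graph))  # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```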

315. The dynamic programming technique for shortest path is used in:
A) Bellman-Ford algorithm
B) Prim’s algorithm
C) Dijkstra’s algorithm
D) Kruskal’s algorithm
Answer: A

316. Dynamic programming and divide and conquer are similar because:
A) Both solve subproblems
B) Both store all results
C) Both require recursion
D) None
Answer: A

317. In dynamic programming, overlapping subproblems occur in:
A) Recursive algorithms
B) Linear algorithms
C) Hashing
D) Sorting
Answer: A

318. Memoization stores:
A) Results of previously computed subproblems
B) All inputs
C) Only recursive calls
D) None
Answer: A
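Memoization in practice: naive recursive Fibonacci recomputes the same subproblems exponentially many times, while caching each result (here with the standard-library `functools.lru_cache`) reduces the work to one computation per distinct input.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Fibonacci with memoization: each subproblem is solved once."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155, with only 41 distinct recursive calls
```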

319. The principle of optimality is used in:
A) Dynamic programming
B) Sorting
C) Hashing
D) Searching
Answer: A

320. The knapsack problem can be solved using:
A) Dynamic programming
B) Queue
C) Recursion only
D) None
Answer: A
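A bottom-up sketch of the 0/1 knapsack mentioned in Q320, using a one-dimensional DP table (item weights and values below are made up for illustration):

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack via bottom-up dynamic programming.

    dp[w] = best value achievable with total weight <= w.
    Iterating w in reverse ensures each item is used at most once.
    """
    dp = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]

# Items as (weight, value): (1, 1), (3, 4), (4, 5), (5, 7)
print(knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7))  # 9: take weights 3 and 4
```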


321. In file organization, sequential files are best for:
A) Processing large volumes of data sequentially
B) Random access
C) Graph traversal
D) None
Answer: A

322. Indexed sequential file organization combines:
A) Sequential and direct access
B) Random and hashing
C) Binary search and recursion
D) None
Answer: A

323. In indexed file organization, the index stores:
A) Key and pointer to data record
B) Data only
C) Record only
D) None
Answer: A

324. Direct access files are most suitable for:
A) Online transaction processing
B) Batch processing
C) Sequential reading
D) None
Answer: A

325. Hashing in file organization is used for:
A) Direct access of records
B) Sequential access
C) Index creation
D) None
Answer: A

326. Overflow chaining in file organization handles:
A) Collisions
B) Redundant data
C) Duplicates
D) None
Answer: A

327. The index in a file is similar to:
A) An array of pointers
B) A queue
C) A tree
D) None
Answer: A

328. Which file organization uses B+ Trees?
A) Indexed sequential
B) Direct access
C) Sequential
D) None
Answer: A

329. Which access method is fastest for fixed-length records?
A) Direct access
B) Sequential
C) Indexed
D) None
Answer: A

330. In file systems, overflow areas are used to handle:
A) Collisions or overflows
B) Deleted records
C) Cache
D) None
Answer: A


331. The average time complexity for binary search is:
A) O(log n)
B) O(n)
C) O(n²)
D) O(1)
Answer: A
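The O(log n) bound in Q331 comes from halving the search range on every comparison:

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent.

    Each comparison halves the remaining range: O(log n).
    """
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = [2, 5, 8, 12, 16, 23, 38]
print(binary_search(data, 23))  # 5
print(binary_search(data, 7))   # -1
```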

332. In interpolation search, the position is estimated using:
A) Key value range
B) Midpoint
C) Index
D) Random value
Answer: A

333. Jump search uses step size approximately equal to:
A) √n
B) log n
C) n/2
D) n²
Answer: A
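A sketch of jump search with the √n step size from Q333: jump ahead block by block, then scan linearly inside the block that may hold the target, for O(√n) comparisons overall.

```python
import math

def jump_search(arr, target):
    """Search a sorted array using fixed jumps of size ~sqrt(n)."""
    n = len(arr)
    step = max(1, math.isqrt(n))
    prev = 0
    # Jump forward until the current block's last element >= target.
    while prev < n and arr[min(prev + step, n) - 1] < target:
        prev += step
    # Linear scan within the candidate block.
    for i in range(prev, min(prev + step, n)):
        if arr[i] == target:
            return i
    return -1

data = [1, 3, 5, 7, 9, 11, 13, 15, 17]
print(jump_search(data, 13))  # 6
```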

334. Exponential search is efficient for:
A) Sorted, unbounded arrays
B) Unsorted arrays
C) Linked lists
D) None
Answer: A

335. Interpolation search performs best when:
A) Elements are uniformly distributed
B) Elements are random
C) Data is unsorted
D) None
Answer: A

336. The worst-case complexity of interpolation search is:
A) O(n)
B) O(log n)
C) O(1)
D) O(n log n)
Answer: A

337. Ternary search divides the array into:
A) Three parts
B) Two parts
C) Four parts
D) None
Answer: A

338. The time complexity of ternary search is:
A) O(log₃ n)
B) O(n)
C) O(1)
D) O(n log n)
Answer: A
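Ternary search on a sorted array, as described in Q337–338: two probe points per step split the range into three parts, so the range shrinks to a third each iteration (O(log₃ n) iterations).

```python
def ternary_search(arr, target):
    """Search a sorted array by splitting the range into three parts."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        third = (hi - lo) // 3
        m1, m2 = lo + third, hi - third
        if arr[m1] == target:
            return m1
        if arr[m2] == target:
            return m2
        if target < arr[m1]:
            hi = m1 - 1          # target in the left third
        elif target > arr[m2]:
            lo = m2 + 1          # target in the right third
        else:
            lo, hi = m1 + 1, m2 - 1  # target in the middle third
    return -1

data = [1, 4, 6, 9, 12, 15, 20]
print(ternary_search(data, 9))  # 3
```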

339. Linear search is best used when:
A) Array is small or unsorted
B) Array is large
C) Elements are unique
D) None
Answer: A

340. Binary search cannot be applied to:
A) Unsorted arrays
B) Sorted arrays
C) Linked lists (without indexing)
D) Both A and C
Answer: D


341. The asymptotic notation Θ(n) represents:
A) Tight bound
B) Upper bound
C) Lower bound
D) None
Answer: A

342. Big-O gives the:
A) Worst-case complexity
B) Best case
C) Average case
D) None
Answer: A

343. Omega (Ω) gives the:
A) Lower bound
B) Upper bound
C) Exact bound
D) None
Answer: A

344. If an algorithm runs in 2ⁿ steps, its complexity is:
A) O(2ⁿ)
B) O(n²)
C) O(log n)
D) None
Answer: A

345. If T(n) = T(n/2) + 1, then time complexity is:
A) O(log n)
B) O(n)
C) O(n²)
D) None
Answer: A

346. If T(n) = 2T(n/2) + n, then time complexity is:
A) O(n log n)
B) O(n²)
C) O(log n)
D) None
Answer: A

347. The Master Theorem is used for:
A) Solving recurrence relations
B) Sorting
C) Hashing
D) None
Answer: A

348. The space complexity of recursion depends on:
A) Recursion depth
B) Input size only
C) Stack size
D) Both A and C
Answer: D

349. Amortized analysis is used to:
A) Average the cost over multiple operations
B) Reduce time complexity
C) Increase memory
D) None
Answer: A

350. The amortized time complexity of push and pop operations in a dynamic array stack is:
A) O(1)
B) O(log n)
C) O(n)
D) None
Answer: A

351. A binary search tree (BST) stores elements in such a way that:
A) Left subtree < Root < Right subtree
B) Right subtree < Root < Left subtree
C) All nodes are equal
D) None
Answer: A

352. The time complexity for searching an element in a balanced BST is:
A) O(log n)
B) O(n)
C) O(1)
D) O(n²)
Answer: A

353. Inserting elements in sorted order into a BST results in:
A) Skewed tree
B) Balanced tree
C) Full tree
D) Complete tree
Answer: A

354. The inorder traversal of a BST produces:
A) Sorted sequence
B) Reverse order
C) Random order
D) Level order
Answer: A
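The BST ordering property (Q351) and the sorted inorder traversal (Q354) can be demonstrated together in a short sketch:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Standard BST insert: smaller keys go left, larger go right."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def inorder(root):
    """Left subtree, then root, then right subtree: sorted output."""
    if root is None:
        return []
    return inorder(root.left) + [root.key] + inorder(root.right)

root = None
for k in [50, 30, 70, 20, 40, 60, 80]:
    root = insert(root, k)
print(inorder(root))  # [20, 30, 40, 50, 60, 70, 80]
```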

355. The maximum number of nodes in a BST of height h is:
A) 2^(h+1) – 1
B) 2^h
C) h²
D) h
Answer: A

356. Deletion of a node with two children in a BST requires:
A) Replacing with inorder successor or predecessor
B) Simply deleting the node
C) Rotating subtrees
D) None
Answer: A

357. Which traversal is used to copy a BST?
A) Preorder
B) Inorder
C) Postorder
D) Level order
Answer: A

358. A complete binary tree is one in which:
A) All levels are filled except possibly the last
B) Every node has two children
C) All leaves are at the same level
D) None
Answer: A

359. The height of a complete binary tree with n nodes is:
A) ⌊log₂n⌋
B) n
C) n/2
D) log₁₀n
Answer: A

360. In a binary search tree, the smallest element can be found by:
A) Following left child pointers
B) Following right child pointers
C) Traversing inorder
D) None
Answer: A


361. A min-heap always has:
A) Parent ≤ Children
B) Parent ≥ Children
C) Root as maximum
D) None
Answer: A

362. A max-heap always has:
A) Parent ≥ Children
B) Parent ≤ Children
C) Root as minimum
D) None
Answer: A

363. The root node of a max-heap contains:
A) The largest key
B) The smallest key
C) The average key
D) None
Answer: A

364. Insertion in a heap requires:
A) Percolating the new element up
B) Percolating down
C) Deleting root
D) None
Answer: A

365. Deletion from a heap always removes:
A) Root element
B) Last leaf
C) Random node
D) None
Answer: A

366. The time complexity of inserting an element into a heap is:
A) O(log n)
B) O(n)
C) O(1)
D) O(n log n)
Answer: A
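The heap operations in Q361–366 map directly onto Python's standard-library `heapq`, which maintains a min-heap in a plain list: `heappush` percolates the new element up in O(log n), and `heappop` removes the root (the minimum) and percolates the last leaf down.

```python
import heapq

heap = []
for x in [5, 1, 8, 3]:
    heapq.heappush(heap, x)   # O(log n) per insertion

print(heap[0])                # 1: the root holds the minimum
print(heapq.heappop(heap))    # 1: deletion always removes the root
print(heapq.heappop(heap))    # 3
```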

367. Heap sort first builds a:
A) Max-heap
B) Min-heap
C) AVL tree
D) Binary tree
Answer: A

368. The space complexity of heap sort is:
A) O(1)
B) O(n)
C) O(log n)
D) O(n²)
Answer: A

369. The best case for heap sort is:
A) O(n log n)
B) O(n²)
C) O(n)
D) O(1)
Answer: A

370. Priority queues are most efficiently implemented using:
A) Heaps
B) Queues
C) Arrays
D) Linked lists
Answer: A


371. A graph traversal that visits nodes level by level is:
A) Breadth-first search (BFS)
B) Depth-first search (DFS)
C) Dijkstra’s traversal
D) None
Answer: A

372. A graph traversal that uses recursion is typically:
A) DFS
B) BFS
C) Dijkstra’s
D) Kruskal’s
Answer: A
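Both traversals from Q371–372 side by side on a small made-up digraph: BFS visits level by level using a FIFO queue, while DFS goes deep first using recursion (the call stack acts as an implicit stack).

```python
from collections import deque

graph = {
    'A': ['B', 'C'],
    'B': ['D'],
    'C': ['D'],
    'D': [],
}

def bfs(start):
    """Level-by-level traversal using a FIFO queue."""
    seen, order, q = {start}, [], deque([start])
    while q:
        v = q.popleft()
        order.append(v)
        for w in graph[v]:
            if w not in seen:
                seen.add(w)
                q.append(w)
    return order

def dfs(v, seen=None):
    """Depth-first traversal via recursion (implicit stack)."""
    if seen is None:
        seen = set()
    seen.add(v)
    order = [v]
    for w in graph[v]:
        if w not in seen:
            order += dfs(w, seen)
    return order

print(bfs('A'))  # ['A', 'B', 'C', 'D']
print(dfs('A'))  # ['A', 'B', 'D', 'C']
```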

373. In Dijkstra’s algorithm, the distance of the source node from itself is:
A) 0
B) ∞
C) 1
D) None
Answer: A

374. The Bellman-Ford algorithm runs in time:
A) O(V × E)
B) O(V²)
C) O(E log V)
D) None
Answer: A

375. Which algorithm can detect negative weight cycles?
A) Bellman-Ford
B) Dijkstra’s
C) Prim’s
D) Kruskal’s
Answer: A
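A Bellman-Ford sketch tying Q373–375 together: the source's distance from itself starts at 0, every edge is relaxed V − 1 times for O(V × E) total work, and one extra pass detects negative-weight cycles (the edge list below is illustrative).

```python
def bellman_ford(n, edges, src):
    """Single-source shortest paths; tolerates negative edge weights.

    edges is a list of (u, v, weight) tuples over vertices 0..n-1.
    Returns the distance list, or None if a negative cycle exists.
    """
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0                        # source is 0 from itself
    for _ in range(n - 1):               # V - 1 relaxation passes
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # Extra pass: any further improvement implies a negative cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return None
    return dist

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3), (2, 3, 2)]
print(bellman_ford(4, edges, 0))  # [0, 4, 1, 3]
```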

376. The number of edges in a minimum spanning tree with n vertices is:
A) n – 1
B) n
C) n²
D) log n
Answer: A

377. A disconnected graph can have:
A) A forest of spanning trees
B) One spanning tree
C) No edges
D) None
Answer: A

378. Kruskal’s algorithm sorts edges based on:
A) Weight
B) Vertex degree
C) Edge direction
D) None
Answer: A

379. Prim’s algorithm uses:
A) Priority queue or min-heap
B) Stack
C) Array
D) None
Answer: A

380. The shortest path tree and minimum spanning tree:
A) Are different concepts
B) Are always the same
C) Never coexist
D) None
Answer: A


381. The traveling salesman problem is an example of:
A) NP-hard problem
B) P problem
C) Linear problem
D) None
Answer: A

382. The time complexity of the naive TSP solution is:
A) O(n!)
B) O(2ⁿ)
C) O(n²)
D) O(n log n)
Answer: A

383. Dijkstra’s algorithm can be optimized using:
A) Min-heap or priority queue
B) Stack
C) Queue
D) Binary search
Answer: A

384. The Floyd–Warshall algorithm uses:
A) Dynamic programming
B) Divide and conquer
C) Recursion
D) None
Answer: A

385. The adjacency matrix of an undirected graph is:
A) Symmetric
B) Asymmetric
C) Diagonal
D) None
Answer: A

386. The number of possible edges in a simple directed graph with n vertices is:
A) n(n – 1)
B) n²
C) n(n + 1)/2
D) None
Answer: A

387. The chromatic number of a bipartite graph is:
A) 2
B) 3
C) 4
D) 1
Answer: A

388. The degree of a vertex in an undirected graph is:
A) Number of edges incident on it
B) Number of outgoing edges
C) Number of incoming edges
D) None
Answer: A

389. A tree is a special case of:
A) Graph
B) Queue
C) Heap
D) None
Answer: A

390. The total number of edges in a tree with n nodes is always:
A) n – 1
B) n
C) n + 1
D) log n
Answer: A


391. The best sorting algorithm for nearly sorted data is:
A) Insertion sort
B) Quick sort
C) Merge sort
D) Heap sort
Answer: A

392. Which sorting algorithm works well for linked lists?
A) Merge sort
B) Quick sort
C) Bubble sort
D) None
Answer: A

393. Counting sort is most efficient when:
A) Key range is small
B) Key range is large
C) Data is random
D) None
Answer: A
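Why the small key range matters in Q393: counting sort allocates one counter per possible key value, so it runs in O(n + k) and is only worthwhile when k is modest relative to n.

```python
def counting_sort(arr, k):
    """Sort non-negative integers in the range [0, k] in O(n + k).

    The count array has k + 1 slots regardless of input size,
    so a large key range wastes both time and space.
    """
    count = [0] * (k + 1)
    for x in arr:
        count[x] += 1
    out = []
    for value, c in enumerate(count):
        out.extend([value] * c)
    return out

print(counting_sort([4, 2, 2, 8, 3, 3, 1], 8))  # [1, 2, 2, 3, 3, 4, 8]
```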

394. Radix sort can be used to sort:
A) Integers and strings
B) Only integers
C) Only floating-point numbers
D) None
Answer: A

395. In-place sorting algorithms include:
A) Quick sort and Heap sort
B) Merge sort
C) Counting sort
D) None
Answer: A

396. External sorting is used when:
A) Data does not fit into memory
B) Data is small
C) Sorting is recursive
D) None
Answer: A

397. The best sorting algorithm for external sorting is:
A) Merge sort
B) Quick sort
C) Bubble sort
D) None
Answer: A

398. Stability in sorting means:
A) Equal elements retain their relative order
B) Faster sorting
C) Constant memory
D) None
Answer: A
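Stability made concrete: Python's built-in `sorted()` (Timsort) is stable, so records that tie on the sort key keep their original relative order. The record values below are illustrative.

```python
# Records as (name, score); two pairs tie on score.
records = [("alice", 2), ("bob", 1), ("carol", 2), ("dave", 1)]

# Stable sort by score: bob stays before dave, alice before carol.
by_score = sorted(records, key=lambda r: r[1])
print(by_score)
# [('bob', 1), ('dave', 1), ('alice', 2), ('carol', 2)]
```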

399. Which sorting algorithm is not stable?
A) Quick sort
B) Merge sort
C) Insertion sort
D) Bubble sort
Answer: A

400. The complexity of bucket sort in the best case is:
A) O(n + k)
B) O(n²)
C) O(log n)
D) None
Answer: A
