
🖥️ C Programming — Q&A Notes

Topic-Wise Interview Questions

Most frequently asked C programming questions — Structures, Macros, Compilation, Volatile/Const, and Interrupts.

Structures, Unions & Bitfields
Q1. How is structure memory layout decided?
Struct

The compiler decides the memory layout based on the alignment rules of the target architecture. Each member is placed at an offset that is a multiple of its alignment requirement (for scalar types this is typically the type's size — its "natural alignment"). Padding bytes are inserted between members or at the end to satisfy these alignment requirements.

struct Example {
    char  a;   // 1 byte  → offset 0
               // 3 bytes padding
    int   b;   // 4 bytes → offset 4
    char  c;   // 1 byte  → offset 8
               // 3 bytes padding (end padding)
};
// sizeof(Example) = 12 (not 6)
Q2. What is padding in structures?
Struct

Padding is extra unused bytes inserted by the compiler between structure members (or at the end) to align each member to its natural alignment boundary. This makes memory access faster since CPUs can fetch aligned data in fewer operations.

The unused bytes are called holes in the structure.

Q3. How to reduce structure size?
Struct

Method 1 — Reorder members from largest to smallest to minimize padding:

// BAD (12 bytes due to padding)
struct Bad  { char a; int b; char c; };

// GOOD (8 bytes — only 2 bytes of tail padding)
struct Good { int b; char a; char c; };

Method 2 — Use #pragma pack(1) or __attribute__((packed)) to force no padding (may hurt performance):

#pragma pack(1)
struct Packed { char a; int b; char c; };
// sizeof = 6 (no padding)

Method 3 — Use bit fields to pack multiple small values into one word.

Q4. Difference between Structure and Union?
Aspect   | Structure                             | Union
Memory   | Separate memory for each member       | All members share one memory location
Size     | Sum of all members (+ padding)        | Size of the largest member
Access   | All members accessible simultaneously | Only one member valid at a time
Keyword  | struct                                | union
Use case | Group related but independent data    | Save memory when only one field needed at a time
Q5. When should we use union?
Union
  • When only one field is used at a time and you want to save memory.
  • For type punning — reinterpreting raw bytes as different types (e.g., extracting bytes of a float).
  • In protocol parsing or hardware register access where the same memory is interpreted differently.
  • In variant records / tagged unions where a type tag identifies which member is active.
union Data {
    uint32_t raw;
    struct { uint8_t b0, b1, b2, b3; } bytes;
};
// Access raw 32-bit value OR individual bytes
Q6. What are bitfields?
Bitfield

Bitfields are structure members that specify the number of bits to allocate rather than a full data type size. They allow packing multiple flags or small values into a single word efficiently.

struct Flags {
    unsigned int enable  : 1;  // 1 bit
    unsigned int mode    : 3;  // 3 bits (0–7)
    unsigned int speed   : 4;  // 4 bits (0–15)
    unsigned int         : 8;  // 8 bits padding (unnamed)
};
// Total: fits in 16 bits
Q7. Limitations of bitfields?
  • You cannot take the address of a bitfield member (& operator not allowed).
  • Not portable — bit order (MSB/LSB first) is implementation-defined.
  • Cannot be used in arrays: field[n] : 1 is illegal.
  • May introduce unexpected padding between bitfields of different types.
  • Cannot use sizeof() on a bitfield member.
Q8. Are bitfields portable across compilers?

No. The C standard leaves several aspects implementation-defined:

  • Whether bitfields are allocated MSB-first or LSB-first.
  • Whether a bitfield can straddle a storage unit boundary.
  • The size of the underlying storage unit for a bitfield.
  • Signedness of plain int bitfields.

For portable hardware register access, using explicit masks and bit shifts is preferred over bitfields.

Q9. What is a nested structure?

A structure that contains another structure as a member is called a nested structure.

struct Address {
    char city[30];
    int  pin;
};

struct Employee {
    int           id;
    char          name[50];
    struct Address addr;   // nested structure
};

struct Employee e;
e.addr.pin = 560001;
Q10. Why is typedef used with structures?

typedef creates an alias for a structure type so you don't have to write struct keyword every time you declare a variable.

// Without typedef
struct Point { int x, y; };
struct Point p1;

// With typedef
typedef struct { int x, y; } Point;
Point p1;   // cleaner!

It also helps with self-referential structures (linked lists) and makes APIs cleaner and more readable.

Preprocessor & Macros
Q1. Difference between macro and const variable?
Aspect      | Macro (#define)                  | const Variable
Stage       | Preprocessor (text substitution) | Compiler (type-checked)
Type safety | None — no type checking          | Fully type-safe
Memory      | No memory allocated              | Memory allocated (usually)
Debuggable  | Not visible in debugger          | Visible in debugger
Scope       | From definition to end of file   | Follows C scoping rules
#define MAX 100        // macro — no type
const int max = 100;  // typed, debuggable
Q2. What are function-like macros?
Macro

Macros that take arguments like functions but are expanded inline by the preprocessor — no function call overhead.

#define SQUARE(x)   ((x) * (x))
#define MAX(a, b)   ((a) > (b) ? (a) : (b))

int r = SQUARE(5);     // expands to ((5) * (5)) = 25
int m = MAX(3, 7);     // expands to ((3) > (7) ? (3) : (7)) = 7

Always wrap arguments and the whole expression in parentheses to avoid operator precedence bugs.

Q3. Why are macros dangerous?
  • No type checking — wrong types pass silently.
  • Multiple evaluation — arguments with side effects evaluated more than once (e.g., SQUARE(i++)).
  • Operator precedence bugs — without parentheses, expansion can break.
  • Not debuggable — preprocessor replaces them before compilation.
  • No scope — a macro defined in one header pollutes all files that include it.
#define DOUBLE(x) x + x
int r = DOUBLE(3) * 2;
// Expands to: 3 + 3 * 2 = 9 (not 12!)
Q4. What is the macro multiple evaluation problem?
Macro

When a macro argument has a side effect (like i++), and the macro uses the argument more than once, the side effect happens multiple times — causing unexpected behavior.

#define SQUARE(x)  ((x) * (x))

int i = 5;
int r = SQUARE(i++);
// Expands to: ((i++) * (i++))
// i++ executed TWICE → undefined behavior!

Solution: Use inline functions instead of macros for expressions with side effects.

static inline int square(int x) { return x * x; }
// i++ evaluated only ONCE
Q5. What is #ifdef used for?

#ifdef checks if a macro is defined and includes the code block only if it is. Used for conditional compilation.

#define DEBUG

#ifdef DEBUG
    printf("Debug: x = %d\n", x);  // compiled only if DEBUG defined
#endif

#ifndef RELEASE
    // compiled only if RELEASE is NOT defined
#endif

Common uses: enabling debug logs, platform-specific code, feature toggles.

Q6. What is conditional compilation?

Conditional compilation allows selective inclusion of code blocks based on preprocessor conditions. The excluded code is never compiled — reducing executable size.

#if defined(ARM_PLATFORM)
    init_arm();
#elif defined(X86_PLATFORM)
    init_x86();
#else
    #error "Unknown platform"
#endif

Used for: platform portability, debug vs release builds, feature flags.

Q7. Difference between #include <> and #include ""?
Form              | Search Order                               | Use For
#include <file.h> | System/compiler include paths only         | Standard library headers
#include "file.h" | Current directory first, then system paths | Project/user-defined headers
Q8. What is a header guard and why is it needed?
Preprocessor

A header guard prevents a header file from being included more than once in a compilation unit, avoiding duplicate declaration errors.

// myheader.h
#ifndef MYHEADER_H
#define MYHEADER_H

void myFunction(void);
typedef struct { int x; } Point;

#endif  // MYHEADER_H

Alternative (non-standard but widely supported): #pragma once

Q9. Can macros be debugged?

No, not directly. Since macros are replaced by the preprocessor before compilation, the debugger sees the expanded code — not the macro name. You cannot step into a macro or inspect it by name in GDB.

To inspect macro expansion, use:

gcc -E source.c -o source.i   # see preprocessed output
# or
gcc -dM -E source.c           # list all defined macros

This is why inline functions are preferred — they are type-safe and fully debuggable.

Q10. What is token pasting and stringification?
Macro

Token Pasting (##) — concatenates two tokens into one during macro expansion:

#define CONCAT(a, b)  a##b
int CONCAT(my, Var) = 10;   // becomes: int myVar = 10;

Stringification (#) — converts a macro argument to a string literal:

#define STRINGIFY(x)  #x
printf("%s\n", STRINGIFY(hello));  // prints: hello
printf("%s\n", STRINGIFY(3+4));    // prints: 3+4
Compilation, Linking & Build
Q1. What are the stages of C compilation?
Compilation
Stage            | Tool | Input → Output  | Responsibility
1. Preprocessing | cpp  | .c → .i         | Expand macros, include headers, strip comments
2. Compilation   | cc1  | .i → .s         | Syntax/semantic check, generate assembly
3. Assembly      | as   | .s → .o         | Convert assembly to machine code (object file)
4. Linking       | ld   | .o → executable | Link libraries, resolve symbols, create final binary
Q2. What happens in the preprocessing stage?
  • Macro expansion — all #define macros are replaced with their values.
  • Header file inclusion — #include'd files are copied in.
  • Comment removal — all /* */ and // comments are stripped.
  • Conditional compilation — #ifdef / #endif blocks are resolved.
  • Line control — #line directives are processed.
gcc -E main.c -o main.i   # stop after preprocessing
Q3. What is an object file?

An object file (.o) is the output of the assembler stage. It contains:

  • Machine code (compiled instructions) for that translation unit.
  • Symbol table — list of defined and referenced symbols.
  • Relocation information — placeholders for addresses not yet resolved.
  • Debugging information (if compiled with -g).

Object files are not directly executable — they must be linked to resolve external symbol references.

Q4. What is the linker responsible for?
  • Symbol resolution — matches function calls/variable references to their definitions across object files.
  • Library linking — pulls in needed functions from static (.a) or shared (.so) libraries.
  • Relocation — fills in final addresses for all symbols.
  • Creates the executable — combines all object files into one runnable binary.

Linker errors (undefined reference to...) occur when a function is called but never defined.

Q5. What are multiple definition errors?

A multiple definition error occurs when the linker finds two or more definitions of the same symbol across object files.

// file1.c
int counter = 0;   // definition

// file2.c
int counter = 0;   // another definition — LINKER ERROR!

// Fix: declare in header, define in ONE .c file
// header.h:  extern int counter;
// file1.c:   int counter = 0;
Q6. What is a symbol table?

A symbol table is a data structure maintained by the compiler and linker that maps each identifier (function name, variable name) to information about it: type, size, storage class, address/offset.

  • Compiler uses it to check types and resolve names.
  • Linker uses it to match references across object files.
  • Debugger uses it for meaningful variable/function names.
nm my_program.o   # view symbol table of an object file
Q7. Difference between static and dynamic linking?
Aspect                   | Static Linking                        | Dynamic Linking
When                     | At compile/link time                  | At runtime
Library included in exe? | Yes — library code copied into binary | No — binary references shared lib
Executable size          | Larger                                | Smaller
Memory sharing           | Each process has its own copy         | One copy shared by all processes
Portability              | Self-contained — no dependency        | Requires correct .so on system
Extension                | .a (archive)                          | .so / .dll
Q8. What is Makefile used for?
Build

A Makefile automates the build process — it defines rules for compiling source files, linking, and other build tasks. The make tool reads the Makefile and only rebuilds files whose dependencies have changed (incremental build).

CC = gcc
CFLAGS = -Wall -O2

app: main.o utils.o
	$(CC) -o app main.o utils.o

main.o: main.c
	$(CC) $(CFLAGS) -c main.c

clean:
	rm -f *.o app
Q9. What is incremental build?

Incremental build means recompiling only the source files that have changed since the last build, rather than rebuilding everything. This saves significant build time in large projects.

Make achieves this by comparing the timestamps of source files and their corresponding object files — if the source is newer, it recompiles.

Q10. Why are compiler flags important?
Compiler

Compiler flags control how the compiler processes source code:

Flag            | Purpose
-Wall -Wextra   | Enable all warnings — catch bugs early
-O0 / -O2 / -O3 | Optimization level (debug → release)
-g              | Include debug symbols for GDB
-std=c99        | Enforce specific C standard
-DDEBUG         | Define a macro (conditional compilation)
-I./include     | Add header search path
Volatile, Const & Embedded-Specific
Q1. Why do we use volatile?
volatile

The volatile keyword tells the compiler that a variable's value can change at any time without any action by the code it can see — so the compiler must not cache it in a register or optimize away reads/writes.

volatile int sensor_data;  // always read from memory

while (sensor_data == 0) {
    // Without volatile, compiler might optimize this to infinite loop
    // WITH volatile, it re-reads sensor_data from memory each iteration
}
Q2. Where should volatile be used in embedded systems?
  • Hardware registers (memory-mapped I/O) — value changes due to hardware, not code.
  • ISR-shared variables — modified inside an ISR, read in main code.
  • Multi-threaded/multi-core shared variables — modified by another thread/core.
  • DMA buffers — modified by DMA hardware, not CPU code.
#define UART_STATUS  (*(volatile uint32_t *)0x40013800)
// Hardware register — must always read actual hardware value
Q3. Difference between volatile and const volatile?
Qualifier      | Meaning                                              | Use Case
volatile       | Value can change unexpectedly — always re-read       | ISR variables, shared flags
const volatile | Cannot be written by code AND must always be re-read | Read-only hardware status registers
// Status register: hardware changes it, CPU should never write it
const volatile uint32_t *STATUS_REG = (uint32_t *)0x40020010;

// Read it — OK (volatile ensures fresh read)
uint32_t val = *STATUS_REG;

// Write to it — compiler ERROR (const prevents this)
// *STATUS_REG = 0xFF;  // ERROR!
Q4. Why is volatile needed for ISR variables?
ISR

When a variable is modified inside an ISR and read in main(), the compiler doesn't know the ISR can modify it. Without volatile, the compiler may cache the variable in a register and never re-read it from memory, causing the main code to see a stale value.

volatile int flag = 0;    // MUST be volatile

void ISR_handler(void) {
    flag = 1;             // set by ISR
}

int main(void) {
    while (!flag) {       // must re-read from memory each time
        // wait...
    }
    // process event
}
Q5. What happens if volatile is not used?

Without volatile, the compiler may:

  • Cache the variable in a register — never re-read from memory.
  • Eliminate the read/write entirely — if it looks "unused" or redundant.
  • Reorder accesses — changing the intended sequence of operations.

Result: the program appears to hang, miss interrupts, or behave incorrectly — bugs that only appear with optimization enabled (-O1 or higher) and disappear in debug builds.

Q6. Can volatile prevent race conditions?

No. volatile only prevents compiler optimizations — it does not guarantee atomicity or provide any synchronization/locking mechanism.

  • A multi-byte volatile read/write is still not atomic — a context switch can happen in the middle.
  • To prevent race conditions you need: mutexes, spinlocks, atomic operations (_Atomic / __sync_*), or disabling interrupts.
// volatile alone is NOT safe for 32-bit value on 8-bit CPU:
volatile uint32_t counter;  // read could be interrupted mid-way

// Safe approach: disable interrupts or use atomic
__disable_irq();
counter++;
__enable_irq();
Q7. What is memory-mapped I/O?
Embedded

Memory-mapped I/O (MMIO) is a technique where hardware peripheral registers are mapped into the CPU's address space. The CPU reads and writes to them using normal memory load/store instructions — no special I/O instructions needed.

// STM32 GPIOA ODR register at address 0x40020014
#define GPIOA_ODR  (*(volatile uint32_t *)0x40020014)

GPIOA_ODR |=  (1 << 5);  // Set pin 5 HIGH
GPIOA_ODR &= ~(1 << 5);  // Set pin 5 LOW

The volatile qualifier is mandatory for MMIO registers to prevent the compiler from optimizing away the accesses.

Q8. How are hardware registers accessed in C?

Hardware registers are accessed by casting their fixed address to a volatile pointer and dereferencing it:

// Method 1: Macro (common in embedded)
#define REG_CONTROL  (*(volatile uint32_t *)0x40021000)
REG_CONTROL = 0x01;         // write
uint32_t val = REG_CONTROL; // read

// Method 2: Struct overlay (used in CMSIS / HAL)
typedef struct {
    volatile uint32_t CR;
    volatile uint32_t SR;
    volatile uint32_t DR;
} UART_TypeDef;

#define UART1  ((UART_TypeDef *)0x40013800)
UART1->CR = 0x200C;
Q9. Why are status registers const volatile?

A read-only hardware status register should be declared const volatile because:

  • volatile — forces the compiler to always re-read from the actual hardware register (never cache).
  • const — prevents your code from accidentally writing to it (a write could crash the system or be a no-op).
// UART status register — hardware sets flags, software only reads
const volatile uint32_t *UART_SR = (uint32_t *)0x40013800;
if (*UART_SR & (1 << 5)) {
    // TX empty — safe to send next byte
}
Q10. Can the compiler reorder volatile accesses?

The C standard guarantees that volatile accesses are not reordered relative to each other by the compiler. However:

  • Volatile accesses can be reordered relative to non-volatile accesses.
  • The hardware/CPU itself may still reorder accesses (requires memory barriers on some architectures).
// These two volatile reads happen in order:
uint32_t a = REG_A;   // 1st
uint32_t b = REG_B;   // 2nd — compiler won't swap these

// For hardware barriers (e.g., ARM):
__DMB();  // data memory barrier — prevents CPU reordering
Interrupts & Concurrency Concepts
Q1. What is an ISR?
ISR

An ISR (Interrupt Service Routine) is a special function called automatically by the CPU when a hardware or software interrupt occurs. The CPU pauses normal execution, saves its state, runs the ISR, then resumes.

// ARM Cortex-M example (GCC)
void __attribute__((interrupt)) TIMER0_IRQHandler(void) {
    TIMER0->SR &= ~(1 << 0);  // clear interrupt flag
    // handle timer event
}

ISRs are registered in the interrupt vector table at fixed memory addresses.

Q2. Rules to follow while writing an ISR?
  • Keep it short and fast — do minimal work; defer heavy processing to main loop.
  • No blocking calls — never use delay(), printf(), malloc(), or OS calls.
  • Clear the interrupt flag — otherwise the ISR triggers again immediately.
  • Use volatile for all variables shared with main code.
  • Be reentrant-safe — avoid modifying global state without protection.
  • No floating-point (unless FPU context saved) — many ISRs don't save FPU registers.
  • Use a flag/buffer pattern — set a flag in ISR, process in main loop.
Q3. Why should ISR be short?
  • Latency — a long ISR delays other interrupts and normal code execution.
  • Missed interrupts — if an interrupt fires again while the ISR is still running, it may be missed (depending on architecture).
  • System responsiveness — in real-time systems, deadlines can be missed if ISRs take too long.
  • Stack usage — ISRs use stack space; long ISRs with local variables risk stack overflow.

Best practice: Set a flag or write to a ring buffer in ISR, then process in main loop (deferred processing).

Q4. What is reentrancy?
Reentrancy

A function is reentrant if it can be safely interrupted mid-execution and called again (recursively or from an ISR) without corrupting its state.

A reentrant function:

  • Uses only local variables (on stack) — no global or static state.
  • Does not call non-reentrant functions.
  • Does not modify shared resources without protection.
// Reentrant — uses only local variables
int add(int a, int b) { return a + b; }

// NOT reentrant — uses static variable
int counter(void) {
    static int cnt = 0;  // shared state — dangerous in ISR!
    return ++cnt;
}
Q5. What is a race condition?
Concurrency

A race condition occurs when two or more execution contexts (threads, ISR + main) access shared data concurrently, and the final result depends on the timing/order of execution.

volatile int count = 0;

void ISR_handler(void) { count++; }  // ISR
void main_task(void)  { count++; }  // main

// If both execute "count++" simultaneously:
// 1. Both read count (= 0)
// 2. Both add 1 → both write 1
// Result: count = 1 (should be 2!) — race condition!

Fix: disable interrupts or use atomic operations around the access.

Q6. What is a critical section?

A critical section is a block of code that accesses shared resources and must not be executed by more than one context at a time.

// Embedded: protect with interrupt disable/enable
__disable_irq();          // ENTER critical section
    shared_counter++;     // protected operation
__enable_irq();           // EXIT critical section

// RTOS: protect with mutex
xSemaphoreTake(mutex, portMAX_DELAY);
    shared_resource++;
xSemaphoreGive(mutex);
Q7. How to protect shared data between ISR and main?
ISR
  • Declare as volatile — prevents compiler from caching the variable.
  • Disable interrupts around multi-step read-modify-write in main.
  • Use atomic types (_Atomic in C11) for single-variable updates.
  • Use lock-free ring buffers for data transfer from ISR to main.
  • Minimize shared state — ISR sets flag, main reads once and clears.
volatile uint8_t rx_flag = 0;
volatile uint8_t rx_data = 0;

void UART_ISR(void) {
    rx_data = UART->DR;   // read hardware register
    rx_flag = 1;          // signal main loop
}

int main(void) {
    while (1) {               // main loop keeps checking the flag
        if (rx_flag) {
            rx_flag = 0;
            process(rx_data);
        }
    }
}
Q8. Why not use blocking calls in ISR?
  • Blocking calls wait (e.g., delay(), scanf(), mutex wait) — the ISR never returns, freezing the system.
  • printf() uses malloc() internally and is not reentrant — can corrupt the heap if called from ISR.
  • RTOS blocking APIs like xQueueReceive(portMAX_DELAY) can deadlock the OS scheduler from within an ISR.
  • Long ISRs increase interrupt latency for all other interrupts.
Q9. What is priority inversion?
RTOS

Priority inversion occurs when a high-priority task is blocked by a low-priority task that holds a resource (mutex) the high-priority task needs. A medium-priority task can then preempt the low-priority one, effectively blocking the high-priority task indefinitely.

// Classic scenario:
// Low task (L) locks mutex M
// High task (H) tries to lock M → blocked
// Medium task (Med) preempts L → H still blocked!
// Med runs freely while H waits → inversion!

Solution: Priority Inheritance — temporarily raise L's priority to H's level while L holds M. Used in RTOS mutexes.

Q10. Difference between polling and interrupt?
Aspect     | Polling                                     | Interrupt
Mechanism  | CPU continuously checks a flag/register     | Hardware signals CPU only when event occurs
CPU usage  | High — CPU busy even with no events         | Low — CPU free until event fires
Latency    | Depends on poll frequency                   | Fast — immediate response
Complexity | Simple                                      | More complex (ISR, priority, shared data)
Power      | High power (CPU always running)             | CPU can sleep between events
Use case   | Simple, fast loops; when event rate is high | Infrequent events, low-power systems