
Chapter 1: Computing Fundamentals and Program Development

Overview

This chapter introduces the foundational concepts you'll need to understand how computers work and how programming fits into that ecosystem. You'll learn about the physical machinery that makes computing possible, the languages we use to give computers instructions, different philosophies for solving problems, and the process of turning an idea into a working program. Think of this as building your mental model for everything that comes next in this book.


Computer System Architecture

The Two Essential Components

Every computer system consists of two fundamental parts working together:

Computer Hardware: The physical equipment that comprises a computer—processors, memory chips, storage devices, and peripherals that you can touch.

Computer Software: The collection of programs and instructions that direct the hardware to accomplish useful work.

Without hardware, there's nothing to execute instructions. Without software, hardware is just expensive electronics doing nothing. They depend on each other completely.

Hardware: The Six Building Blocks

Computer hardware has six key components that work together through a bus (a communication pathway connecting all parts):

Central Processing Unit (CPU)

The CPU is the brain of the computer. It consists of three parts:

  • The arithmetic-logical unit (ALU) executes calculations and comparisons
  • The control unit acts as a traffic director, coordinating all system operations
  • Registers are tiny, ultra-fast storage locations that temporarily hold data being processed

The control unit deserves special attention—it's like an orchestra conductor, ensuring every component does its job at the right time.

Primary Memory

Primary memory (also called main memory or RAM) is where programs and data live temporarily while the computer is working. Key characteristics:

  • Each storage location has a unique address, like a street address, so the CPU can find information
  • Memory is volatile—when you turn off the computer, everything stored here disappears
  • The size of memory is typically measured in gigabytes on modern computers
  • Different amounts of memory can be accessed at once depending on the system (typically 1, 2, or 4 bytes in personal computers)

Word: When multiple bytes are accessed together at one time, the unit is called a word rather than a byte. This is common in larger computer systems.

Secondary Storage

This is the permanent filing cabinet of your computer. Unlike primary memory, secondary storage keeps your data safe even when the power is off.

  • Hard disk drives (HDDs) are the most common
  • Solid-state drives (SSDs) are increasingly popular
  • Optical media like CDs and DVDs can store data
  • USB flash drives offer portability

Once you save a file to secondary storage, it stays there until you deliberately delete it.

Input System

Computers need a way to receive information from the outside world:

  • Keyboards are the primary input device for most users
  • Mice, touchscreens, styluses, and audio input devices are also common
  • These devices convert human actions into data the computer can process

Output System

After processing, the computer must communicate results back to you:

  • A monitor displays output as a soft copy (temporary, on screen)
  • A printer produces output as a hard copy (permanent, on paper)
  • Speakers can output audio
  • Any display of results counts as output

Communication System

Modern computers rarely work in isolation. Communication devices enable networking:

  • Network interface cards connect the computer to a network
  • Modems convert data for transmission over telephone or cable lines
  • These devices allow your computer to exchange information with others

📝 Section Recap: Hardware consists of six interconnected components—the CPU processes instructions, primary memory holds temporary data, secondary storage preserves files permanently, input and output systems connect the computer to users, and communication systems enable networking.


Computer Software Categories

System Software vs. Application Software

Software falls into two broad categories based on its purpose:

System Software

System software manages the computer's resources and ensures everything runs smoothly. It consists of three types:

  1. Operating System: Provides the user interface, manages files and databases, and handles communication. It's the foundation that allows you and your applications to use the hardware efficiently.

  2. System Support: Includes utilities like disk formatting programs and system tools. These programs help maintain your computer and provide performance statistics.

  3. System Development: Contains language translators (compilers and assemblers), debugging tools, and software engineering tools that help programmers write and refine code.

Application Software

Application software solves specific user problems. There are two types:

  1. General-Purpose Software: Can be used for many different tasks. Word processors, spreadsheet applications, and databases are examples. You buy them once and use them for various projects.

  2. Application-Specific Software: Designed for one particular job and cannot be adapted for other purposes. Accounting systems, payroll software, and manufacturing planning tools are examples. They excel at their specific task but aren't useful for anything else.

📝 Section Recap: System software manages hardware resources and provides development tools, while application software helps users solve problems—either general-purpose applications that handle many tasks or specialized software for specific needs.


Evolution of Computer Languages

From Machines to Humans: The Language Journey

Programming languages have evolved dramatically to make computing more accessible to humans. Understanding this evolution helps you appreciate why modern languages like C++ exist.

Machine Languages: The Original

  • In the earliest days of computing (1940s–1950s), programs had to be written in machine language—streams of 0s and 1s that directly represent hardware instructions
  • Each type of computer has its own machine language
  • While computers still ultimately execute machine language, humans no longer write in it because it's impossibly tedious and error-prone

Machine Language: Streams of binary digits (0s and 1s) that are the only language a computer truly understands.

Symbolic Languages: The First Breakthrough

In the 1950s, Grace Hopper, a computer scientist and mathematician in the U.S. Navy, recognized that programmers shouldn't have to write in binary. She developed the concept of an assembler—a program that translates symbolic code into machine language.

Symbolic Language: Uses human-readable symbols and mnemonics (like "ADD" or "LOAD") instead of binary to represent machine instructions. Also called assembly language.

Key improvements:

  • Much easier to read and write than binary
  • Still very closely tied to the specific computer's hardware
  • Every single machine instruction must be written individually
  • Required deep understanding of how the hardware worked

Assembly languages are still used today for system-level programming where hardware control is critical, but they're rarely used for application development.

High-Level Languages: Making Programming Accessible

The push for greater programmer productivity led to high-level languages starting in the 1950s.

High-Level Language: A programming language designed to be independent of specific hardware, allowing programmers to focus on solving problems rather than managing hardware details.

Major milestones:

  • FORTRAN (1957): Created by John Backus and an IBM team; first widely used high-level language
  • COBOL (1960): Business-oriented language; Grace Hopper was instrumental in its development
  • BASIC, Pascal, Ada, C, C++, Java: All developed to address different programming needs over subsequent decades

High-level languages offer crucial advantages:

  • Portable across different computers—write once, run on many systems
  • Programmers focus on the problem, not the hardware
  • Much faster to write and easier to maintain
  • Still must be converted to machine language through a process called compilation

C++ emerged as one of the most popular high-level languages for both system software and application development, balancing power with programmer productivity.

📝 Section Recap: Programming languages evolved from binary machine code to symbolic assembly languages to high-level languages, each generation making programming more human-friendly. C++ represents a modern high-level language that blends efficiency with accessibility.


Language Paradigms: Different Ways of Thinking

What Is a Paradigm?

Paradigm: A model or framework that describes how a program handles data and solves problems. Different paradigms represent fundamentally different philosophies about programming.

There are four main paradigms. Understanding them helps you appreciate how C++ can be used in multiple ways.

The Procedural Paradigm

In procedural programming (also called imperative programming), you think of a program as a series of commands that transform the state of memory.

The Core Concept:

  • Define data storage locations (variables)
  • Issue commands that change what's stored in those locations
  • Repeat until the problem is solved

Example: To find the sum of two numbers:

  1. Reserve three memory locations: a, b, and sum
  2. Input the first number into a
  3. Input the second number into b
  4. Execute: sum = a + b
  5. Output sum

Each command changes the memory state, moving you closer to the solution.

Overcoming Inefficiency:

Early procedural programs were repetitive—the same code appeared in multiple places. The paradigm evolved to allow packaging:

  • Procedures (Functions): Code that performs a specific task can be written once and reused everywhere it's needed. Libraries store common procedures for quick access.
  • Data Packaging: Multiple related data items can be grouped (arrays or records) so they're processed together rather than one item at a time.

After packaging, a program that inputs, sorts, and outputs a list becomes three simple lines:

input (list);
sort (list);
output (list);

The Object-Oriented Paradigm

Object-oriented programming recognizes that in the procedural paradigm, there's no explicit connection between the data and the procedures that operate on it. You must choose data, then find the right procedures to process it.

Object-Oriented Paradigm: A programming approach where procedures and the data they operate on are packaged together into units called objects.

Object: A package containing both data and all the operations (procedures) that can be applied to that data.

The Key Innovation:

Instead of having data separate from operations, object-oriented programming bundles them together. Think of a real-world analogy: a dishwasher.

  • The data is the set of dishes inside
  • The operations are washing, rinsing, and drying
  • All of these are built into one object—the dishwasher itself
  • You don't search for separate procedures; they come with the dishwasher

In object-oriented programming, you create objects like a list object that knows how to sort itself, search itself, and print itself. The operations are tied directly to the data.

Implementation Detail: In actual code, while individual objects only hold their own data, the procedure code is shared among all objects of the same type—a memory-efficient design.

The Functional Paradigm

In functional programming, a program is viewed as a mathematical function—a machine that transforms inputs into outputs.

Functional Paradigm: An approach where programs are built from primitive functions (like add, subtract, multiply) that are combined mathematically to create new functions.

Key Characteristics:

  • Not concerned with commands that change memory state
  • Instead, focuses on the mathematical relationship between inputs and outputs
  • Primitive functions handle basic operations (arithmetic, list creation, element extraction)
  • New functions are built by combining these primitives

Example: To add two numbers in functional style:

  • You have a primitive function add
  • You have primitive functions that manipulate lists: first() extracts the first element, and rest() returns all elements except the first
  • To add the numbers in the list (6, 8), combine these functions as add(first(6, 8), first(rest(6, 8))), which yields 14

This paradigm is particularly useful for mathematical and symbolic computation.

The Logic Paradigm

Logic programming is based on formal logic and works differently from the others.

Logic Paradigm: An approach where you provide facts and rules to the program, then ask queries. The program answers by applying the rules to the facts.

Structure:

  • Facts: Statements that are known to be true (e.g., "Fay is the parent of Tara")
  • Rules: Logical relationships between facts (e.g., "If X is the parent of Y and Y is the parent of Z, then X is the grandparent of Z")
  • Queries: Questions you ask the program (e.g., "Is Fay the grandparent of Benji?")

The program deduces answers by applying rules to facts, like solving a logical puzzle.

C++ and Paradigms

C++ is fundamentally based on the procedural paradigm, but it also supports object-oriented programming through classes and objects.

  • In the early chapters of this book, you'll use C++ primarily as a procedural language (with the exception of input/output, which uses objects)
  • In later chapters, you'll shift to using C++ as an object-oriented language
  • This flexibility makes C++ powerful for many different types of problems

📝 Section Recap: Four programming paradigms offer different mental models—procedural focuses on state changes, object-oriented bundles data with operations, functional treats programming as mathematics, and logic uses facts and rules to deduce answers. C++ primarily supports procedural and object-oriented approaches.


Program Design: From Problem to Solution

A Two-Step Process

Good programming requires discipline and careful planning before writing code.

Program Design: A two-step process of understanding the problem completely and then developing a solution before writing any code.

This is similar to an architect creating detailed blueprints before construction begins. Many programmers rush to start coding before fully understanding the problem—a critical mistake that leads to wasted effort and incorrect results.

Step 1: Understanding the Problem

Before you can solve a problem, you must understand it thoroughly.

The Process:

  1. Read the requirements statement carefully
  2. Review your understanding with the person requesting the solution
  3. Ask clarifying questions about unclear aspects
  4. Document your understanding in writing

Example Problem: "Find the largest number in a list of numbers."

This simple statement raises clarifying questions:

  • What type of numbers? Integers only, or decimal values?
  • Are the numbers in any particular order?
  • How many numbers can there be?
  • What should happen if the list is empty?

Without answers, you might deliver a solution that's correct mathematically but wrong for the actual use case. Taking time to clarify prevents serious misunderstandings, especially in large projects with hundreds of detailed requirements.

Step 2: Developing the Solution

Once you fully understand the problem, you create an algorithm.

Algorithm: A set of logical steps necessary to solve a problem.

Important Characteristics of Algorithms:

  • Independent of any specific computer system (can be implemented manually or on any computer)
  • Accepts input data and processes it to produce output
  • Can be expressed in pseudocode (informal, English-like steps) before writing actual code

Example: Finding the Largest Number

Starting with a concrete example (five numbers: 13, 7, 19, 29, 23):

  1. Input the first number (13) and call it the largest
  2. Input the second number (7); since 7 < 13, largest stays 13
  3. Input the third number (19); since 19 > 13, update largest to 19
  4. Input the fourth number (29); since 29 > 19, update largest to 29
  5. Input the fifth number (23); since 23 < 29, largest stays 29
  6. Output largest (29)

Generalizing the Algorithm:

The steps above work only for five numbers. To generalize for any number of inputs:

  1. Input the first number; set largest to this value
  2. Repeat the following while more numbers exist:
    • Input the next number
    • If this number is greater than largest, update largest
  3. Output largest

This generalized version handles any quantity of numbers—one, ten, or one million.

Why Design Before Coding?

Designing before coding provides several benefits:

  • Raises questions that reveal gaps in understanding
  • Allows you to think through the logic clearly
  • Makes coding faster because you know exactly what to do
  • Reduces errors and rework

The excitement of seeing code work is tempting, but starting to code before design is complete almost always leads to problems.

Unified Modeling Language (UML)

Unified Modeling Language (UML): A standard tool for designing and documenting computing systems, programs, and object relationships.

UML is particularly valuable for designing large, complex systems and visualizing how objects interact in object-oriented programs. You'll encounter UML in later chapters when designing more sophisticated programs.

📝 Section Recap: Program design requires fully understanding the problem through clarifying questions, then developing an algorithm—a step-by-step solution independent of specific hardware. Designing before coding prevents errors and improves efficiency.


Program Development: From Design to Execution

The Multi-Step Transformation

Once you've designed a solution and written code, transforming that code into a working program involves several steps. These steps are repeated during development as you refine and debug code.

The Four Main Stages:

  1. Write and Edit the Program
  2. Compile the Program
  3. Link the Program
  4. Execute the Program

Step 1: Writing and Editing

Programmers use text editors to write code.

Text Editor: Software that helps you enter, modify, and save character data organized around lines of code.

Helpful Editor Features:

  • Search and replace commands to locate and modify statements
  • Copy-and-paste to reuse code sections
  • Syntax highlighting (colors) to display key program elements
  • Auto-formatting to align and indent code properly

After writing, the code is saved as a source file—the input that will be processed by the compiler.

Step 2: Compilation

The compiler translates your human-readable source code into machine language.

Compiler: A program that translates high-level source code into machine language the computer can execute.

This is a critical step—any syntax errors (incorrect language usage) will be caught here as compiler errors.

Step 3: Linking

A program typically uses many functions:

  • Some you write yourself (in your source file)
  • Others are pre-written library functions (input/output, mathematics, string handling, etc.)

Linker: A program that combines your compiled code with library functions and system code to create a complete executable file.

This step assembles all the pieces into one cohesive executable program.

Step 4: Execution

Finally, your program is ready to run.

Two Sub-Steps:

  1. Loading: The loader (part of the operating system) reads the executable file from disk into main memory
  2. Running: Control passes to your program, and it begins executing instructions

During Execution:

  • Your program reads input data (from keyboard, file, or other source)
  • It processes the data according to your instructions
  • It produces output (to monitor, file, or device)
  • When finished, the operating system reclaims memory and control returns

📝 Section Recap: Program development involves four sequential stages—editing source code, compiling to machine language, linking with libraries, and executing. These stages may be repeated many times as you test and refine your program.


Testing and Quality Assurance

Why Testing Matters

Testing is not optional—it's a critical part of programming. You're responsible for ensuring your program works correctly in all situations, not just the happy path.

Program Testing: The process of verifying that a program executes correctly and produces correct results for all possible inputs and conditions.

Testing is often tedious and time-consuming, but it's essential. A program with hidden bugs in production is far more costly than the time spent testing it thoroughly.

Designing Test Cases

Test data should be developed throughout design and development, not just at the end.

Creating a Test Plan:

  1. During Design: Identify what situations need testing
  2. While Coding: Note additional test cases you think of
  3. Before Testing: Organize all cases into a logical test plan

For example, testing "Find Largest" should include:

  • Normal cases (multiple numbers in random order)
  • Edge cases (only one number, all numbers the same)
  • Boundary conditions (numbers already in order, numbers in reverse order)

Even simple programs require multiple test cases. Large projects often need 20, 30, or more test cases to validate thoroughly.

Regression Testing: When you modify a program later, you should re-run all original test cases to ensure your changes didn't break anything.

Ensuring Comprehensive Testing

While you can never be 100% certain a program is fully tested, follow these principles:

  1. Line Coverage: Verify that every line of code executes at least once
  2. Branch Coverage: Ensure every conditional statement executes both true and false paths
  3. Range Testing: For conditions with ranges, test the first value, last value, and values outside the range (these are where mistakes often occur)
  4. Error Testing: If your program handles errors, test all error conditions

Programming tools exist to help verify coverage automatically.

Three Types of Errors

Specification Errors

These occur when the problem definition is incorrect or misunderstood.

  • Caught during design review with users and analysts
  • Prevented by thorough problem understanding before coding

Code Errors

These are violations of the language syntax rules.

  • Caught by the compiler as error or warning messages
  • The easiest to fix—the compiler tells you exactly where and what's wrong
  • Even warnings should be addressed, not ignored

Logic Errors

These are the most insidious—the code runs but produces wrong results.

  • Examples: forgetting to initialize a variable, dividing by zero, incorrect algorithm
  • Detected only through testing
  • Before running any test, determine what the correct answer should be—don't assume the computer's answer is right

📝 Section Recap: Thorough testing is essential, requiring comprehensive test plans that cover normal cases, edge cases, and boundaries. Three error types exist—specification errors caught in design, code errors caught by compilers, and logic errors found only through testing.


Key Concepts Summary

Computer System: Made of hardware (CPU, memory, storage, input/output, communication) and software (system and application)

Programming Languages: Evolved from machine language (binary) through assembly languages (symbolic) to high-level languages (like C++) that are portable and human-readable

Programming Paradigms: Different approaches to solving problems—procedural (state changes), object-oriented (objects bundle data and operations), functional (mathematical functions), and logic (facts and rules)

Program Design: Understanding the problem thoroughly, then developing an algorithm before writing any code

Program Development: Four sequential stages of writing, compiling, linking, and executing code

Testing: Comprehensive validation ensuring every line executes, branches work correctly, edge cases are handled, and logic is sound


Review Questions

Before moving to the next chapter, you should be able to answer:

  1. What are the six hardware components and their primary functions?
  2. How do system software and application software differ?
  3. Why did symbolic languages emerge as an improvement over machine language?
  4. What fundamental differences exist between the procedural and object-oriented paradigms?
  5. What are the two essential steps in program design?
  6. What happens during compilation and linking?
  7. Why is testing before deployment critical, and what are the three error types programmers must guard against?