What CS50 Taught Me About How Computers Think
CS50 was more than just a course—it was a deep dive into the inner workings of computers and AI. As I continue exploring neural networks and artificial intelligence, a few lessons from CS50 stand out, reshaping how I see technology.
1. Everything is Just Numbers
Computers don’t understand words, images, or sounds the way humans do. At their core, everything boils down to binary—a language of 0s and 1s. Why? Because computers rely on transistors, tiny electrical switches that are either ON (1) or OFF (0).
Numbers: Decimal 7 is 111 in binary, while 01000001 represents the letter ‘A’ in ASCII.
Text & Unicode: ASCII maps English characters, while Unicode enables global languages and even emojis.
Images & Colors: Pixels store color as RGB values, typically 24 bits per pixel (8 bits each for red, green, and blue).
Sound & Video: Audio is converted into numerical waveforms, and videos are just a rapid sequence of image frames.
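Each of these representations can be poked at directly in Python; here is a quick sketch showing the examples above (the binary form of 7, the ASCII code for 'A', a Unicode emoji, and one 24-bit RGB pixel packed into a single integer):

```python
# Numbers: decimal 7 in binary
print(bin(7))                    # 0b111

# Text: the ASCII code for the letter 'A'
print(ord("A"))                  # 65
print(format(ord("A"), "08b"))   # 01000001

# Unicode: an emoji is just a bigger number (its code point)
print(ord("🙂"))                 # 128578

# Images: one 24-bit pixel is three 8-bit color channels
r, g, b = 255, 165, 0            # orange
pixel = (r << 16) | (g << 8) | b # pack red, green, blue into one int
print(hex(pixel))                # 0xffa500
```

Layer by layer, it really is numbers all the way down.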
Understanding this blew my mind—everything we interact with on screens is just layers of numbers working together.
2. Algorithms: The Secret Behind Speed
Computers don’t just store data; they process it efficiently. One of the biggest eye-openers was learning about search algorithms and how they make computing so fast.
Linear Search (O(n)): Checks each item one by one (slow for large data sets).
Binary Search (O(log n)): Repeatedly cuts a sorted dataset in half, making it dramatically faster—doubling the data adds only one extra step.
The phone book analogy illustrated this perfectly: Instead of flipping through names page by page, you jump to the middle, then narrow your search—this is why binary search is incredibly powerful.
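The phone-book idea translates directly into code. Here is a minimal sketch of binary search over a sorted list of names (the function name and sample data are my own, for illustration):

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2      # open the "phone book" in the middle
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1            # target is in the right half
        else:
            high = mid - 1           # target is in the left half
    return -1

names = ["Alice", "Bob", "Carol", "David", "Erin"]
print(binary_search(names, "David"))  # 3
print(binary_search(names, "Zoe"))    # -1
```

Note the precondition: the list must already be sorted, just as a phone book is alphabetized—that ordering is what lets you discard half the data at every step.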
Algorithms are the backbone of computing, making AI and complex systems possible.
3. AI & Neural Networks: Teaching Machines to Think
One of my biggest takeaways was understanding how AI learns. Unlike traditional programming (where you explicitly tell a machine what to do using "if-else" rules), machine learning allows AI to learn patterns from data.
Neural Networks are inspired by the human brain, using layers of "neurons" (mathematical functions) to process and improve predictions.
Large Language Models (LLMs) like ChatGPT don’t “think”: they are trained on vast datasets and then predict the most likely next word, one token at a time.
AI isn’t magic—it’s just math at scale, processing millions of probabilities to generate human-like responses.
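To make “math at scale” concrete, here is a single artificial neuron in a few lines of Python: a weighted sum of inputs passed through a sigmoid activation. The weights and inputs here are made-up illustrations, not trained values—real networks stack millions of these and learn the weights from data:

```python
import math

def neuron(inputs, weights, bias):
    """One 'neuron': weighted sum of inputs, squashed to (0, 1) by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid activation

# Two hypothetical inputs with hand-picked weights
print(neuron([1.0, 0.5], weights=[0.8, -0.4], bias=0.1))  # ≈ 0.67
```

Everything an LLM does is this operation, repeated at enormous scale.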
Learning this made me wonder: If AI can already process information this efficiently, what’s next? The future of AI isn’t just about making machines smarter; it’s about how we harness that intelligence.
Final Thoughts
CS50 didn’t just teach me how computers work—it changed the way I think about problem-solving and efficiency. From binary to AI, it’s all about breaking things down into smaller, logical steps. And in that process, I discovered something amazing:
Computers aren’t just machines. They’re proof that numbers, when used right, can create intelligence.