[DRAFT]
Stuff that every programmer needs to know
What matters: how the computer works.
What every programmer focuses on: TDD, design patterns, monads, microservices, REST APIs, reactive UIs...
Programming paradigms
There are three paradigms for general-purpose programming: procedural, object-oriented, and functional. There are statically typed and dynamically typed languages in each of the three.
The biggest mistake you can make here is to commit yourself to a single paradigm/language and try to solve every problem with it, when a different paradigm would work better.
Object-oriented people suffer the most from this. If you're already deep into design patterns, TDD, SOLID principles, etc., then I'm sorry, you're lost in the woods. That's where 90% of the industry is, so you probably shouldn't feel too bad about it. The bad news is that it will be hard to unlearn all that and get back on track, and it will probably take a while before you even start.
The ladder of abstraction
Your computer is like a pyramid of abstraction. At the base is the hardware. On top of it runs the operating system, which runs your program directly (if it's compiled) or an interpreter that runs your program. Having an idea about what needs to happen at each level of the pyramid in order to, say, put a character on the screen, or save a record in a database, will make you better than most programmers, even if you only work in a high-level language/environment and have little control over what happens in the layers below.
Operating systems
Operating systems do 3 things: 1) they put common APIs over diverse hardware components so that you don't have to program specifically for different makes and models of the same type of component, 2) they partition the system's limited resources (CPU, RAM, disk) so that multiple programs can run at the same time without stepping on each other or stealing information from each other, and 3) they provide a UI (called a shell) from which to start and kill programs.
Things to know here: how the hardware helps the OS run multiple processes at the same time, swap to disk, prevent processes from looking at each other's memory or crashing the OS, run the same program from different locations in memory, etc.
How the CPU works
Developing a mental model about CPU, RAM, disk, network and GPU will get you to a whole different level than most programmers. The relevant things are:
how to code for the CPU cache -- by far the most important thing in a post-Moore's-Law era. See for yourself how important it is with the traversal sketch after this list.
branch prediction -- this can get you guilt-free inner-loop conditionals, but only if you know how it works.
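To see the cache effect with your own eyes, here's a minimal sketch: both functions sum the same 2D array, but one walks memory in order while the other jumps a whole row's worth of bytes per step and misses the cache. Array size and absolute timings are machine-dependent; the ratio between the two is the point.

```c
#include <stdio.h>
#include <time.h>

#define N 4096
static float m[N][N];

static float sum_rows(void) {        // fast: sequential, cache-friendly
    float s = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += m[i][j];
    return s;
}

static float sum_cols(void) {        // slow: strided access, cache-hostile
    float s = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += m[i][j];
    return s;
}

int main(void) {
    clock_t t0 = clock(); float a = sum_rows();
    clock_t t1 = clock(); float b = sum_cols();
    clock_t t2 = clock();
    printf("rows: %ld ticks, cols: %ld ticks (%g %g)\n",
           (long)(t1 - t0), (long)(t2 - t1), a, b);
    return 0;
}
```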
If you program in JavaScript, you can still make relevant portions of your program run reasonably fast, if you learn a few basic things about JIT compilers (which run on somewhat similar principles to the superscalar CPUs we use today).
Mentally model linear math on moving values
Most of programming is done with linear math, i.e. the four basic operations of addition, subtraction, multiplication and division. The difference is that in programming these operations work on moving values (variables that change over time), so you need to think about them in terms of their general effect on a whole range of numbers rather than on a single number.
This way addition can be visualized as translating (moving the number) to the right (or up or down, whatever), subtraction is moving in the opposite direction, multiplication is scaling or zooming, making things proportionally bigger, while division is making things proportionally smaller. This is all you can do in a one-dimensional space, and most programming happens in various 1D spaces. If you put all these 1D transformations together into a single function, you get the lerp() function, aka linear interpolation, aka cross-multiplication, which you can conceptualize as proportionally mapping a value from one interval to another (so it can do both translation and scaling).
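Here's that function as a minimal C sketch (the name map() is mine, not a standard library function):

```c
#include <stdio.h>

// lerp generalized to interval mapping: proportionally map x from the
// interval [a0, a1] to the interval [b0, b1]. It translates and scales
// in one step, which is all there is to 1D linear math.
double map(double x, double a0, double a1, double b0, double b1) {
    return b0 + (x - a0) * (b1 - b0) / (a1 - a0);
}

int main(void) {
    printf("%g\n", map(5, 0, 10, 100, 200));  // 150: halfway in, halfway out
    printf("%g\n", map(0.5, 0, 1, 40, 60));   // 50: this is the classic lerp
    return 0;
}
```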
Floating point semantics
Some people get confused by floating point because it can't store decimal fractions like 0.1, 0.2, 0.3 exactly, which results in seemingly odd behavior like 0.1 + 0.2 != 0.3. This makes them (wrongly) conclude that floating point is bad at representing money because it uses base-2 instead of base-10 for fractions. In reality the choice of base for fractions doesn't matter: base-10, base-6 or base-13 would be no better or worse than base-2 at storing fractions, because floating point is not about exactness (choosing to represent some fractions perfectly at the expense of others), it's about accuracy (getting as close as possible to the quantity you're after, given the fixed number of bits that you have). So it doesn't matter that 0.1 + 0.2 is 0.30000000000000004 inside, because up to the 2 decimals that you need for money, these numbers behave exactly like decimal fixed-point numbers. So question your expectation of exactness, give it up because it's wrong anyway (in programming terms, replace a == b with abs(a - b) < epsilon), and floating point will never bother you again.
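You can check all of this in a few lines of C:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double a = 0.1 + 0.2;
    printf("%.17g\n", a);                    // 0.30000000000000004
    printf("%d\n", a == 0.3);                // 0: exact comparison fails
    printf("%d\n", fabs(a - 0.3) < 1e-9);    // 1: epsilon comparison works
    printf("%.2f\n", a);                     // 0.30: exact at money precision
    return 0;
}
```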
Division
Division is still about 4 times slower than multiplication on current CPUs, but note that dividing by a constant can sometimes be turned into a multiplication by most C compilers and some JIT compilers. Dividing by a constant power of two is always converted (into a shift).
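For instance (the exact machine code is compiler- and target-dependent, but gcc and clang at -O2 behave as the comments say):

```c
#include <stdint.h>

uint32_t div8(uint32_t x)  { return x / 8; }   // becomes a shift: x >> 3
uint32_t div10(uint32_t x) { return x / 10; }  // becomes a "magic number"
                                               // multiply plus a shift
uint32_t divn(uint32_t x, uint32_t n) {        // real division instruction:
    return x / n;                              // n is unknown at compile time
}
```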
Computer latency chart
Moore's Law is dead
RAM is the new HDD
How to code for L1/L2
Closures + lexical scoping + gc + hash maps
Probably the most useful tech combo in programming history. To exploit it properly you need 1) a language that gets it right, like Lua or JavaScript, and 2) a mindset of constant refactoring and finding new ways of putting closures to work, because I don't think you can learn this from a recipe book.
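C has no closures, but it shows plainly what one is made of. Here's a hypothetical sketch of what Lua or JavaScript build for you behind the scenes when you close over a variable:

```c
// A closure is, mechanically, a function plus a heap-allocated environment
// holding the captured variables; the GC frees the environment when the
// closure is no longer reachable (here we free it by hand).
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int count;                    // the captured variable (Lua's "upvalue")
} CounterEnv;

static int counter_next(CounterEnv *env) { return ++env->count; }

int main(void) {
    CounterEnv *c = calloc(1, sizeof(CounterEnv));  // "make_counter()"
    printf("%d\n", counter_next(c));  // 1
    printf("%d\n", counter_next(c));  // 2: state survives between calls
    free(c);                          // what the GC would do for you
    return 0;
}
```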
Data structures
Know how to build a binary tree and navigate it using recursion, breadth-first, depth-first.
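A minimal sketch, assuming integer keys; note that recursion gives you depth-first for free (it uses the call stack), while breadth-first needs an explicit queue, which is the point of the exercise:

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct Node {
    int value;
    struct Node *left, *right;
} Node;

static Node *node(int value, Node *left, Node *right) {
    Node *n = malloc(sizeof(Node));
    n->value = value; n->left = left; n->right = right;
    return n;
}

// depth-first, in-order: left subtree, then the node, then right subtree
static void dfs(const Node *n) {
    if (!n) return;
    dfs(n->left);
    printf("%d ", n->value);
    dfs(n->right);
}

int main(void) {
    Node *root = node(2, node(1, NULL, NULL), node(3, NULL, NULL));
    dfs(root);        // prints: 1 2 3
    printf("\n");
    return 0;
}
```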
If you're in C, definitely make a hash map. Make a linked-list-based one and an open-addressing one. Make a bump allocator.
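Here's roughly what the open-addressing variant boils down to, sketched as a fixed-size integer set (a map just adds a parallel values array; resizing is left out, so keep the load factor low):

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define CAP 1024                      // power of two, so hash & (CAP-1) wraps
static uint32_t slots[CAP];           // 0 marks an empty slot (key 0 reserved)

static uint32_t hash32(uint32_t x) {  // a simple integer mixing function
    x ^= x >> 16; x *= 0x45d9f3bu;
    x ^= x >> 16; x *= 0x45d9f3bu;
    x ^= x >> 16;
    return x;
}

static bool set_insert(uint32_t key) {   // false if already present
    uint32_t i = hash32(key) & (CAP - 1);
    while (slots[i] != 0) {
        if (slots[i] == key) return false;
        i = (i + 1) & (CAP - 1);         // linear probing: try the next slot
    }
    slots[i] = key;
    return true;
}

int main(void) {
    printf("%d\n", set_insert(42));   // 1: inserted
    printf("%d\n", set_insert(42));   // 0: duplicate found in O(1)
    return 0;
}
```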
Databases
CAP theorem
distributed systems fallacies
ACID
database normalization
Get yourself out of the microservices/NoSQL/map-reduce mindset and other such garbage.
Recursive-descent parsing
Regular expressions
Concurrency
Threads and synchronization to avoid race conditions. What's up with thread-safe libraries.
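A minimal race-condition demo with POSIX threads (compile with -pthread): two threads hammer a shared counter, and the mutex is what makes the increment atomic. Remove it and the final count comes out wrong.

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);    // without this, updates get lost
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("%ld\n", counter);  // always 2000000 with the mutex
    return 0;
}
```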
If you're into the web async craze, definitely check out the cool 1958 concept of a coroutine, and note how few languages get that right in 2020.
The time complexity of algorithms
One thing about time complexity is that non-linear growth patterns like geometric (quadratic, cubic), exponential and logarithmic are quite non-intuitive for humans. They come up mostly in search algorithms, where you need to traverse the same data more than once to figure something out (say, finding the position of a substring in a string), or where the search space is multi-dimensional (as in a raytracing algorithm or a database join) and the naive algorithm has quadratic or worse complexity, so you need a different algorithm altogether. Many times though, quadratic complexity comes from poor code organization (computing invariant portions of an algorithm redundantly in the inner loop) and can be avoided by just storing and reusing intermediary results, aka "trading space for time". You can also trade time for time, for instance when you sort your input once in O(n log n) time and then search it many times with binary search in a very non-intuitive O(log n) time (it takes 20 steps to find a value in an array of one million, and only 10 steps more for an array of one billion).
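Here's the time-for-time trade in C, using the standard qsort() and bsearch():

```c
// Sort once in O(n log n), then answer any number of membership
// queries in O(log n) each with binary search.
#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void) {
    int a[] = { 42, 7, 19, 3, 88, 61 };
    size_t n = sizeof(a) / sizeof(a[0]);
    qsort(a, n, sizeof(int), cmp_int);           // O(n log n), paid once

    int key = 19;
    int *hit = bsearch(&key, a, n, sizeof(int), cmp_int);  // O(log n)
    printf(hit ? "found\n" : "not found\n");
    return 0;
}
```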
Another thing to mention is the humble hash map, probably the best invention in computing, ever, and a great time complexity killer. Every program uses one somewhere, and it's the bread'n'butter data structure of most high-level languages for this reason.
Leveraging compiler optimizations
Generally speaking, you want to know about compiler optimizations in order to help the compiler generate faster code. But have you thought about leveraging this knowledge to simplify the code rather than to optimize it? Here are some examples (with a sketch after the list):
constant expressions are folded at compile time, so write them out in the code instead of pre-computing them with a calculator.
compilers can hoist constant branches out of loops so you can actually specialize loop kernels with if/else branches inside loops instead of hoisting the branches yourself in the code.
simple math is faster than memory access, so storing the results of simple computations in temp vars to avoid computing them twice may actually have worse performance (because of register spills + cache misses) than just repeating the math.
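A sketch of the first two points. Constant folding happens at any optimization level; the loop case is compiler- and flag-dependent (gcc, for instance, enables -funswitch-loops at -O3):

```c
// Constant folding: write the math, let the compiler do the arithmetic.
#define SAMPLE_RATE 48000
static const double dt = 1.0 / SAMPLE_RATE;   // folded at compile time

double step_time(int i) { return i * dt; }

// Loop unswitching: `mode` is invariant, so the compiler can hoist the
// branch out and emit two specialized loops; you keep one readable loop.
void scale(float *v, int n, int mode) {
    for (int i = 0; i < n; i++) {
        if (mode == 0)
            v[i] *= 2.0f;
        else
            v[i] *= 0.5f;
    }
}
```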
Stuff that you shouldn't bother with
If you go by what academia produces every year, you'll get a very skewed picture of what's important and what's not in computer science, simply because researchers have very different goals and incentives than most programmers. Their career depends on the number (not the quality or relevance, because how can you measure that?) of papers they produce each year, and some problems in computer science, like parsing or type systems, although fertile ground for papers, are of little practical utility.
Stuff that every IT manager needs to know, but never does
Conway's Law
Brooks's Law
Stuff that every economist/psychologist needs to know
People follow incentives.