Commit 9b1646c

docs: more blogs

ashgw committed Dec 5, 2024
1 parent ae63f38 commit 9b1646c
Showing 33 changed files with 7,271 additions and 2,433 deletions.
8,010 changes: 6,042 additions & 1,968 deletions pnpm-lock.yaml

Large diffs are not rendered by default.

56 changes: 22 additions & 34 deletions public/blogs/ab-testing.mdx
@@ -2,59 +2,47 @@
title: Pitfalls of A/B Testing
seoTitle: Pitfalls A/B tests for complex decisions
summary: The overreliance on A/B tests for complex decisions
isReleased: true
isSequel: false
lastModDate: 2023-07-22T09:15:00-0401
firstModDate: 2023-07-22T09:15:00-0401
minutesToRead: 5
tags:
- 'testing'
---

<C>
A/B testing is often hailed as the holy grail of data-driven decision-making. It’s that one magical method that promises to provide clear, actionable insights on everything from design tweaks to feature changes. **But here’s the problem:** it’s not as foolproof as people think. In fact, it can actually be downright misleading.
</C>

<H2>The Illusion of Clear Winners</H2>
<C>
Let’s get real for a second: sometimes, conclusions are presented as definitive winners, but the truth is far murkier. Those “winners” are typically based on the best interpretation of incomplete data, data that’s often misinterpreted. And what’s worse, people don’t want to admit that the decisions made weren’t based on a thorough analysis of metrics, but rather on shallow assumptions crafted to justify a point of view.

Why? Because the selective use of data often benefits certain individuals who need the results to support their own agenda. When this happens, you’re not making decisions based on a comprehensive understanding of user experience; you’re making them based on a biased, incomplete picture of it.
</C>
<H2>What The Hell Are You Talking About?</H2>
<C>
Ok hear me out, you’ve got a product with several complex features, each costly to maintain and resource-heavy. Two stakeholders are at odds: one says remove the features, the other argues for fixing them. So, they decide to settle it with an A/B test. Smart, right? They’ll see which way the users lean, based on data.

They start removing features, one at a time, testing each change. **First test:** feature removed, users remain unaffected. **Second test:** no change. **Third test:** same result. They keep going, gradually removing features based on the lack of visible user dissatisfaction.

</C>

<C>
But then… **users start leaving.** Fast. The gradual removal of features has begun to erode the user experience **cumulatively**. Sure, each individual test showed no immediate impact, but the bigger picture is what A/B tests fail to capture. User satisfaction didn’t drop in isolation; it happened over time as the product’s value dwindled. But this wasn’t caught in the tests, because those tests only measured the short-term, isolated effects.
</C>
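
<C>
To make the trap concrete, here’s a quick back-of-the-napkin simulation. Every number in it is invented purely for illustration, but it shows how a series of tests can each report “no significant change” while churn quietly climbs:
</C>

```python
# Illustrative toy numbers only: each feature removal nudges churn up by an
# amount too small for a single test to flag, yet the total damage is large.
baseline_churn = 0.050        # 5% monthly churn before any removals
per_feature_effect = 0.004    # each removal adds 0.4 percentage points
detection_threshold = 0.010   # shifts under 1 point look like noise

churn = baseline_churn
for test in range(1, 6):
    new_churn = churn + per_feature_effect
    delta = new_churn - churn
    verdict = "no significant change" if delta < detection_threshold else "regression!"
    print(f"Test {test}: churn {churn:.1%} -> {new_churn:.1%} ({verdict})")
    churn = new_churn

print(f"Cumulative shift: +{churn - baseline_churn:.1%}, missed by every single test")
```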


<H2>The Fallacy of "Enough Data"</H2>
<C>
The main flaw in this approach? The assumption that enough data was collected to make a decisive call. Here’s the truth: users might be dissatisfied, but they won’t always show it immediately. They’ll keep using the service for a while, searching for alternatives to compensate for the missing features. Until one day, they’ve been forced to adapt so much that they jump ship entirely.

That behavior? **Not picked up** in an A/B test. Because the test doesn’t account for the long-term, **evolving** experience of the user. It doesn’t capture how frustration builds up over time. So yeah, A/B testing in this scenario is utterly inadequate.
</C>
<H2>Where A/B Testing Works (and Where It Doesn’t)</H2>
<C>
Don’t get me wrong: A/B testing has its place. It’s great for simple changes, stuff like testing which CTA converts better, or optimizing a landing page layout. But when it comes to evaluating the **overall user experience**, especially around complex feature changes, A/B testing starts to break down. It just can’t capture the full picture.

A/B testing isn’t all bad. It’s a useful tool, **when used for the right reasons.** But applying it to make complex, high-stakes decisions is like putting a band-aid on a gunshot wound. It can reinforce preconceived notions, but it’s unlikely to reveal the nuanced truths of user satisfaction and behavior.
</C>

<H2>Conclusion</H2>
<C>
When you’re dealing with intricate user experiences, the picture isn’t as simple as testing one button against another. The truth is messy, multifaceted, and often not fully captured in a single test.

So, the next time someone suggests you rely solely on A/B testing for complex decisions, hit pause. Approach with caution, and remember that while A/B testing is valuable, it’s not the whole picture.
</C>

22 changes: 9 additions & 13 deletions public/blogs/async-python.mdx
@@ -6,7 +6,7 @@ isReleased: true
isSequel: false
lastModDate: 2020-02-02T09:15:00-0401
firstModDate: 2020-02-02T09:15:00-0401
minutesToRead: 9
tags:
- 'python'
- 'async'
@@ -21,15 +21,15 @@
So, what's the deal with async functions in Python? We all know they pause execution, but how does that actually work?
</C>
<H2>Iterators</H2>
<C>
An iterator serves as a programming object that facilitates the systematic traversal of a <L href="https://en.wikipedia.org/wiki/Container_(abstract_data_type)">container</L>, granting access to its elements one at a time. It maintains the state of the traversal, allowing the iterator to pause execution after processing a specific item. To continue, a call to the `next()` method (which is typically named by convention) is made. This call prompts the iterator to resume execution and retrieve the next element in the sequence. This iterative process persists until the iterator reaches the end of the container, and this approach is commonly referred to as lazy evaluation.
</C>
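
<C>
For instance, here’s that pause-and-resume behavior with a plain list (a minimal sketch, using nothing but built-ins):
</C>

```python
nums = [10, 20, 30]
it = iter(nums)      # ask the container for an iterator

print(next(it))      # 10 -- traversal "pauses" here...
print(next(it))      # 20 -- ...and resumes on each next() call
print(next(it))      # 30

# One more call and the iterator signals it is exhausted:
try:
    next(it)
except StopIteration:
    print("end of the container reached")
```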
<S3/>
<C>
The key concepts here are the actions of **pausing**, **resuming**, and **awaiting** the next element. These notions become relevant and meaningful when correlated with the concept of asynchronous functions: the idea of pausing and resuming execution aligns with the asynchronous nature of handling tasks, which allows the program to efficiently manage and switch between various operations without blocking the entire execution flow. The term "await" further emphasizes this waiting aspect, where the program can await the completion of a specific asynchronous task before proceeding; we call this task a ``Future`` in Python. More on this <L href="#related-objects">later.</L>
</C>
<H2>How Do I Create An Iterator</H2>
<C>
To define your own iterator you have to adhere to the iterator <L href='/blog/python-protocols'>protocol</L>, meaning you have to define the `__iter__` and `__next__` dunder methods.
</C>
<H3>**``__iter__``**</H3>
<C>
@@ -81,14 +81,14 @@ except StopIteration:
language="python"
showLineNumbers={false}
/>
<C>Here, the `__iter__` method <L href="https://docs.python.org/2/library/functions.html#open">opens</L> the file, and the `__next__` method reads one line at a time until the end of the file is reached. This approach ensures that only one line is loaded into memory at a time; by loading parts of the file only upon request (the request being a call to the `next()` method), it stays memory efficient, even for large files.</C>
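
<C>
A minimal sketch of the kind of line-by-line reader described above (the class name and file path are illustrative, not the post’s original listing):
</C>

```python
class FileLineIterator:
    """Reads a text file lazily, one line per next() call."""

    def __init__(self, path: str):
        self.path = path
        self.file = None

    def __iter__(self):
        self.file = open(self.path)   # opened only when iteration starts
        return self

    def __next__(self) -> str:
        line = self.file.readline()
        if line == "":                # empty string means end of file
            self.file.close()
            raise StopIteration
        return line.rstrip("\n")

# Only one line is ever held in memory at a time:
for line in FileLineIterator("big.log"):
    print(line)
```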
<H2>Async Iterators</H2>
<C>
Async iterators extend the concept of iterators to asynchronous operations. They implement two key methods:
</C>
<H3>**``__aiter__``**</H3>
<C>
Similar to its synchronous counterpart, this method returns the async iterator object itself, which makes it compatible with `async for` loops instead of traditional `for` loops.
</C>
<H3>**``__anext__``**</H3>
<C>
@@ -99,11 +99,11 @@
The invocation of `__anext__` involves the use of the `async` and `await` keywords.
</C>
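
<C>
Here’s a minimal sketch of both dunder methods working together (the class and its timing are illustrative):
</C>

```python
import asyncio

class Countdown:
    """Async iterator: yields n, n-1, ..., 1, pausing between items."""

    def __init__(self, n: int):
        self.n = n

    def __aiter__(self):
        return self                   # the object is its own async iterator

    async def __anext__(self) -> int:
        if self.n <= 0:
            raise StopAsyncIteration  # async analogue of StopIteration
        await asyncio.sleep(0.1)      # stand-in for real async work (I/O, etc.)
        value = self.n
        self.n -= 1
        return value

async def main():
    async for value in Countdown(3):
        print(value)                  # 3, 2, 1

asyncio.run(main())
```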
<H2 id="what-is-an-awaitable?">What Exactly Is An **``awaitable``**</H2>
<C>
An ``awaitable`` is an object that can be used with the `await` keyword within an async function. It represents an operation that may not be immediately completed, such as an asynchronous task or a <L href="#related-objects">``Future``</L> object.
</C>
<H2>How Do I Define An **``awaitable``**</H2>
<C>
Define an object that implements the `__await__` dunder method, which should return a generator object.
</C>
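
<C>
A bare-bones sketch of a custom awaitable (the class name and delay are made up; the delegation trick in `__await__` is the interesting part):
</C>

```python
import asyncio

class Delay:
    """A hand-rolled awaitable: __await__ must return a generator/iterator."""

    def __init__(self, seconds: float, result: str):
        self.seconds = seconds
        self.result = result

    def __await__(self):
        # `yield from` turns __await__ into a generator function and
        # delegates the actual waiting to asyncio's machinery.
        yield from asyncio.sleep(self.seconds).__await__()
        return self.result

async def main():
    print(await Delay(0.1, "done"))   # prints "done" after ~0.1s

asyncio.run(main())
```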
<H2>What Are Generators ?</H2>
<C>
@@ -115,7 +115,7 @@
When encountering the `yield` statement, an event occurs: the current state of the function is saved, execution pauses, and the yielded value is handed back to the caller.
</C>
<S3/>
<C>
Upon calling the generator again, whether through a `for` loop or the `next()` function, the generator resumes execution from where it was paused by the last '`yield`'. This resumption allows the generator to produce the next value in the sequence. This process repeats until the generator function completes or encounters a `return` statement.
</C>
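
<C>
In miniature (an illustrative sketch of the pause/resume cycle):
</C>

```python
def countdown(n: int):
    while n > 0:
        yield n        # state is frozen here...
        n -= 1         # ...and execution resumes here on the next request

gen = countdown(3)
print(next(gen))       # 3
print(next(gen))       # 2
print(next(gen))       # 1 -- one more next() would raise StopIteration
```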
<S3/>
<C>
@@ -144,7 +144,3 @@ showLineNumbers={false}
<C>
I've referred to `Future`, so what's a future? It's an object that is both `iterable` and `awaitable`. You might have also encountered an `AsyncIterable`: an object that defines an `__aiter__` method and returns an `AsyncIterator`. Lastly, there's `AsyncGenerator`, which is just like `Generator`, except it inherits from `AsyncIterator` instead of `Iterator`.
</C>
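
<C>
A tiny check of those relationships (illustrative; runs on any recent CPython):
</C>

```python
import asyncio
from collections.abc import AsyncGenerator, AsyncIterator, Awaitable

# A Future really is both awaitable and iterable:
loop = asyncio.new_event_loop()
fut = loop.create_future()
print(isinstance(fut, Awaitable))     # True
print(hasattr(fut, "__iter__"))       # True (legacy `yield from` support)
loop.close()

# An async generator is itself an AsyncIterator:
async def agen():
    yield 1

g = agen()
print(isinstance(g, AsyncGenerator))  # True
print(isinstance(g, AsyncIterator))   # True
```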
48 changes: 48 additions & 0 deletions public/blogs/attention-to-detail.mdx
@@ -0,0 +1,48 @@
---
title: Attention to Detail
seoTitle: Attention to Detail Is The Real Difference Between Good and Bad Engineers
summary: The real difference between good and bad engineers
isReleased: true
isSequel: false
lastModDate: 2024-07-05T08:15:00-0400
firstModDate: 2024-07-05T08:15:00-0400
minutesToRead: 8
tags:
- 'software engineering'
- 'best practices'
- 'career advice'
---

<C>
What separates a good engineer from a bad one isn’t the number of frameworks they know or how many years they’ve worked in the field—it’s their attention to detail. It’s not about survival in high-pressure environments or pumping out code under unreasonable deadlines. It’s about what happens when you’re given the time and resources to do your job right. The real question is: how much do you care about getting it right?

There’s a lot of noise in the industry about tech stacks, years of experience, and trendy frameworks. These metrics are misleading at best. Bragging about “five years of React” or “deep expertise in TypeScript” doesn’t tell me anything meaningful. Technology evolves, but the fundamentals don’t. Frameworks and libraries come and go; they’re just tools, iterations of the same concepts. What matters is how you approach solving problems, not how many buzzwords you can list on your resume.
</C>

<C>
College degrees often fall into the same trap. Most academic programs don’t teach students how to be great engineers. Instead, they teach how to pass exams and cover surface-level knowledge of tools that students could learn independently in days. The real value lies in teaching the concepts—the hard stuff like systems design, algorithmic thinking, and how to build complex systems from the ground up. Instead, universities often churn out graduates who have to learn the important things on their own.
</C>

<C>
Attention to detail is what sets good engineers apart. It’s not about nitpicking or aiming for perfection at the expense of progress. It’s about taking pride in your work and ensuring it’s done properly. In front-end development, this might mean ensuring the user interface matches the design exactly, with consistent margins, proper alignments, and functioning interactions. In back-end development, it’s about writing efficient, scalable, and secure code. It’s about thinking through edge cases, preparing for future growth, and handling potential failures gracefully. The details might not always be visible, but they’re what make a system reliable and maintainable.
</C>

<C>
When an engineer cares about these things, the results show. A product with attention to detail is stable, user-friendly, and scalable. But when the details are neglected, the cost becomes apparent over time. Onboarding new developers becomes a struggle because nothing is documented. Teams waste time fixing bugs that should never have been introduced. Deadlines slip as the technical debt from sloppy work piles up. It’s not just inefficient—it’s expensive.
</C>

<C>
The key to avoiding these pitfalls is to prioritize process over individual heroics. Documentation should be non-negotiable. It’s not enough to write clean code; you need to document the decisions behind it, the context in which it was made, and the rationale for the approach. This ensures that anyone—whether they join the project tomorrow or two years from now—can understand the system without relying on tribal knowledge.
</C>

<C>
Teams should also rotate responsibilities. No one person should “own” a part of the codebase. This isn’t just about preventing bottlenecks; it’s about building resilience. When multiple people understand a system, the team can function even if someone leaves. Centralizing discussions is equally important. Every decision, update, or issue should be logged in a shared space—like a ticketing system—so the entire team has access to the history and context of the project.
</C>

<C>
Finally, pay people what they’re worth. Engineers who care about details aren’t just workers—they’re craftsmen. They take pride in their work and want to do it well, but that commitment needs to be valued. Pay based on results and expertise, and as team members grow, their compensation should reflect that growth. People who are treated well and respected for their skills are more motivated to deliver their best work.
</C>

<C>
Attention to detail is the real differentiator in this industry. It’s what ensures a product doesn’t just meet expectations but exceeds them. It’s not about years of experience, fancy degrees, or memorizing the latest frameworks—it’s about putting in the effort to do things right. When you hire for attention to detail, document your decisions, and build processes that distribute knowledge and responsibility, you create a team and a codebase that can withstand anything. That’s how you build something that lasts.
</C>