                 ** The art of Engineering **
	    
	            * maturing prototypes *
		  
		  

                             Spark

ideas, initial exploration, viability, energy, concept model, prototypes


The physicist programmer might want to make a computer simulation
of a given model he has in mind; the mathematician could give a try
to coding his ideas into computer-generated graphical output.
The hobbyist programmer could want to build a small tool, a video
game, a programming language, or whatnot. So a small program pops
from their fingers, out of pure will, not mandate, out of desire,
not of request, not during strict workday hours; it might just help
to take something out of insomnia. It comes not to support a business
or an organization; it comes without a budget, a schedule, without supervisors,
managers, even users. It comes not to satisfy a menial task required by a
big corp, not to satisfy an external demand, not to have a definite place to
fill in a choral construction; it doesn't start from a request of a huge paid
engineering team collaborating under the push of market demands.

But it starts, nonetheless, and if it runs kind of right, especially under minimal
expectations, if it shows something interesting, even if barely working,
the physicist, the hobbyist, the mathematician gets something back.
Feedback, a desired or unsuspected response, a digital oracle or an
abstract machine with its fuel and its work. It gives back a perspective
on the chosen topic; the program returns a panoramic view to the curious
in exchange for the tribute paid in lines of code. And crucially, it gives back
energy to go deeper.

The mental model we had gets more definite, it distills richer questions 
for the oracle, the machine envisioned develops and starts rolling, and 
the creature demands more exploration.

This voracious creature feeds from fingers, mind, time, and code.



                          then light

concept model, prototypes, prototype iteration, viability exploration,
minimality, flow, energy, version control systems


The closer we are to the concept model and to the prototype,
the lighter we fly: bare bones, fast to code, exploring alternatives,
carelessly discarding non-promising paths and iterating, keeping all
things simple, pure KISS, architecture to the point of almost
not existing, with complete disregard for future changes, expansions,
and integration with other components. To fly we need to travel lightweight.
To get a grasp and a taste, to satisfy our kraftlust, to gain impulse,
to go head first for an optimistic maybe or even an optimistic yes:
if you are like me, a folder will suffice, a zero-lag editor
for programmers, and a non-verbose scripting language we have known
for ages, one that doesn't nag us with type declarations, resource-devouring
slow IDEs, or slow compilers.

No. Just a folder, an editor, and a script.

Then the spark, the file, divides like that magic cell which
started life. Functions are born. Classes enclose kindred functions
like villages. Our spark file gets uncomfortable and splits.

We are moody creatures. 

No corp is taming us to standard compliance. An anarchic spirit 
flows in the code. No one rules our little digital world.

As the piece by Michael Nyman claims:
the heart asks pleasure first.

After a few days, if we are sensible fellows, we should feel
uneasy about making destructive mistakes, losing files, entering
a short but painful pathway or a rabbit hole we cannot
return from. So, a version control system enters the game and
we endure its simple albeit bureaucratic setup.


	                     Protos

simplicity, prototypes, real world, hard wiring, myopic perspective


If we are lucky, and our mind is, at least this time, focused, and our
adventure path is not too foggy, we have a broad direction to set off in.
If we have a dormant engineer side, a prototype idea can spawn. A prototype
which, after construction, in a benevolent environment and with the aid of crafty
hands, can solve a riddle, can confirm well enough the possibility of an interesting
machine, a useful one; it can offer a machine that at least partially works
as desired and could be the base for better design explorations. A prototype
creation that, with the proper components around and enough adjustments,
could bear a real world load and be elevated from a little experiment
born out of curiosity to a key piece in solving a major puzzle, ours or not.

A sensible prototype grows better and faster with simplicity
in mind. It is expected to be imperfect and incomplete, to work
in almost ideal circumstances; it is better built lightweight, keeping in
mind we might throw it away in a blink, so we must waste little
construction material.

Our prototype material is unfortunately carved from precious time mines.

It is by nature unstable, since its ways of functioning get clearer only
after we go deeper. It can forgo internal consistency and structure
in favor of a closer harvest season. It can neglect configuration since
it has to offer proof of utility in a key scenario, not multitudes.
It is better served by brief, partial, hardwired, passive, tightly crafted
little function-like components than by mature but complex components
that are hard to integrate.


		          Our Domains

unknowns, learning, new tools, new components, new languages,
incomplete documentation, boredom, stagnation, frustration,
building vs. learning, experienced use, optimism toward novelty,
manual procedures, flow breaking, back pedaling, building vs. using


Like a gaucho rastreador in la pampa, a kind of cowboy tracker in the remote
south of the 19th century, we know our land: what tools we have,
languages, storage, command line tools, API services, OS, components,
and math. We might venture beyond our land while prototyping; it's
a sane exercise to look for components to alleviate the task at hand.
We can find, with an online search, a simple tool which excuses us from
the mental effort and time to develop something we need, and, more
importantly, spares us the hassle of maintaining it after our primal
tinkering, of maturing it, of ensuring it meets its required quality.
We should, however, be aware that we can end up spending more time
and headaches trying to make some complex or badly documented
component work. That it can take us down a boring path, make us feel stalled,
hit too many walls, and frustrate us. We are not doing corp engineering;
we have limited energy, we are moody, and no paycheck outweighs
our disappointments.

Worse than taking hard-to-tame beasts to support ours is to be tempted
to mix, in a flaming crucible at high temperatures, our wish to learn
and our determination to build. It's fine to venture into learning through
a simple exploratory project, when the objective is to learn, to take
a shot at a shiny beast and ground our readings with naive experience. But
when our goal is to build something real, we must expect a delayed timeline
and subpar quality due to the lack of expertise we have, due to being
ignorant of the shadows below the shine.

If we are serious about our project, it is probably wiser to stick to
our domains and expand them with care, taking advantage of any gentle
fellow's work, someone who has been kind enough to share it, or spending on
what can save us much grinding. We can be bold, but we should be careful
enough to avoid too many cliffs that fool us into a land in which we are
not competent enough.

We might know that we will benefit from using containers to better
distribute our load in production, but if we are rookies we can end up
losing weeks to get anything done; the same goes for other sexy tech out there.
It can be tiring, disappointing, and unacceptable for our expected
resources and timeline. Take into account the trade-offs you make.
Avoid mixing too many unknowns: we are optimists when we start walking
unknown paths, and we get lost or doom our prototype with unreliable
dependencies. Be wise, know your land, delay complexity.


Big, flexible components, seen from the viewpoint of a little prototype, more
often than not result in poor integration and tedious manual procedures
that ruin rapid tinkering, trials, and much-needed adjustments. We must
be careful to avoid picking seemingly great components alien to our
experience, with voracious, error-prone setup times, complex class
hierarchies, partial documentation, or, simply put, too many ways of misuse.

The corp engineer, well seasoned and savant, will point a finger at us: do not
reinvent the wheel, you novice. And that's an undeniable part of the big
truth. But still, we insist: we can code out of the way our cranky ten-line
logger, our two-line database backup script, our fifty-line
tightly integrated profiler tool, if they amount to a tenth of the time and
the headaches we would have to invest to set up a complex and flexible standard
tool or component.
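
As a rough illustration of the scale we are talking about, here is a minimal
sketch of such a cranky ten-line logger in a scripting language; the file name
and log path are just placeholders.

    # tiny_log.py -- a deliberately crude ten-line logger, illustrative only
    import datetime

    LOG_PATH = "rogue.log"   # placeholder destination, adjust to taste

    def log(level, message):
        """Append a timestamped line; no handlers, no config, no ceremony."""
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        with open(LOG_PATH, "a") as fh:
            fh.write(f"{stamp} [{level}] {message}\n")

    # usage: log("INFO", "prototype started")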

Even if a component does a terrific job for a big or even a small corp,
we should not forget that a corp might count on experience with production
systems using it, on seasoned specialists who know the nuances of
those tool sets, allowing a quick setup and proper use.
They also have enough workforce to withstand and dilute the additional
load on each member of the team. And with a paycheck, a painful
development experience can be easily neglected; in our untamed
project, it might not be so.

We are then likely better off than them: we can custom tailor our own tools
as we need them, doing their eagle-focused minimal jobs, providing their
exact feedback where we need it, as long as we are perfectly aware that
we will eventually, or even likely, drop them later when the life
expectancy of the prototype or even the project expands. We must be
aware of the trade-off: if a tool gets too complex, hard to maintain,
or time consuming, then it's time to get rid of it. If what seemed
simple to craft proved not to be so, then go for something already done; we
nonetheless must be wise and go for something simple enough.

As always, we have to ponder the best strategy, whether to build or to take
something simple, unknown, complex, or flexible; and more importantly,
we must be open to reverting our decision when reality proves us wrong.

Then our child grows. Masons enter to craft their medieval game 
and a cathedral dominates the village. 

Our beloved prototype must suffer a brutal dissection. 


                                   Hardwired

exceptions, tangled functionality, APIs, isolation, simplicity,
future guessing, design patterns, static code, dynamic code, foreign
API independence, IoC


Unfortunately, there are aspects of what takes us beyond a prototype
that are inconvenient to isolate. Exception handling is one: it just
pollutes our code, it intermixes our success paths with peripheral
details. A less complex one, but still a mess, is proper logging.
We should refrain from being tempted to depend upon proxies forming
onions that expand the core with fine-grained control. We don't
use language extensions like aspect-oriented programming to enable
separation of our wired layers. It ends up hard to maintain anyway
and slows us down.

Whenever possible, we should try to separate the point from the
ellipsis, and isolate aspects that might later change: components
that we depend on, which introduce a foreign API and can pervade
our code base, like storage solutions that carry that downside.
We take care not to expose our dependencies too much throughout
our simple, relevant API.
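
A minimal sketch of that isolation, assuming for illustration that the
prototype currently persists data with the standard library's shelve module;
only this thin module knows about it, so the foreign API never leaks into
the rest of the code.

    # store.py -- the only file that knows which storage we currently use.
    # The rest of the code base calls save()/load() and never sees the foreign API.
    import shelve

    _DB_PATH = "prototype_data"   # illustrative path

    def save(key, value):
        with shelve.open(_DB_PATH) as db:
            db[key] = value

    def load(key, default=None):
        with shelve.open(_DB_PATH) as db:
            return db.get(key, default)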

That doesn't mean building complex hierarchies and settings files
to allow for the uncertain; we won't fool ourselves with a magic sphere
like a Manouche fortune teller. We use simple classes and methods, we prefer
the static over the dynamic, the particular over the generic, our
domain over external vocabulary. We shove the accessory
apart, though, when doable, always with simplicity, to avoid getting
punched repeatedly later on.

We hide our means to retain control of our domain API; we avoid
being dominated by an uncomfortable exoskeleton, an unwanted framework.
We avoid abstracting and interfacing early, we don't create factories
unless we must, we don't start with IoC: to avoid the cumbersome
and troubled experience of hard-to-debug code we stay simple,
we conquer with humble code. As the ancient master Confucius said:
stay low, the water conquers the rock, be like water.

               
                         Masonry


Role play starts. The artisan rogue programmer encounters
the mason architect. We are brought to the dawn of architecture.
Our beloved prototype must suffer a brutal dissection.

We give the increasingly amorphous albeit promising prototype a look
from a distance, from above, and from the cliff.

Architecture is built around and inside the prototype: major surgery.
Our inner mason now rules. Our rogue programmer loses, or at least
shares, his power. There is an ample set of orbiting satellites
that must be in place to make the thing realistically usable; some
typical ones: make it viable in a production environment, which
doesn't necessarily match the comfortable place we used to develop in,
under the expected load. Besides, when things go south, take proper care
of the unexpected places we end up in; be sure to be prepared for the
unavoidable fall, leaving enough hints for forensic analysis when our beast
hits the floor hard. We have to be sure we can meet the benchmarks that make
it usable. We have to make it play well with robust, noble, friendly beasts,
components that have proven not to be rusted by use and time. To be robust,
to fit a good design for our purpose, the code has to be refactored or,
deities save us, rewritten from scratch.

Then the flow, disappointingly, starts to slow down; someone needs to take
care of the boring stuff (hopefully not us, the quick and dirty rogue programmers).

As Gothic cathedrals needed the invention of buttresses to help
the thin walls withstand the outward forces of the cupola, and
pinnacles to direct forces downward, the program needs its structures,
albeit purely abstract ones, practical in every other sense.

The architect, as a person or just as a role, must enter the game
at the right time. If he comes too early we risk a myriad of useless
design attempts, since the prototype hasn't solidified, and we
lose precious, unpaid time. But if he enters too late, the code might
have gone wild, and refactoring could be harder than restarting from scratch.
If so, we stall, if not just abort the mission and throw a new skeleton
into the closet. Both mistakes entail risks.

But back to the role. The architect, the mason, imposes a load on
the rogue programmer; the mason robs him in order to raise his tribute
to the gods of kraftlust to an otherwise impossible height.

If the mason is wise, he doesn't take too much time away from building activities,
since the workers are few, if not just one. If the imposed structure
is too big, complex, or hard to navigate, or just too abstract or even too
metaphorical for the domain of the project at hand, the constructors
suffer, slow down, debugging gets cumbersome, and the flow disappears.

When the structure is untamed, the constructor bears the pain of digging
over and over into his own faulty memory to know where to go, to build,
to debug. Bridging the structural abstractions to factual, active code
means jumping over long rivers.

Hints of these hard-to-fix pitfalls appear in the form of top-down designs.
They appear in the form of long stack traces, code injected into classes,
generic alien words like containers, managers, policies, providers,
and factories. They appear also in sequences of one-line function call chains,
in the proliferation of interfaces with just a single implementation,
and in pattern-oriented design. They appear, too, in external configuration files
meant to prevent future changes from requiring the slightest code change at all,
eluding the fact that for a programmer it is simpler to write code than
settings files in XML, YAML, or property formats. All these come
from a too imaginative, visionary architect.

But the big corp engineer would say: nonsense! All that stuff
you mention has its utility, and we have successfully built great software
with clever, powerful, and flexible designs. And that is also true:
for a large team, with tens, hundreds, or thousands of engineers, architects,
and developers, the trade-off can pay off. But don't forget that such a team
counts on a hierarchical structure that can force compliance, submission to
an arbitrarily large or time-consuming superstructure, easily overlooked
in the company of a paycheck.

For the rogue folks who build free from the control and enforcement that
hierarchy imposes, and without the ability to dilute the work overload over
a large team, a mutiny of the heart can crash the ship. Don't forget: we creative
programmers in the wild are moody creatures.

So, should we build grand cathedrals or modest chapels? Well, the particulars
matter; we should judge the trade-offs. It seems wiser to draw the floor plans,
to architect, with a bottom-up approach, which helps to keep the design grounded
tight to its current demands. And this is closer to the truth the closer we are to
the prototype. Once the project needs to support divergent demands, the balance
may shift.

If we are role playing, a sensible approach is to avoid playing the programmer
and the architect at the same time. One can derive benefits from holding a notebook
for sketching design ideas, both for current and for distant needs. This keeps the mason
at bay. His ideas get heard, so he doesn't get furious, take over a good-looking
prototype, and turn it into a design monster.



                        The Crystal Castle

The prototype works flawlessly, it flies like an eagle, no failure so far; we checked
and re-checked the math, we played with its console commands, its UI works
as it should, we were meticulous safeguarding our little beast from misuse. But it is still
in the sandbox; no real, heavy load has been lifted. It might work perfectly, yet
it's made from crystal glass, it's brittle in its purity: a light shock will smash
it into a thousand pieces. To stand the teeth of time we must ensure that systemic
failure won't happen. What if a networked resource is unavailable or, worse, it fails
after allocation, amidst its duty? Can we recover? If our realm needs to work restlessly,
pleasing moon and sun, we should then give them their due tribute. We, then, expect failure,
and prepare. Every allocation can fail. Should we have redundant resources? Then they
must be properly set up and taken into account at allocation time.

The prototype then matures to stand. 

We shall be wise, not doing this task casually while programming but setting it apart
with its due thinking, design, and programming time, implemented gradually if needed. We
shall keep role playing.

What if a component becomes unreliable, starts to lag, its aid is not ready when needed,
it times out? Does it break our UI, does the app hang? Have we left enough trace for
forensic analysis in the aftermath? Are timeouts properly set for each response according
to what is needed? Are exceptions handled and reported well enough? Are they handled
at the proper layer to keep the system, albeit degraded in its capabilities, still on duty?

We have to take some time playing the role of the analyst of fault tolerance needs, and as architects
design accordingly the components, services, logs, settings, messages, and event tracking to withstand
the punches, and get hormesis on our side. Some simple tactics might help, like forcing exceptions
in the code at different layers and then testing for failure. If we have time and it seems worthy,
we can even design a simple way to force random failures now and then, to keep recovery tactics
exercised. We might want to track success and failure events in an event tracker service;
it might be a custom solution, like writing them to a database. We can track our punches with
critical info like severity, recovery state (hard fail or recovered), exception messages,
stack traces, and timestamps. We might reset services, web servers, databases, or machines to see
whether we recover properly.
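
A minimal sketch of such a custom event tracker, assuming a SQLite file as the
database and standard library modules only; the table layout and names are illustrative.

    # event_tracker.py -- record punches: severity, recovery state, message, trace, timestamp
    import sqlite3, traceback, datetime

    DB = "events.db"   # illustrative location

    def _conn():
        con = sqlite3.connect(DB)
        con.execute("""CREATE TABLE IF NOT EXISTS events
                       (ts TEXT, severity TEXT, recovered INTEGER, message TEXT, trace TEXT)""")
        return con

    def track(severity, message, exc=None, recovered=False):
        trace = "".join(traceback.format_exception(type(exc), exc, exc.__traceback__)) if exc else ""
        con = _conn()
        with con:
            con.execute("INSERT INTO events VALUES (?,?,?,?,?)",
                        (datetime.datetime.now().isoformat(), severity,
                         int(recovered), message, trace))
        con.close()

    # usage, inside an except block:
    #     track("ERROR", "feed fetch failed, using cached copy", exc=e, recovered=True)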

To keep everyone calm, it's not enough to log and partially recover if no one handles the bad news.
If our beast must please the moon and sun gods, then we should use a tool that monitors its availability;
there are standard open source solutions that check the health of a system according to a specified
pattern, keep track of delays and historical info, and report through instant messaging services
and email. We can even give our event tracking service a descriptive monitoring board for the
different parts of the system.

If we pay our dues, we leave the realm of the crystal prototype and enter
the wild lands of production armed to fail with grace.


The purist

Overseeing the programmer, there are two architects working on software design; their minds and
hearts are poised at two scales. One thinks clean, with homogeneity in mind; he breathes modules,
object hierarchies, abstractions, and responsibilities. He splits, moves around, and regroups
methods, distills interfaces, delimits scopes; he piles up scrambled properties into adequate
domain concepts, he enables reusability and minimizes inconsistencies. His tools are design
patterns, refactoring, domain knowledge. He takes proper care of non-functional
requisites, allowing well-organized error handling, tracing of troubles, and security concerns.

The Eclectic

The other architect takes a broader look at the system. He considers it outside the limits
of any chosen programming language, observes his fellow's micro-architecture work with disdain,
and drops it into a black box. If many purity realms are needed, by software or architect
demands, that box becomes many. If many processes are needed for a given box, then instances are set up.
He ponders how those heterogeneous pieces should interact; beyond the confines of a single
process and language purity, he thinks about networks, queues, remote messaging, infrastructure
allocation, timed tasks; he improvises messaging channels, wiring components through amorphous
piles of storage, networking, files, or pipes. He determines how to assemble the existing components
into a living creature. He designs how to monitor the monster's behavior and safeguard its storage
from fortuitous destruction. Above all, he thinks pragmatically about how to take advantage of existing
standard components, remove the dust from legacy systems, and give them another chance.
As in open sea ships, he designs watertight compartments to prevent sinking from localized damage.

Verbs and Nouns

The word refactoring seems to allude to that old school math operation which reorganizes formulas
into a more usable form. In part, that's what we do, but what is missing in the analogy is the means
we use, which is the fact that we conceptualize a sequence of steps under a general term that describes the
action. In the same sense that we don't talk about moving knees and muscles, leaning the torso, and waving
the arm to say someone is walking, we take the same path of extracting general concepts, which helps not
only to avoid repetition but also to state the general intention of a complex activity. Another
consideration we take is to avoid mixing different levels of conceptualization, in the same sense
that a storyteller doesn't mix a character's activity with the detail of how a given enzyme is secreted
in his stomach. So, proper conceptualization to aid well-organized thinking is the main objective
of refactoring, which eases the work with the code base and avoids code duplication. In some cases,
the supporting low-level details are extracted to a set of private methods; in others it's better
to group them in a new low-level module or class.
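
A small, hypothetical illustration of that extraction: the low-level steps get a name
that states the intention, so the caller no longer narrates knees and muscles.

    def import_orders(path):
        """High-level intent: read the file and build order tuples."""
        return [tuple(row) for row in _read_clean_rows(path)]

    def _read_clean_rows(path):
        """Supporting low-level detail, extracted to a well-named private helper."""
        with open(path) as fh:
            lines = [ln.strip() for ln in fh if ln.strip()]
        return [ln.split(";") for ln in lines if not ln.startswith("#")]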

Argument set conceptualization

As a unit grows, it might end up happening that a set of fields goes everywhere together. They could
simply be a set of scalar variables used again and again across different methods; other times
they are stored as fields of an array or a similar structure. More often than not, they are part
of a general concept which can be grouped into a class or a structure; that makes the reader's
life simpler, as it properly ties the pieces together under an identifiable name in method
declarations and usage.
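
A minimal sketch of that grouping, with invented field names: four scalars that used
to wander through every signature become one named concept.

    from dataclasses import dataclass

    @dataclass
    class PageLayout:
        width: int = 210     # mm
        height: int = 297    # mm
        dpi: int = 300
        margin: int = 10     # mm

    def render(text: str, layout: PageLayout) -> str:
        # the reader now sees one identifiable name instead of four loose scalars
        return f"{text[:40]} rendered at {layout.dpi} dpi on {layout.width}x{layout.height}"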


Splits

Some classes or modules grow wild and end up mixing different sets of behaviors, which makes it hard
to grasp the relations inside the unit at hand. An incisive cut from a savvy
hand can alleviate the consequences of such a mess: if the place is chosen right, the new
conceptualization makes more sense to the code reader, and the scope of action and state of each set
is reduced. Other times what makes a mess within a unit is that core functionality gets mixed up
with derived supporting functionality; splitting accordingly can make the core easier to analyze
and maintain, and avoids the constant introduction of satellite functionality which should not change the core itself.

			Constructor klans

As a breed of mathematician, the programmer crafts his functions; they, as we do,
sculpt abstract structures through that marvel among nature's gifts called the neocortex.
But most of them, by tradition and for the most part of our technological evolution,
tint the paper, after the ancients left behind the clay, the rock,
the dirt and the sticks.

We took the machinery built in the fifties, the children of the ENIAC, the
craftsmanship of the modern mathematicians, physicists, and electrical engineers.
We, since not too long ago, have screens and storage, we have compilers and static
type checking. We can put the machinery to work and receive a non-human but
precise voice, an unfolded echo of our declarations, as Echo gave to the
classical Greeks.

So, we tinker easily; we can sculpt and unveil the masterpiece by approximation.
Our cousin mathematicians rely on their ability for godlike sculpting of perfection,
and scrap everything when they do what humans do most of the time: err. So, we
tinker fast, and build Babel-sized complexes of intermixed simple constructs
bricked to unbelievable heights; we weave a web out of persisting compositions of our own work
and of the choral ensemble of fellow programmers; we name details, step up and
structure, and name details again, always upward.

They make clear-cut, small but incredibly complex structures which sometimes our
machinery cannot handle. While they abstract, they keep their realms mostly separate;
they only loosely grammar their symbolic rules, while we, instead, have to feed our machine
beasts with no ambiguity. They inspect their constructs all the time, wire the details,
and gain their results; we wire as they do, but far less, and obtain
mostly modest conclusions, but quite pragmatic ones.


				Accuracy
				
Our cousins enact Sisyphus pushing their paper rock upwards, and crash it to the flat
ground when they fail to reach perfection. We also enact Sisyphus, over and over,
through failing tests: every time we carry a simple error, our rock crashes to the ground,
and we must debug, construct a bigger rock to climb, surrounded with incremental safeguards,
additional tests, and more complex, accurate code, until we fail, and debug again.

So, we get serious about keeping our machinery working, and take care to keep our
path to the mountain top as short as we can; we automate tests to rapidly obtain an
overall diagnostic of the health of the system and its constituents. We maintain key
tests over the whole or over its components; sometimes we need to run them manually, or more
commonly by automation.

We might be constructing our brittle prototype, or already on the path to production,
through architecture or through engineering, or even maintaining our working machinery.
So, when should we test manually, when should we automate, when should we run the tests,
at which point do we need to introduce them, and at which granularity?

Some years ago a fad started around test driven development, which stated it was sensible
to first create a test case for every not-yet-built piece of functionality we must include, ensure
it fails, then build, then ensure it passes the test, from the ground up: building our basic
functionality, API, and architecture bottom up, intermingled with intensive preemptive testing.
The community took these rules too literally, tested the hell out of everything, and bore the pain,
from paycheck to paycheck.


					Driving Contrarians

We, as rogue engineers and as programmers of each of the noble klans, shall be wiser. Our means
are modest, and our energy easily disturbed. Every manual test, if our design is not fitted to
single-action checking, requires us to do a sequence of manual operations, which is not only
boring and painful, it's also inefficient. But wait. Do not automate yet. Be prompt to design,
wherever it can be suited, in a way that does not require a state to be reached by hand just to run a given
functionality; keep things as stateless as possible so as not to be forced to lose loads of time, to hand
the sand back to Cronos beforehand. Another property we design for is not carrying the load of
building a complex setting to test by hand through our app's GUI. We prepare, with proper setup,
to run tests without GUI interaction, to obtain fast status.
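
A sketch of what that property looks like in practice, with invented names: the core is a
stateless function whose inputs are all arguments, so a fast status check needs neither
the GUI nor hand-built state.

    # core.py -- pure and stateless: every input is an argument, the result is returned
    def price_quote(items, tax_rate=0.21, discount=0.0):
        subtotal = sum(qty * unit for qty, unit in items)
        return round(subtotal * (1 - discount) * (1 + tax_rate), 2)

    # a fast status check, no GUI interaction required
    if __name__ == "__main__":
        assert price_quote([(2, 10.0), (1, 5.0)]) == 30.25
        print("core ok")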

If we were able to do it, we relieve our load; we are not urged to automate tests too early, as
test driving proposes. Some might say we could be building our tower over faulty components, and then
fall at large, and that could certainly be the case. But do not fool yourself: every test
is subject to maintenance like every other part of the system; tests are brittle too, and worse:
you double your load over and over while your architecture and functionality are not solid and
are subject to refactoring and scrapping. That will occur all along our path to a promising
prototype, and to production. If we automate tests early, we are guaranteed to scrap giant bulks
of test cases. So, we are clever to postpone test case coding as late as reasonable, as long
as we are comfortable that our code base mostly works without much additional work on manual
testing. As soon as we feel uneasy with the beast walking, as the machinery halts too often, we
reach the point of resigning ourselves to test automation; and as complexity takes over our code base,
as it grows, as it gets closer to production, and as we can no longer reason across the whole system
about the cascading effects of our changes, this seems a sensible approach regarding the timeline.
At some point complexity wins. We then enact Sisyphus to the top of the mountain, over and over.

Contrary to the test driven approach, we refuse to test at a granularity fine enough to just waste
time on mostly trivial functionality; but the question arises: what is too trivial? Where do we
draw the line? The sensible answer comes in the reverse attitude. We should test top down,
and start just at the top: does the core, whole-system functionality work? If it's most of the time a yes,
and that matches our tests, then we return to our construction, as programmers or architects.
We keep testing by automation, just at the top level. If our machinery holds its required level
of accuracy, we are fine, at the top. If it starts failing too often, horizontally at new functionality,
then we automate horizontally, unless we can get rid of the faulty
aspects of the system, as a whole, once and for all.
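
As a sketch of testing just at the top, here is one coarse automated check against the
hypothetical price_quote entry point from the earlier sketch; we only descend to finer
tests when checks like this start failing.

    # test_top.py -- one coarse, whole-system check, standard library only
    import unittest
    from core import price_quote   # hypothetical top-level entry point

    class TopLevelTest(unittest.TestCase):
        def test_whole_quote_path(self):
            # exercise the system end to end through its public entry point
            self.assertEqual(price_quote([(2, 10.0), (1, 5.0)]), 30.25)

    if __name__ == "__main__":
        unittest.main()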

We don't count on a huge budget, and we don't rely on teams lacking expertise that constantly
introduce failures where no one has a complete view; we can't work, tinker properly, and adapt
that way, it's too expensive, we lose our flow, and we are liable to deplete our energy once
and for all. Our experience allows us to take care of problems at the proper level, where they occur;
we have a full view of the system, since we built it.

When luck abandons us, and we are having problems due to unsafe roots, when the cracks propagate
from the bottom level, when the roots seem to be failing, we descend and construct automated tests
at a subsystem level, as high as we can, and only if we cannot, as before, get rid of the faulty
innards once and for all. Then, as before, we test horizontally on failure, and go down again and again,
always knowing that the lower we go, the bigger our rock gets and the longer our path becomes. We are not civil
architects nor civil engineers: their towers are solid, they are rendered unusable and unsafe by a wrong
root, their errors are as critical and expensive as early they are made; they shall measure twice
and cut once, every one of them, and when unsure of a step taken, they will never again rest well. In our domains,
we can solve problems at any layer; we can inspect and change the whole tower with a small correction.

And remember that our testing constructs are neighboring crystal castles, easily torn apart by simple refactoring.
So, we test slowly, from the top, then horizontally, then down and horizontally again,
i.e., branching first, with a top-down traversal.

			 Planarity
It started with an empty file, which sprouted on a flat surface; then it left that plane
to rise into hierarchies, which established a sense of grouping, layers of abstraction
and discourse, and overall sequenced sets of behaviors, and names multiplied. That's all good:
everything makes better sense once named and organized. Well, at least as long as we gave
birth to those hierarchies, and not too long ago. When we face other fellows' designs, their
monumental mess of segmentation and interrelation of verbs and nouns, it is daunting: where
does it start, what's the core, what's peripheral, what are the unspoken allowed paths to
obtain a desired result.

Method states and interdependencies are hardly ever clarified. Documentation, in most cases, is
insufficient. But even so, how is it possible that while two programmers have the training,
the ability, and the expertise to architect any machine, they hardly ever build a similar
design? Structures are different, names are chosen from dissimilar analogies, arguments
are arranged in other orders and places; in one design something takes an argument while
in the other it is carried through hidden state; some concepts emerge in one as a unit, while in
the other their parts are scattered in a mesh of apparently unrelated fragments of
functionality.

It is a lost battle to expect clear communication through code between programmers.
It is as if we designed by taking random elements from a set of parts of functionality,
mathematically speaking. Two complex hierarchies are not only just different; there
is no way to map one onto the other. It is not a matter of adapting one to the other:
there is no injective way to jump from one to the other. We speak different languages
with no common ancestor.

After designing well and large, we enjoy for some time the company of our metaphors
and concepts, but hate other people's. Maybe that is why we feel relieved when
we face simple, primitively structured frameworks and libraries, where almost no
hierarchy was erected. We find humble functions, with arguments, no state, and clear
documentation, to the point, and even so, with examples and discussions.

A good example of this is the PHP standard library and its documentation: no one ever gets
lost there, you just get what you need, in a simple manner. It is rarely the case that
they build a complex structure. It seems to be all crafted not only by expert hands,
but by wise minds. If two minds envision a functional module differently, the metaphors are
mostly mappable between them and the jump from one to the other is reasonably doable,
and we mostly understand each other, as if we were just using slightly different
dialects and prose.

We shall learn from that humble disposition: stay functional and modular when we can,
stateless whenever possible, to allow straightforward use and ease of analysis. If we
hold to that simplicity, our work is a joy for others, and for our future selves.
We avoid climbing up to build insurmountable puzzles and seeding Babel's fate.
As programmers, we should have an endless longing for simplicity.

Two planar designs are far closer, even when constructed in dissimilar programming
languages, than two designed in the same language but under complex hierarchies.
While you might need a more complex structure and behavior, it is trivial to just
compose through that kind of planar tribute to functionality.
There is a practice seen in some horrendous designs that consists of providing helper
methods which compose the mysterious instances that are required to be built to obtain
a desired result; they provide paths to apparent simplicity through complex realms.

We decide to do the opposite: provide plain simplicity and functionality beforehand,
and let the coming architects do their craft as they will, upwards. We are humble,
we strive for simplicity and move forward.

Another way to build rare, puzzling functionality is adding functionality through
a pattern called decorator, where each layer of the onion composes additional behavior.
We are forced to navigate the hierarchy to figure out the right way to combine that
partially usable set of classes. We can relieve our fellow companions of that
experience through a clean exposure of direct functionality which takes care
of reusing the parts of the design needed for their needs without the unpleasant
work of solving the puzzle.

Planarity, we insist, aids the separation of the bare functionality built by our rogue
programmer from the architectural work. And if we find ourselves working much of the time
on the architecture and not much on the core barebones functionality, it is clear
we took the wrong path. When we have only complex structures, functionality
and design are intermingled, and we easily stray from the best route.

Our disposition to choose planar structures should not only relate to
code structures; the same applies to the folders used for code organization and resources.
In relation to code, when folders are plentiful and deep, we have to bear
the cumbersome task of moving back and forth through the folder tree: it won't fit
on screen and we will likely have to collapse it at different places to make it
more manageable. If the tree is big enough, and frequently used folders are spread
across it, we might be forced to collapse, expand, and jump between branches
of folders, and it worsens as the structure gets deeper. If the language at hand uses
packages or namespaces, declaring dependencies and usage by path is painful
(unless we are working with the aid of an IDE that handles that for us). Of course,
there might be good reasons to maintain such a classification: it might be to
expose a well-cut modularity, or to organize resources properly. We just have
to keep in mind that deep tree structures might be a constant penalty while
working on the project.

			Caprices
Even in the deep state of flow the rogue programmer enters,
invisible but vivid structures and machinery emerge that tempt
him to expand beyond the razor-blade-focused constructive process
at hand. While dedicated to translating into code a matured,
well-defined functionality to move the prototype forward,
or beyond it, lateral ideas pop up that, while easily described
at a conceptual level, can make a huge dent in his available resources.

If he were working for a corp, it wouldn't be a problem: the programmer,
the architect, serves with a sense of obligation, he feels bound
to comply with an external plan chosen by other members of the team,
and that is certainly the case.

Now and then, he really thinks he can pitch the expansion to whoever holds
the responsibility and the final decision over the current scope
of the system. We, unfortunately, are not subject to such safeguards
against our creativity, so we risk depleting our energy and stalling while attending
to that internal voice. That inner voice might speak to us about new functionality,
about architecting a better design, more flexible or more consistent, about alleviating
the load of a not-so-frequent deployment process through automation, about replacing
components with more solid alternatives, or whatnot. We must either steer our ship away
from the charming call of those digital sirens, or navigate
closer like Ulysses but wax our ears against a barely manageable madness.

Insight can save us by analyzing alternative paths which don't involve burning
time coding: it can be keeping occasional manual processes in place.
It can be relying on current infrastructure limitations, avoiding impractical
purism, taking non-verbose coding approaches when suitable. It can be working with
simple tools which limit our powers to rework architecture, or to introduce splits
and renamings through IDE-supported refactors unnecessary on a tiny project.

One approach is to resist the call through willpower, and stay somehow focused
on the rational decisions made before, by others or by ourselves playing a different role.
Another means is to break the flow and step away from work in favor of dispersing
the mind with physical activity, simple walks, meeting friends, playing
with them or hearing about vague or unrelated topics, contemplating nature, or
taking the joy of playing with the noble beasts: dogs, cats, or any silent
but empathic animal.

A third strategy is to suspend all project activity for a few days if we are
too immersed in the project, working long hours or selfishly taking over every weekend
and sensible rest; that can give us back the balance to attend to other relegated aspects
of our life. Limiting our efforts can make us more effective while forcing us
to take wiser decisions to keep the project moving. Too much available time can fool
us into treating time as a cheap commodity.

There is a hidden ratio between straightforward pragmatic construction and grandiloquent,
expansive design. That ratio expresses itself as a proportional penalty in code navigation,
in our human limitations of memory capacity and the low bandwidth between our minds, our fingers,
and the computer input. Worst of all, that ratio expresses itself directly on the timeline,
discouraging us and putting the whole project at risk.

Another way is just draining ideas into an old-fashioned paper architecture notebook, capturing
them in a postponed-but-possible-paths programming or engineering notebook, or just dropping them
into a private instant messaging channel for blazing fast and cheap notation. That will silence
our inner voice, letting it feel that it has been heard, that the proposal has been properly noted,
that its efforts have been recognized and its work won't be lost. In both cases, we will feel
relieved and can continue as sensible creatures. We have postponed the potential disaster.

The ideas will cool sooner or later, and we can revisit them, once they have lost their shiny
reflection, in the proper role, mood, and sense of direction.

Competent friends in the field, engineers, developers, and architects,
people with project management experience, can offer us from a distance
the necessary perspective that our narrow, menial but due attention to detail
deprives us of.

Subjecting our work to short iterations tied to an expected timeline,
with incremental, staged, well-defined, and necessary deliverable functionality,
can help immensely. It's a rough corset that might be an undesired but final solution.
If we get our hearts tamed, the drift won't take us to obscure shores.

If we succumb, roles will be mixed, details will likely go unattended,
planned timelines will turn into wishful thinking and get stretched, and partial
functionality might appear in the code base. Development becomes a mess,
with tangled functionality expressed in complex commits or a spawn
of branches awaiting future completion.

Paganini's caprices are a sublime gift. Our caprices are just curses.


			Just figure it yourself

In the TV series Seinfeld, a guy comes to Jerry's apartment to measure,
design, and install a set of cabinets; every two seconds he asks Jerry
to make a choice about the most minuscule, irrelevant detail, and he drives him
completely nuts. As exposed in the TED talk titled "The Paradox of Choice",
the more we have to choose, the more unhappy we feel; we tend to
feel relieved by a subpar pick that requires far less decision
making. We face the same chore when dealing with components, libraries,
and frameworks, and we expose others to the same when designing overly
flexible software pieces.

At design time it seems clear that we should not reach the point where
we put more effort into the support for flexibility than into the core
functionality. If everything is externalized, we might end up with trivial
functionality that, in the eyes of others, won't be worth the effort to figure
out; they might just pick a simpler solution or rewrite it to fit
their needs. Adapters, factories, IoC, code generators, console tasks,
interfaces, XML setups, abstraction layers, and requirements to make repetitive
use of boilerplate code lead us in that direction. So do complex state setup
through object interdependencies and implicit method call sequences,
and nagging exceptions that leave users puzzled. Sometimes the setup just has too many
ways to fail, or worse: too many ways to partially fail and yield unexpected results.

We face then, when architecting software, the need to ponder the non-trivial
trade-offs between flexibility and simplicity, between the ratio of core functionality
and extensibility, and how prepared to be for change. The design-for-change mantra, for us
rogue mason architects, is unacceptable as a default: our time is precious and our energy can be depleted.
 
One way we can retain the best of both worlds is to provide reasonable defaults
wherever we can; we can make our machinery stand on its own, so that usage is straightforward
out of the box, with a few lines of code. We can avoid bothering others and ourselves with
the chore of preparing a daunting structure of folders, placeholder files, settings
files with tons of configuration, and boilerplate code.
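
A sketch of what "straightforward out of the box" can look like, with invented names:
every knob has a sane default, so a couple of lines are enough and only what must
differ gets overridden.

    class ReportServer:
        """Usable out of the box: every option has a reasonable default."""
        def __init__(self, host="127.0.0.1", port=8080, page_size=50, theme="plain"):
            self.host, self.port = host, port
            self.page_size, self.theme = page_size, theme

        def url(self):
            return f"http://{self.host}:{self.port}/"

    # two lines and it stands on its own; override only what you must
    server = ReportServer(port=9000)
    print(server.url())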

Preemptive care to prepare for future change can often be taken without relying
on third party abstraction layers, IoC, and extensive configuration; we can
humbly implement plain modular function sets, hardwired in a very concrete manner
to the current component's API calls. We then just bridge from everywhere through
this module to avoid depending on the currently required component's API. When the winds
of change arrive, we rework the module and move on.

Even if we blindly follow the prepare-for-change mantra, and expose massive
configuration, factories, and IoC hooks, it might just be the case that, after
such hassle, the component the future brings us cannot be fitted well.

Regarding abstraction, there is an amusing example in Java: a double abstraction
layer called JPA over SQL management. The first layer abstracts the SQL per se;
the second one abstracts over the different ORM mapping solutions (there are just a few).

It can easily be argued that the ORM mapping solutions are too simplistic
to work reasonably well: they map mostly trivial queries, and users are
forced to work on both layers, since the claimed clean, higher one cannot
manage a great deal of the tasks needed.

We can do better. Explicit exposure of optional settings with in-place code documentation
is far simpler to use than foreign, loosely defined, non-code configuration files.
Are those settings likely to change without the company of a code change? If not,
then we can just stay in the code neighborhood, in the company of our techy, savvy programmers
and masons. We let others who work on giant corp teams expose their users to daunting
circuitry; we remain humble and grow wiser.

The most embarrassing case I have stumbled upon of not figuring it out yourself is the traditional
email server. Somehow the responsibility of discovering the right combination among a thousand
setting options is relegated to the server admin; just to control spam filtering
we have hundreds of flags to tweak, and a minor adjustment can lead to mail
being filtered incorrectly. It is incredible that we still rely on the POP, SMTP, and IMAP protocols
defined in the late 80s. The email server's gazillion setup options smell damn funny,
the same way the offer of uncountable stock market strategies makes the whole field
of financial advisory look extremely fishy. Before we go smacking email server providers,
it is worth considering that the underlying protocol design has fatal flaws which
drive out the implementation of an ultimate solution.

We can point out a single major problem: the lack of a contact list for each
email account (as a personal list of contacts we trust), and the fact that there is no protocol
to connect and authorize sender and receiver accounts. These two aspects make each mail
a potential threat and an absolute mess to classify (it's quite obvious if we look at every
social network that does not suffer that problem, like Facebook, Instagram, Telegram, etc.).
Whatever the reason we ended up with such a mess, we doomed server administrators
to constant monitoring and tweaking of mail servers.


Ingredients for Digital Recipes

Most machinery, libraries, frameworks, and components require a collection of settings
to be provided in order to work properly. As stated, if we are sensible, we take the care
to provide reasonable defaults. Unfortunately, most machinery we build makes a mess of it,
tangling operational functions and methods with setup and preparatory ones, distributed
across class methods and constructors. Moreover, making things worse, most machinery requires
partial provision of such settings collections, and some method call sequences fail
due to incomplete or incompatible setup selections.

One way to make our own and others' lives easier, and to allow us to craft our
building, is to separate, to split both method types, introducing a settings class which
clearly allows every supported option to be set in a straightforward manner. That gives
the API user a clear overview of what is possible to achieve; it gives a clear place to go.

Even more, those mentioned concentrated settings can be made immutable, which avoids the risk
of too much sharing of state. In most cases those settings can be provided to the class
constructors which depend on them. Default settings can also be concentrated there, or can
be easily found with IDE-aided reference searches.

The same applies to the nuances of the options of each setting: comments can be provided
in place. As builders, immutability on this topic likely does not bother us with considerations
of performance, since setup is something that occurs now and then, not constantly. External
access to those settings can be easily mapped from storage, or from XML, YAML files, or the like.
For many of us, just keeping this simple, providing settings through code, feels like
a bliss. In this case, the additional overhead of splitting the related functionality
might be something desirable.
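
A minimal sketch of that split, with invented names: an immutable settings class
concentrates every supported option, documented in place, and the operational class
receives it through its constructor.

    from dataclasses import dataclass

    @dataclass(frozen=True)            # immutable: no risk of shared, drifting state
    class MailerSettings:
        host: str = "localhost"        # SMTP host; change for remote relays
        port: int = 25                 # 587 for STARTTLS setups
        timeout: float = 10.0          # seconds before we give up on the server
        retries: int = 2               # resend attempts before reporting failure

    class Mailer:
        def __init__(self, settings: MailerSettings = MailerSettings()):
            self.settings = settings   # one clear place holding every supported option

        def send(self, to, subject, body):
            ...                        # operational methods stay free of setup concerns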

Failing gracefully

Pretty much every IDE supports interactive debugging; we have all succumbed to it in
cases where we tried better weapons and failed. Depending on the nature of the
project, it might be the only way to treat an unwanted result: we might
be dealing with a complex UI that is by nature stateful and we cannot cut straight to
the point we need to evaluate, so we just circle the problem in ever closer rounds,
jumping between breakpoints, with a frame-by-frame approach and careful
inspection of the system's state.

Other projects allow, by design, getting a full run either by automated
test calls, by direct UI requests, or by non-UI single calls. If that is
the case, and if we have a deep understanding of the project design,
we can just drop log info at the key parts of the system to understand,
from a direct run, a sequential track of what happened; we can inject exceptions
to obtain traces that identify a problematic active path in the system.

A few iterations of tinkering with log info can give us the answer.
With some practice, this approach tends to be way faster than
interactive debugging for many issues. We just go to the point,
check what happens, iterate, and fix it, then roll back the hooks
we made, and we are done.

A step forward can be taken if we wire crucial tracking
information to log files or well-structured databases; this not only
allows us to tackle a hot problem but also to check for past failures. This is
particularly important if, by the nature of our system, we need
to inspect a well-known path over and over, or if we have to inject
code to prepare for a given problem in several places.

We can also include a settings class for enabling certain types of
debugging information and behavior, for example: forcing rollbacks,
exceptions, states, unexpected failures, etc. That way, we don't lose
the effort we made to fix a single issue or to inspect for proper functioning;
we just keep a reasonable set of hooks in place that can be activated
when needed. While this kind of practice is not common, it can certainly
speed things up for a small team trying to move forward.

These approaches should, though, be very well organized, since by the nature
of the extension we are making they get tangled with the functional code.
As an example, if we decide to log an event which includes severity, stack trace,
exception message, and more, it can take around ten lines to set up;
if we include that code directly in the function at hand, that function
gets highly polluted. It is better to separate this kind of treatment
into auxiliary, well-named methods.
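
A sketch combining the two ideas, with invented names: a small debug-settings object
holds the activatable hooks, and the ten noisy lines of event logging live in one
well-named helper so the functional code stays clean.

    import logging, traceback
    from dataclasses import dataclass

    @dataclass
    class DebugHooks:
        raise_on_save: bool = False    # inject an exception at a known point
        verbose_events: bool = True    # include full stack traces in the log

    HOOKS = DebugHooks()

    def log_event(severity, message, exc=None):
        """Keeps the noisy logging details out of the functional code."""
        detail = ""
        if exc is not None and HOOKS.verbose_events:
            detail = "".join(traceback.format_exception(type(exc), exc, exc.__traceback__))
        logging.log(getattr(logging, severity), "%s %s", message, detail)

    def save_order(order):
        if HOOKS.raise_on_save:        # hook left in place, activated only when needed
            raise RuntimeError("forced failure for inspection")
        # ... real persistence work ...
        log_event("INFO", f"order {order!r} saved")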

We then have to ponder the trade-offs between the IDE's interactive debugging,
on-the-spot tracing hooks, and event tracking with permanent support
for debugging. The best pick is not always the same: sometimes
the best path is going interactive, sometimes test cases,
sometimes hooking and logging, and so on.

The same way test driven development prepares the code base for failure,
we can prepare as well, but as we move forward, realistically exposed
to the failures the system faces. We do it not with expensive or
unrealistically brittle test case setups; we can just wire for inspection
during normal use activity.

Hooking points wired into the system also give us the possibility
to prepare for partial failures, to move on as a whole with useful
work instead of failing hard. They can help exercise scarcely
occurring events that might otherwise be overlooked.

Of course, being able to maintain a useful settings-by-code capability
depends on the support of the language we work with for dynamically
including new source code in the running environment. All scripting
languages support this by nature; most platforms with support for JIT or dynamic
linking also do. In case our machinery needs a build
through compiling, transpiling, or the like, or if we need to deploy
a full package of the software at hand, the usability of settings-by-code
support might be reduced due to the time overhead of deployment,
especially if it is frequently tuned.

			Storage Structure, Design and Backups (devel)

Depending on the nature of our prototype and objective,
we could need some kind of persistent storage. In the early days
of prototype implementation we can be tempted to just alter the
storage structure design by tinkering with a tool which supports
that storage for data inspection and design (e.g. SQL administration
tools). Working that way leaves us with no track of structure changes
(not to mention the testing data per se). Making things worse, that
structure is not stored in the version control system that we use
for the source code.

A simple solution for our rogue project is setting up a backup solution
for our storage; it doesn't matter which one, they are made to handle
the task. It is wise to set up a rotating backup mode, to have enough days
of history to get back to a working structure and data set in case
of disaster (and also to protect us in case we don't detect
the problem until some time has passed). Suggesting that storage be backed up seems
like trivial, everybody-knows advice. Still, we are not talking
about production systems: we take this consideration for the development
process, even while prototyping, and not just for the data, which might be
irrelevant to some extent in a pre-production stage; we take structure
changes into consideration as well.
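
A rotating backup can be as small as the sketch below, which assumes a file-based
store (a SQLite file, say) that is not being written at the moment of the copy;
the paths and the retention count are illustrative.

    # backup.py -- date-stamped copies of the storage file, pruned to the last few
    import shutil, datetime
    from pathlib import Path

    SOURCE = Path("prototype.db")      # assumed file-based storage
    BACKUP_DIR = Path("backups")
    KEEP = 7                           # days of history to retain

    def rotate_backup():
        BACKUP_DIR.mkdir(exist_ok=True)
        stamp = datetime.date.today().isoformat()
        shutil.copy2(SOURCE, BACKUP_DIR / f"{SOURCE.stem}-{stamp}{SOURCE.suffix}")
        for old in sorted(BACKUP_DIR.glob(f"{SOURCE.stem}-*"))[:-KEEP]:
            old.unlink()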

Once we have advanced in the workings of our machinery, running environments
sprout for the different stages of the system: some might be as unstable
as the ones dedicated to the latest development, others somewhat
more stable, for testing, preproduction presentation, production, etc.

In these situations, we might have data and structural differences since
the environments are likely to be our of sync.  

So we have the problem of propagating structure and relevant data
from one environment to the other. For small projects it might be the case
that a diff between data backups (or just structure backups) will do
the trick. On others we might be forced to keep track of structural and data
changes manually and explicitly, following a strict, well defined process. 

If it's worth it, we might even rely on a storage schema definition 
with support for versioning, either custom tailored, provided by third 
parties or integrated with the platform we use.

In the case of storage design, the known mantra of keeping things
closed to modification and open to extension applies the same way it applies 
to coding and architecture work. If we keep each structure backward 
compatible (i.e. restricting ourselves to expansions of those structures), 
syncing storages is reasonably simple. Whenever we don't, we have to take
on the load of constructing cumbersome migration jobs, update scripts 
and code changes that must be carefully applied to each environment 
when needed.

An approach that we can take to retain backward compatibility 
in the case of SQL is exposing and using views to access data; that way,
in the face of structural change we avoid the need to change the
code base. The trade off, though, might not be worth it, since views
are not very comfortable to define and update, so this 
approach should be pondered carefully.
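
A rough sketch of the view idea, using Python and SQLite with invented
table names: after splitting an old table in two, a view preserves the
shape the code base already queries:

    import sqlite3

    con = sqlite3.connect("prototype.db")
    con.executescript("""
    CREATE TABLE IF NOT EXISTS customer_core  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE IF NOT EXISTS customer_extra (id INTEGER PRIMARY KEY, phone TEXT);

    -- the view keeps the old shape alive after the structural change
    CREATE VIEW IF NOT EXISTS customer AS
        SELECT c.id, c.name, e.phone
        FROM customer_core c LEFT JOIN customer_extra e ON e.id = c.id;
    """)

    rows = con.execute("SELECT id, name, phone FROM customer").fetchall()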

As always, we have to take the trade offs into account: backward 
compatibility could mean a polluted or subpar design, while the 
alternative means breaking things unexpectedly and additional 
load borne without additional functionality.

A common practice, worth mentioning for those who don't know it,
is to keep storage functionality well separated from the rest 
of the code base. That way, in the face of change, there is just
one place to go; it also means that it is easier to maintain 
a consistent approach to storage access. 

If the code is spread out, when the storage means change, either in structure
or technology, we face the error prone and cumbersome task of identifying 
and changing every place throughout the code base that accesses storage.
Some access methods, like the ones exposed through ORM solutions, might 
even be subtle enough to make it pretty much untrackable. The additional 
effort to isolate storage is a good practice in most cases. 
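
A hedged sketch of that separation in Python (module, table and function
names are invented): every storage access lives in one module, and the
rest of the code base never touches SQL or the driver directly:

    # storage.py -- the only module allowed to know how data is persisted.
    import sqlite3

    _con = sqlite3.connect("prototype.db")
    _con.execute("CREATE TABLE IF NOT EXISTS measurement (sensor_id TEXT, value REAL)")

    def save_measurement(sensor_id, value):
        _con.execute(
            "INSERT INTO measurement (sensor_id, value) VALUES (?, ?)",
            (sensor_id, value),
        )
        _con.commit()

    def measurements_for(sensor_id):
        cur = _con.execute(
            "SELECT value FROM measurement WHERE sensor_id = ?", (sensor_id,)
        )
        return [row[0] for row in cur]

A change of engine or schema then stays inside this single module.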

Storage selection, its design and its proper access through code are mostly 
in the domains of engineering and architecture. Changes in storage technology 
and design commonly have a meteoric impact on the code base and stall 
development for long times. So don't be casual about it, taking decisions 
on the matter while in the flow state of coding.

Another aspect that makes our storage decisions complex is the fact that 
each storage solution has its limitations on the kind of work it can
handle correctly. While in coding preventive optimization is deemed a capital
sin, in the case of storage we face the problem that, for some kinds of tasks, 
the wrong selection can render the whole work done useless. 

For example, an SQL storage might not scale to the point we need if we have
to deal with huge datasets and table joins. Extensive calculation machinery 
that depends on storage can become unusable if we read/write massively, 
or remain too slow if it requires constant interprocess communication and 
type conversions. Without a non-standard storage solution, that kind of project,
as others, can be unfeasible. So we can be better off with standard solutions, 
or we can need one in particular, or many types of storage, or it could be 
the case that any one might do the work.

But wait, not so fast! All these considerations do not mean that we should 
optimize preemptively. Our first strategy writing queries or doing API calls 
should not be performance oriented; simplicity and functionality is normally 
a better approach. Optimization can be done when it proves to be needed,
when it imposes a bottleneck, since we can do it without the penalty of 
rendering all other code useless. 

There are other aspects of storage that require care while prototyping 
our machinery. That is the case, for example, of constraint definitions 
like foreign keys, primary keys, additional uniqueness constraints, type length
restrictions, indexing, etc. Most of them, if we are operating in pure rogue mode,
can be postponed and introduced progressively as the prototype or product matures.

Some of these aspects, while likely needed when we reach production, 
can make a dent in the time and energy we have available. 

A final consideration is to avoid designing storage casually. If we have a clear
vision of the machinery we are going to build, it pays to assume the role of 
storage designer and take the time to exclusively structure our storage well. 
That is more effective, leads to a more consistent design and better supports 
the coding tasks to come.


Automation

Surrounding any project, there are tons of operations that can be automated. 
We can think of deployment processes to the relevant environments, build processes, 
nightly builds, running test suites, scheduled storage backups, updating libraries, 
updating system packages, and on and on. 

We, as eternal seekers of the perfect machine, want to automate just everything. 
Unfortunately, we are subject to a life span, we have priorities for the project 
and, why not, other interesting things to do. So, where should we draw the line? 

There is, of course, no final answer; it depends on the resources we have and 
the amount of time we spend doing each manual task. It also depends on the risk 
we take while doing manual tasks, due to our propensity to make slim and fat mistakes. 

As rogue programmers, in most cases we automate on demand, out of boredom at 
repeating each task over and over and the feeling that we are wasting too much 
time. This can go well as long as we have a critical eye to spot when we are 
losing precious time for marginal gains and smack ourselves back into focus.

For projects on a budget it is more of a hierarchical decision, at best 
somewhat negotiated.

It is interesting to point out that, across all software categories, there 
are always aspects that are required to be automated. It's a cross-sectional 
concern mostly beyond the scope of a prototype build. 

That being the case, we can expect that there are tons of dedicated tools already
built that can fit each task. Still, we should be cautious: having a tool that provides 
a solution for automation doesn't mean that it is the right decision to use it. 
It might not be cost effective under the circumstances we develop in; it might require 
time and dedication to set up properly and maintain that could be prohibitive.

The perfect machinery is out of reach, we automate far from perfection.



Contrarian view on Optimization

premature optimization, flow, energy, weight dragging, tradeoff, corp optimization 
needs, engineering needs vs. programmers needs, miserable coding experiences, 
project manager view vs. programmers view, role bets, role blindness, critical paths, 
hot spots, documented risks, right place to optimize, layer and role to optimize, 
embedded, kinds of optimization, profiling, merrily optimizing, fast rollbacking


The old mantra wisely declares that premature optimization is the root of all evil. 
It is a sensible approach: avoid dropping in code optimizations just for the sake of it.
Even if easily done, in most cases it results in added complexity to the code, forces 
readers into lateral thinking and epiphany-dependent understanding, and raises the risk 
of subtle misbehavior of our carefully crafted machinery. 

Since subtle failures tend to need prolonged times to express themselves,
and since we might take the task at hand for done and advance to completing other tasks,
things get out of hand easily. It can introduce one of the most feared issues: 
irreproducible failures. They can sneakily crawl into production and put us in the 
stressful situation of receiving 24/7 alarming urgent calls. After this chain of events, 
our detective efforts to find root causes can turn out futile; failures will lurk 
in the shadows for long times, if not forever. Those failures might lead to 
hard to debug, hard to fix, instability. 

In that hard position, it can force us to back pedal through the remote history
of our codebase to try to find the point in time where the instability got in.
It can force us to carefully review codebase changes from that point on.
The circumstances can oblige us to aggressive test case development to try to 
corner the failure. Wrong behavior can undermine faith in the project in the view 
of others and even our own. It can throw a shadow of doubt on our technical capability 
and attention to detail; it can render us as sloppy or careless guys. 

And what for, we might ask? If the optimization was done blindly, it is highly likely
that it resulted in marginal gains, no gains or even detrimental performance. 

All that said and taken seriously, we now look at the flip side of the coin.
It might be the case that optimization is nonetheless needed. The most compelling 
reason we can expect is that the current performance level of a given part 
of the system renders it unusable. If that is the case, the PM role will take 
the matter in hand and prioritize getting it solved. 

It might be an engineering decision if it affects the usage of the system
to some tolerable degree, or can affect an expected future service expansion, and 
resources can be reallocated to solve it before it turns into a real problem.
On a small enough project, in an independent team, in a rogue project, 
in a one-man machinery, it might be a good idea at the sub-architecture level. 
It could be the case that, while the project's performance has no real world 
usage implications, it is nonetheless impacting development.

Bad performance might lead to a slow moving team; development turns sluggish, we drag 
weight in everything we do. It can also stall the delivery of new features, it can 
affect the treatment of problems. It can destroy flow. It can lead to boredom, apathy, 
low energy; it can risk commitment to the project, it can risk the whole enterprise. 
As already said, without corporate incentives, mood, interest and energy are key. 

Recovering flow might not be something that a PM or engineering team attends to, 
but for seasoned programmers it is a must. When a role is filled by someone not 
seasoned in its surrounding roles' battles, blindness is pervasive. 

At each role there are plenty of blind spots. PMs are mostly blind to technical issues,
engineering teams are mostly blind to sub-component architecture flaws, architecture
designers are mostly blind to subtle programming pebbles. 

While the bird's eye view of every role is a necessary feature, especially in PM 
and engineering roles, the pain suffered at the programmer level, dedicated to coding, 
is either unnoticed or directly ignored. In a corp environment the project can 
nonetheless keep beating; it can still advance, it can meet expected timelines and 
deliver usable functionality. If that is the case, the pain borne by the team silently 
permeates multiple aspects of the project: competent people leave and employee rotation 
increases, apathy lands on people's mood, subpar work gets delivered, time lapses 
slowly for everyone, the good guys turn to working just by the hour. 

Worst of all, the situation gets easily overlooked, since these kinds of aspects of the 
ecosystem are hardly measured and, being unidentified, are not managed. Management
is so far from the problem that they remain clueless. It is also hard to negotiate 
spending time to solve these seemingly unimportant and unrequested tasks while there
are requested deliverables on the timeline. Key information needs to flow too far
upstream to get noticed. Honest communication between hierarchical layers is a 
must to solve the issue. 

Experienced rogue practitioners, due to their long involvement with mixed roles, 
perspectives, introspection and responsibilities, have a better understanding of 
critical problems, even if their approaches are deemed unorthodox in the eyes of 
corporate practitioners. Well, that happens when they venture to raise their heads 
above their current eagle-eyed, focused attention to detail. Solving such critical 
problems still needs a risk-taking, betting-like attitude from rogue-like constructors. 

Flow can be regained through optimization not required by PMs and engineers.
To obtain that, we need to properly identify the root cause of the lost flow.
It might be a slow third party component that can be replaced with a more
performant one, or we might need to immerse ourselves in its domain to rewrite it
to our needs, if we have enough resources at hand. 

It can be circumvented in some cases by discarding non-critical 
features at the PM or engineering level, with the consequent dropping of 
the limiting components. 

We must carefully profile the application with proper tools and
thoughtfully evaluate whether we can afford the optimization and
its consequences. After optimizing, we should recheck with proper
profiling that the effort resulted in a worthy improvement, and be ready 
to back pedal if it is not good enough to give us a 
relevant gain, ignoring the sunk cost incurred.

Depending on the nature of our machinery, we could be subject to frequent
optimization. If that is the case, besides having our preferred third party 
tools and external applications that actively allow us to run the beast
and profile it, we can have a complementary technique. 

We can embed in the machinery a lightweight, passive performance tracking 
library, with the main hook points spread across the system, to enable it 
when needed at a fingertip. When the library is enabled, the machine 
runs leaving a trace log of every hook's performance and other related metrics.

That way, we don't have to depend on manual setups of external tools each
time we need to tune the system; we have the most common places hooked 
for instant reporting. In case we cannot rely on a third party library 
for passive profiling, its implementation is so simple that it can be
ported or created in a few hours.
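
A passive profiling hook of this kind fits in a handful of lines; a
sketch in Python, where the decorator and the toggle are inventions for
the example:

    import time
    import functools

    PROFILING = False      # flip at a fingertip, or from a settings module
    _stats = {}

    def profiled(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if not PROFILING:
                return func(*args, **kwargs)
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed = time.perf_counter() - start
                count, total = _stats.get(func.__qualname__, (0, 0.0))
                _stats[func.__qualname__] = (count + 1, total + elapsed)
        return wrapper

    @profiled
    def rebuild_index(items):
        return sorted(items)

    def report():
        # Worst offenders first.
        for name, (count, total) in sorted(_stats.items(), key=lambda kv: -kv[1][1]):
            print(f"{name}: {count} calls, {total:.3f}s total")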

If our development languages support macros, then those hooks are defined
without any penalty on the system; macro usage drops them 
into oblivion. If a language we use doesn't have macros, then the hooks 
would incur a minimal, negligible penalty, as long as we take care to check
that they are not embedded in a hot spot fast enough to be affected
by the weight of the hooks involved.

A good practice, although not a common one, is to also provide a way
to selectively disable the optimizations made; we can even have a 
central setup class or configuration file instantly disable a given
optimization. A very simple case is when we have one or multiple layers 
of caching. While performance gains can be huge through caching techniques, 
they can lead to subtly inconsistent states and responses. Caching layers 
and contexts can be spread across a greyscale of risk taking. Thus, it is likely
to be a good idea to give others and ourselves the benefit of a fine grained 
control panel for caching.
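
A fine-grained control panel for caching can be as small as a dictionary
of switches; a sketch in Python with hypothetical cache names:

    # Central switches: any cache layer can be bypassed instantly when a
    # response looks stale or inconsistent, without touching the call sites.
    CACHES_ENABLED = {
        "price_list": True,
        "user_profile": True,
        "search_results": False,    # suspected of stale data; bypassed
    }

    _cache = {}

    def cached(name, key, compute):
        if not CACHES_ENABLED.get(name, False):
            return compute()          # cache disabled: always compute fresh
        if (name, key) not in _cache:
            _cache[(name, key)] = compute()
        return _cache[(name, key)]

    # usage: price = cached("price_list", item_id, lambda: load_price(item_id))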

Experienced, well seasoned programmers with a bulky past on production 
systems are wise enough to ponder the risks involved, to weigh the tradeoffs
properly and to know where not to venture. Some aspects they consider are: 
is the change in a critical path of the machinery? Is it a single cutting edge
or node of the system that can lead to a systemic failure? Is it a hot spot
that is exercised in such a diverse manner that the reach of its change cannot 
be properly sized? Is its usage exercised in such a way that confidence in the 
equivalence of the change to the existing code cannot be warranted for 
every use case? In the same way, if the changes need to be added to a
hot spot of the code base, we must take special care.

Well intentioned but novice hands on these topics can incur the naiveté
of merrily adding new code that seems to work very well in non-production 
environments like dev, testing or preproduction, which are subject to little 
or no concurrent heavy usage. Those changes, filtered into production, can lead 
to hard failures and long rollbacks and redeployments.

In case we decide to advance into troubled waters, or if we get pushed by 
management or engineering folks to move forward with the changes, we have
to be sure to address several issues. We should communicate clearly 
and emphatically with management and engineering about the risks we are
taking. We should take care of preparing means to disable the new feature 
fast if the system requires long times to redeploy, and we should ensure
that the folks in charge of production processes are aware of the situation 
and of the means to disable the new feature. Also, we have to consider 
that deployment normally occurs with a set of changes, so a critical
failure can affect the deployment of the change set as a whole, and 
rolling back can be really expensive.

Whenever possible, the new optimization should be exercised progressively
in production environments to ensure limited impact if things go south.

Another important task that we should address is to leave enough information
for other developers about optimized code; since it can be hard to understand,
a brief explanation can aid them in case a modification is necessary. If the
optimized code should not be touched due to the effects it can have, leave an 
emphatic message to threaten and dissuade others who dare to put their merry 
fingers on it. I recall leaving myself all caps DO NOT TOUCH menacing notes 
with a phone number and personal contact details to avoid disasters.

Any programmer should be aware that there are many types of optimization; 
each one has its own trade offs, its domain of proper use, its limitations 
on the results that can be obtained, its costs and its risks. A brief overview 
counts: algorithmic complexity improvements through the use of better data 
structures or procedures, infrastructure optimization, storage optimizations 
through structure definition, indexing, query rewriting, denormalization, 
native storage settings tuning, caching strategies, component replacement,
custom optimization layers for low latency and low overhead, passive vs. active
optimizations to avoid things like locking, interprocess communication and data 
type conversions, low level code optimizations for critical sections, language 
supported optimizations like JIT, compiler flags, opcode caching, upper and
lower architecture simplification, communication channel optimizations, networking 
redesign through better protocols, faster channels, and the list goes on and on.

It is not necessary to master them all, but being aware of them is really useful
to address the bottlenecks with the right means and to delegate their solution 
to the appropriate person.

In general, we can even go as far as to restrict the optimizations 
to development mode, to avoid incurring the risks involved in 
applying those optimizations in production. It could also be a good idea 
to use the optimizations only in non-critical but frequent processing 
that ruins flow. 

There is of course a trade off to evaluate: for a given optimization, 
supporting a non-optimized working mode leads to a bigger codebase,
more test cases, and more code to maintain. Again we have to ponder
the risks and the benefits and decide wisely.

Freeze
feature freezing - bugs - rough edges - mvp - scope minimization -
eternal expansion -  stable release - testing real use - functionality packing -
postponed fixes - completion convergence - mythical finished beasts

When someone is part of a corp team, there is a clear cut of 
what is included in the current stage: there are new features 
to provide, fixes to apply and quality characteristics to meet.
It might be the case, now and then, that during a given 
iteration of the software development some urgent need arises 
and must be addressed, but it should not happen too often. 

If we try to polish our software to perfection, we will enter
a black hole no one has ever escaped. Software is just too complex
to be crafted to consider every conceivable situation, input and
combination of usages, especially considering it's layered 
over tens of layers of its constituent parts. What we can do
is craft it to stand perfectly well for the use it's intended
for; that is a finite task, we can cheer up. Well, it is a finite
task as long as we do not allow others or ourselves to perpetually 
expand its capabilities, especially since a few common language 
words can express a requirement that can take ten thousand 
construction hours to materialize. Its form, thus, is never final: 
since software is by nature plastic, there is always room 
to accommodate different usages and conditions, and, as amorphous
and complex as it is, agreeing on its completion according 
to plans is a hard issue. 

There is a mythical being that no one has ever seen but everyone
longs for: the final complete machinery, the ultimate software 
beast which, once arrived, will shine on us with its sublime
eternal life. No, it's a myth. We are Frankenstein's brothers
and our beasts are cousins of Frankenstein's child. Our machinery 
exits the lab as a composite, imperfect craft, subject to all kinds
of abstract rusting forces with destructive power. The software 
will be confronted with hardware and operating system changes,
security vulnerabilities, end of life of its constituent components, 
the fatal discovery of flaws that can lead to deliberate or
unintended disasters; it can be surpassed by better software 
and left to rust from lack of maintenance, etc.

Being capable of reaching production, or of giving additional features 
to production systems, requires that everyone involved agree to avoid 
expanding the committed work any further. After we deliver, we can
talk again about new horizons. This is especially crucial if we are working 
alone or with a couple of friends to build something by heart; it is of vital
importance if we are rogue programmers and engineers. We are constantly 
tempted to add cool features on the go. 

If we want to reach a new production state as soon as possible, we should
strive for scope minimization: limit the scope of changes to the bare 
minimum we need to deliver a better feature set. If we are going for the first
shiny release, there is a term coined for that scope minimization: Minimum 
Viable Product.

Even for simple features, any single line of code changed can introduce
an error; it requires proper testing, and we need to be sure there is no sneaky 
way it can affect other parts of the system. Even simple features have to 
be exercised to ensure usability and utility, they have to be supported 
in the future, they expand our codebase, they expand the domain we are
dealing with. We expand the manual work that needs to be done.  
Their proper completion might require writing additional test cases. 

We have to freeze functionality at some point to allow getting ready
to deploy a new version of the machinery in the near future. When we freeze,
the beast still evolves: it is subject to manual and automatic tests, 
it's checked and rechecked to ensure it is working properly, bug fixes
are needed, some get resolved, others just tracked for future resolution 
if we can live with the rough edges. We will deliver an imperfect machine 
through ever more minor adjustments until we converge to a reasonably useful 
state longed for by others or ourselves. 

Given the fact that going to production is an elaborate process and each
delivery entails its risks and costs, functionality is normally packed to
be promoted as an all-or-none set. Each version delivered is also properly 
tagged to have a reference point to go back to or branch from when needed.
Being ready implies we promote the current state to more stable environments, 
progressively climbing to production. If it is not good enough we back pedal,
correct the problems and climb again. We then subject the machine 
to real use, progressively if possible. We might broadcast it for public 
experimentation or productive usage.

Shortcuts and Bottlenecks

layered analysis, removal tradeoff, non standard practices, 
simple architectures, working vs. technically complete, 
incomplete across vs. incomplete layer, incomplete layers, 
pitching proposals, one excellent thing, migration to simpler, 
faster technology, releasing locked resources, not prowess, not speed

When we are fighting and losing the battle to reach production in 
an acceptable timeframe, the ambience gets thick, pressure gets higher, 
people get irritable, stress spreads and catches pretty much anybody, 
people's interactions become rough. Even if we are just a couple of 
friends crafting machinery for the sake of it, a subset of those problems 
arises. Even if we are just one guy working, we become impatient and
faith in our bets can be depleted. We are in troubled waters if, 
over and over, we expand the timeframe.

When this kind of situation arrives, instead of just focusing on the work
to be done and continuing to push the wall the same way, it is better to hit 
the hand brake, get out of the car, and strategise. 

It is better to analyse the silent bottlenecks we are pushing through. 

It is time to reflect and take a bird's eye view of the cliff we have to go through.
Where are we spending too much time? It might be a functionality that 
is too big and cumbersome to build; it might be that we have just too 
much functionality to add before reaching production; is it that the 
functionality scope ever expands? In these cases, the bottleneck is 
imposed in the PM domain (either impersonated by a person or just a role
someone plays). It might be that the work requires dealing with a massively 
complex and slow set of components that are a pain to use; it might be 
that the components, language and frameworks in use, besides being complex,
lack proper documentation; it might be that the development related 
processes load too much weight onto developers; it might be that the quality
required is too high for some parts of the project to get things moving.
If the problems live in these cases, then we are dealing with engineering 
problems. It might also be that the architecture and structure, the development
environment and its tools impose too much work and make constructing 
functionality cumbersome and a pain; if this is the case, our bottlenecks 
are in the architecture domain. If the engineering and architecture are fine
but the codebase is a massive mess, full of cryptic, tangled, and long
semi-obsolete code, if the code has just too much ad hoc boilerplate, 
is repetitive and inconsistent, if it is not organized in proper
levels of conceptualization and it is hard to map functionality to code, 
if it is cross-sectionally unreliable, then we are dealing with
programmers' bottlenecks.

Wherever these bottlenecks lie, we have to ponder our chances of getting
rid of them, analyse risk and reward, ponder the tradeoff between keeping
the status quo and taking a bold move. We have to pick our battle;
we have to pick the most promising one, all things considered.
Before moving on, we have to face the possibility that we may fail 
in the endeavour and just lose our time and add more delay to an 
already overdue project. When we are on our own, it is just a matter
of taking the bold decision, going deep into it and getting the glory 
or miserably failing. The only key preparation we need is to decide
how much time we can comfortably afford to spend, and to make 
a proper snapshot of the system to back pedal to if reality 
doesn't match our expectations.

When we are on a larger team, we need to be sure that the decision 
chain is on the same line. We can explore a bit to assess the viability,
but refrain from exploring too much, to avoid the risk of getting a 
bucket of cold water in the face when no one supports us. 

Proper communication implies going to the right person who holds 
responsibility for the issue at hand, clearly exposing the problem 
the team is facing and making sure they too see it as a problem,
then pitching the envisioned solution briefly, at a technical level 
they can understand, explaining its benefits and why it's crucial
given the state of things, and doing the same with anyone involved 
in the decision chain, at a joint meeting or in several. Sense the
waters: if you see people don't seem to want to follow you on this
matter, just chill, ask if anyone has a proposal to get rid of this 
bottleneck or any other, and leave it for their consideration; just
be sure not to get into a merry brainstorm of half-baked ideas. If your
proposal gets green lights, then make sure the key members of the
team are with you, or at least that they won't form a mutiny, especially
if you jumped a few ranks of the command chain. While to some 
extent we can count on everyone being interested in the project's 
success, it's not uncommon that personal ego, envy, competition,
or just hurt feelings get involved. In the short run, you
might gain some animosity towards you. 

What is interesting about bottleneck removal is that it's normally 
not a long, tedious, repetitive list of menial tasks, grain of sand 
by grain of sand to build a large castle. Bottleneck removal is a strategy,
then a tactic, and a rather small although complex implementation 
that yields brutal rewards. It allows you to challenge yourself 
and work with high focus, silently, outside of the usual chore-like 
realms of duty compliance, and face glory or doom. Bottleneck 
removal is not a matter of technical prowess and blazing fast coding;
it's a combination of strategizing at the right level, the ability 
to determine the biggest solvable problem, communicating with 
the right people, and taking a bold move to widen the clogged parts 
of the system and its processes.

Sometimes the misfit is not in a particular part of the system;
it's instead related to the fact that we are stuck with a practice 
at the engineering or architecture level that imposes a huge drag.
It can be the case that it is a standard practice, a current trend 
and default go-to way of making things; while that can be beneficial 
in uncountable scenarios, and it is always wise to know them, it can
be a very bad fit in our current overdue scenario. Maybe we
are tied to comply with it as a de facto standard but, nonetheless, 
it's always arguable that moving against launching to production 
for the sake of technical style is a bad trade on a foreseen overdue 
timeline, one that casts a shadow over the project and poses a risk 
of cancelling the whole deal or worse. It's not hard to negotiate with non
technical people to cast the design into an unorthodox mold 
if it paves the road to finally launch, reduces non technical 
risks and lets pressure escape for everyone. And once the non 
technical crowd is on board, negotiation with savvy experts 
is way simpler; it gets leveraged in our favor by the 
weight of the backing folks who want problems to go away 
no matter what.

If the unorthodox approach is handled properly, ensuring 
proper isolation and endurance for the open waters of real usage, 
we can launch sooner with no detriment to the future of the platform 
or component we all worked hard to build. Once we have launched and got things
under control, including the unavoidable stabilization and adjustments 
of the machinery that will arise, we can plan for casting the machinery 
into an orthodox design. Software is massively flexible and can accommodate 
all kinds of changes, including recasting its design to whatever
is needed, as long as we didn't go too far and allow wild approaches 
to proliferate cross-sectionally. We can, once relieved, as said,
refocus on compliance with de facto standards, trends and good practices, 
especially if it was explicitly required. 

The same applies to unorthodox processes, if we shortcut layers 
of responsibilities, normal development cycles and systematic procedures, 
and whatnot. Normal processes are necessary and good practice, and likely
unavoidable to ensure a stable system, whatever it is, but before production
we can certainly be better off, even if it leaves processes disorganized and 
the most meticulous members of the team quite pissed off. Time to launch,
when it becomes critical, overrides the structured application of engineering 
principles in favor of allowing the most creative and bold guys on 
the team to unwire the time bomb.

One way we can gain faith in the process is getting the functionality 
complete to the point where it proves, without any shadow 
of doubt, that the machinery can fulfill its requirements, trading
off against non-functional soft requirements that can be postponed
in the short term. The key to surviving this approach is to ensure the 
unimplemented features get treated equally, cross-sectionally, in the system; 
that way, there are no partial implementations spread unevenly that will prove 
to be a pain and a technical debt near impossible to track and fix. If we know 
what we are leaving aside, then we can handle it later in a well
organized manner. A good way to organize this partial implementation 
is doing it in a layered way, to have a clear cut of what is done
and what is not yet done, although in many cases that is not possible, 
so we are forced to tackle it later with special care. Depending on 
the waters we are going to navigate, we can postpone exception and 
error treatment, security features, guarantees to control service 
health metrics, important stuff that has no visibility until the 
system gets stressed enough. Whatever the case, we can deliver core
functionality soon enough to save the project by showing viability. 
At least with respect to the well behaved paths of use, the system
showed itself to be complete enough. And in quite a brief time, we can 
address the postponed non-functional requirements. It is better
to get something working and technically incomplete than 
to have a finely polished functionality without providing other 
critical parts of the system. This is more a political trade-off 
than a shortcut: the work to be done can be even larger 
than what we were on the path to do, but we fragmented and
reorganized the construction into a politically safer order.

Another way to shortcut the path to production and win the launch
is to provide something largely longed for due to its impact on the
system's users and the client. For example, it might be something
that unlocks commercial expansion, something that alleviates the tasks
at hand for a good deal of resources, allowing work to be redirected to 
more important tasks. When we provide something exceptionally good, 
something that seemed completely out of reach, we can even be pushed 
to production in spite of having much pending work according to 
the envisioned plans.

In the engineering domains, we can get a good tradeoff by replacing 
a flexible component that is a pain to develop with in favor of a 
more modest one that is pleasant to work with; if that replacement 
resides in the path that was causing the bottleneck, we can have 
a huge payoff. We can take proper care, as before, to isolate 
access to the component to allow easy future replacement, in case 
its usage is widespread and could result in a hard refactoring. 

It is key, when we consider parts of the system that
are subject to change, not potentially on a remote horizon 
but in the foreseeable future, to take on the additional work of 
providing isolation. That way the replacement is a simple
enough matter. What we should avoid is expanding the
architecture to make replacement a matter of configuration
through patterns that imply the proliferation of interfaces 
and dynamic code. If done through those means, we will place
a load on the development that makes everything more 
complicated, making debugging a pain through dynamic code
which requires figuring out exactly which kind of objects 
and which kind of behavior will finally get run. Isolation 
more often than not beats complex configurations, inversion 
of control, and aggressive preparation for change. Isolation
not only provides simple replacement of components; it gives
the great benefit of avoiding the proliferation of foreign 
API access which is not well suited to the narrow 
cases in which we need to use that kind of component.
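
A sketch of that kind of isolation in Python, with an invented
third-party mailer standing in for any foreign API: one thin module
exposes exactly the narrow calls we need, so swapping the vendor later
touches a single file:

    # notifications.py -- the only place that knows which mail vendor we use.
    import somemailer              # hypothetical third-party client

    _client = somemailer.Client(api_key="...")

    def send_alert(address, subject, body):
        # The rest of the code base calls send_alert(); it never sees the
        # vendor's objects, options or error types.
        _client.messages.create(to=address, subject=subject, text=body)

No interfaces, no dependency injection machinery: replacing the vendor
means rewriting this one module while keeping send_alert()'s signature.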


TO BE CONTINUED

***** IDEAS ******

keep branches by hand. DETAIL: freeze development: don't keep expanding a prototype without having a stable production version; once frozen, only add fixes. DETAIL: role times: depending on the role, more or less time has to be dedicated. DETAIL: don't develop half-finished ideas, only when one is blocked

***** DONE *****

DETAILED: performance not only for production demand but also to keep flow. DETAILED: flatten: use flat structures, class hierarchies, folder hierarchies, etc. DETAILED: defaults: use default settings instead of mandatory explicit configuration; testability, debugging without having to break the code and hack it all over the place. DETAILED: automated tools, support for deployment: don't automate rarely used tasks or tasks that are very complex to automate. INTRO: don't develop several functionalities at once, half-done:


personal stories: DO NOT TOUCH + PHONE NUMBER
