From 763dfbed3a024f959840781068f2113247222a22 Mon Sep 17 00:00:00 2001
From: Greg Hays
Date: Tue, 16 Apr 2024 16:06:49 -0700
Subject: [PATCH 01/18] CODE RUB: Foundations 2.1 Grammar Check
Closes: #271
---
2. Services/2.1 Foundations/2.1 Foundations.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/2. Services/2.1 Foundations/2.1 Foundations.md b/2. Services/2.1 Foundations/2.1 Foundations.md
index c669668..689c021 100644
--- a/2. Services/2.1 Foundations/2.1 Foundations.md
+++ b/2. Services/2.1 Foundations/2.1 Foundations.md
@@ -1225,4 +1225,4 @@ Here's some common scenarios for mapping native or inner local exceptions to out
[*] [Standardizing Validations & Exceptions](https://www.youtube.com/watch?v=Wtpxb7yPQP0)
-[*] [Test-Driving Non-Circuit-Breaking Validations](https://www.youtube.com/watch?v=guJPrIQ0kJk)
+[*] [Test-Driving Non-Circuit-Breaking Validations](https://www.youtube.com/watch?v=guJPrIQ0kJk)
\ No newline at end of file
From 99edf6996ee20f85a657695742135689ae44b563 Mon Sep 17 00:00:00 2001
From: Greg Hays
Date: Tue, 16 Apr 2024 17:08:23 -0700
Subject: [PATCH 02/18] CODE RUB: Processings 2.2 Grammar Check
---
.../2.2 Processings/2.2 Processings.md | 62 +++++++++----------
TheStandard.sln | 13 ++++
2 files changed, 44 insertions(+), 31 deletions(-)
create mode 100644 TheStandard.sln
diff --git a/2. Services/2.2 Processings/2.2 Processings.md b/2. Services/2.2 Processings/2.2 Processings.md
index 21b2ae6..b1025a7 100644
--- a/2. Services/2.2 Processings/2.2 Processings.md
+++ b/2. Services/2.2 Processings/2.2 Processings.md
@@ -27,11 +27,11 @@ TryCatch(async () =>
});
```
-Processing services make Foundation services nothing but a layer of validation on top of the existing primitive operations. Which means that Processing services functions are beyond primitive, and they only deal with local models as we will discuss in the upcoming sections.
+Processing services make Foundation services nothing but a layer of validation on top of the existing primitive operations. This means that Processing services' functions are beyond primitive, and they only deal with local models, as we will discuss in the upcoming sections.
## 2.2.1 On The Map
-When used, Processing services live between foundation services and the rest of the application. they may not call Entity or Business brokers, but they may call Utility brokers such as logging brokers, time brokers and any other brokers that offer supporting functionality and not specific to any particular business logic. Here's a visual of where processing services are located on the map of our architecture:
+When used, Processing services live between foundation services and the rest of the application. They may not call Entity or Business brokers, but they may call Utility brokers such as logging brokers, time brokers, and any other brokers that offer supporting functionality and are not specific to any particular business logic. Here's a visual of where processing services are located on the map of our architecture:
@@ -39,12 +39,12 @@ When used, Processing services live between foundation services and the rest of
-On the right side of a Processing service lies all the non-local models and functionality, whether it's through the brokers, or the models that the foundation service is trying to map into local models.
-On the left side of Processing services is pure local functionality, models and architecture. Starting from the Processing services themselves, there should be no trace or track of any native or non-local models in the system.
+On the right side of a Processing service lie all the non-local models and functionality, whether it's through the brokers or the models that the foundation service is trying to map into local models.
+On the left side of Processing services are pure local functionality, models, and architecture. Starting from the Processing services themselves, there should be no trace or track of any native or non-local models in the system.
## 2.2.2 Characteristics
-Processing services in general are combiners of multiple primitive-level functions to produce a higher-order business logic. but they have much more characteristics than just that, let's talk about those here.
+Processing services in general are combiners of multiple primitive-level functions to produce a higher-order business logic. But they have many more characteristics than just that; let's talk about those here.
### 2.2.2.0 Language
@@ -53,8 +53,8 @@ Usually, processing services combine two or more primitive operations from the f
#### 2.2.2.0.0 Functions Language
-At a glance, Processing services language change from primitive operations such as `AddStudent` or `RemoveStudent` to `EnsureStudentExists` or `UpsertStudent`. they usually offer a more advanced business-logic operations to support a higher order functionality.
-Here's some examples for the most common combinations a processing service may offer:
+At a glance, the Processing services language changes from primitive operations such as `AddStudent` or `RemoveStudent` to `EnsureStudentExists` or `UpsertStudent`. They usually offer more advanced business logic operations to support higher-order functionality.
+Here are some examples of the most common combinations a processing service may offer:
| Processing Operation | Primitive Functions |
| ------------------------ | ---------------------------------------------------------- |
@@ -63,7 +63,7 @@ Here's some examples for the most common combinations a processing service may o
| VerifyStudentExists | RetrieveAllStudents |
| TryRemoveStudentAsync | RetrieveStudentById + RemoveStudentByIdAsync |
-As you can see, the combination of primitive functions processing services do might also include adding an additional layer of logic on top of the existing primitive operation. For instance, `VerifyStudentExists` takes advantage of the `RetrieveAllStudents` primitive function, and then adds a boolean logic to verify the returned student by and Id from a query actually exists or not before returning a `boolean`.
+As you can see, the combination of primitive functions that processing services perform might also include adding an additional layer of logic on top of the existing primitive operation. For instance, `VerifyStudentExists` takes advantage of the `RetrieveAllStudents` primitive function, then adds boolean logic to verify whether a student with a given Id actually exists before returning a `boolean`.
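+To make that combination idea a bit more concrete, here's a rough sketch of what an `UpsertStudent` routine from the list above could look like; the `IStudentService` members and the validation helper used here are assumed for illustration only:
+```csharp
+public partial class StudentProcessingService
+{
+    // Assumed foundation-level dependency exposing the primitive operations.
+    private readonly IStudentService studentService;
+
+    public ValueTask<Student> UpsertStudentAsync(Student student) =>
+    TryCatch(async () =>
+    {
+        ValidateStudentOnUpsert(student);
+
+        // Combine two primitives: retrieve first, then add or modify.
+        // This sketch assumes the retrieval returns null when no match exists.
+        Student maybeStudent =
+            await this.studentService.RetrieveStudentByIdAsync(student.Id);
+
+        return maybeStudent is null
+            ? await this.studentService.AddStudentAsync(student)
+            : await this.studentService.ModifyStudentAsync(student);
+    });
+}
+```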
#### 2.2.2.0.1 Pass-Through
@@ -72,18 +72,18 @@ Unlike Foundation services, Processing services are required to have the identif
#### 2.2.2.0.2 Class-Level Language
-More importantly Processing services must include the name of the entity that is supported by their corresponding Foundation service.
-For instance, if a Processing service is dependant on a `TeacherService`, then the Processing service name must be `TeacherProcessingService`.
+More importantly, Processing services must include the name of the entity that is supported by their corresponding Foundation service.
+For instance, if a Processing service is dependent on a `TeacherService`, then the Processing service name must be `TeacherProcessingService`.
### 2.2.2.1 Dependencies
Processing services can only have two types of dependencies. a corresponding Foundation service, or a Utility broker. That's simply because Processing services are nothing but an extra higher-order level of business logic, orchestrated by combined primitive operations on the Foundation level.
-Processing services can also use Utility brokers such as `TimeBroker` or `LoggingBroker` to support it's reporting aspect. but it shall never interact with an Entity or Business broker.
+Processing services can also use Utility brokers such as `TimeBroker` or `LoggingBroker` to support their reporting aspect, but they shall never interact with an Entity or Business broker.
### 2.2.2.2 One-Foundation
-Processing services can interact with one and only one Foundation service. In fact without a foundation service there can never be a Processing layer. and just like we mentioned above about the language and naming, Processing services take on the exact same entity name as their Foundation dependency.
-For instance, a processing service that handles higher-order business-logic for students will communicate with nothing but its foundation layer, which would be `StudentService` for instance. That means that processing services will have one and only one service as a dependency in its construction or initiation as follows:
+Processing services can interact with one and only one Foundation service. In fact, without a foundation service there can never be a Processing layer. And just as we mentioned above about the language and naming, Processing services take on the same entity name as their Foundation dependency.
+For instance, a processing service that handles higher-order business logic for students will communicate with nothing but its foundation layer, which in this case would be `StudentService`. That means that processing services will have one and only one service as a dependency in their construction or initiation as follows:
```csharp
public class StudentProcessingService
@@ -99,7 +99,7 @@ However, processing services may require dependencies on multiple utility broker
### 2.2.2.3 Used-Data-Only Validations
-Unlike the Foundation layer services, Processing services only validate what it needs from it's input. For instance, if a Processing service is required to validate a student entity exists, and it's input model just happens to be an entire `Student` entity, it will only validate that the entity is not `null` and that the `Id` of that entity is valid. the rest of the entity is out of the concern of the Processing service.
+Unlike Foundation layer services, Processing services only validate what they need from their input. For instance, if a Processing service is required to validate that a student entity exists, and its input model just happens to be an entire `Student` entity, it will only validate that the entity is not `null` and that the `Id` of that entity is valid. The rest of the entity is outside the concern of the Processing service.
Processing services delegate full validations to the layer of services that is concerned with that which is the Foundation layer. here's an example:
```csharp
@@ -121,21 +121,21 @@ TryCatch(async () =>
});
```
-Processing services are also not very concerned about outgoing validations except for what it's going to use within the same routine. For instance, if a Processing service is retrieving a model, and it's going to use this model to be passed to another primitive-level function on the Foundation layer, the Processing service will be required to validate that the retrieved model is valid depending on which attributes of the model it uses.
-For Pass-through scenarios however, processing services will delegate the outgoing validation to the foundation layer.
+Processing services are also not very concerned about outgoing validations except for what is going to be used within the same routine. For instance, if a Processing service retrieves a model and then passes that model to another primitive-level function on the Foundation layer, the Processing service is required to validate that the retrieved model is valid, depending on which attributes of the model it uses.
+For Pass-Through scenarios, however, processing services will delegate the outgoing validation to the foundation layer.
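+To illustrate the used-data-only idea above, here's a hedged sketch of a processing routine that retrieves a student, validates only the parts it actually uses, and passes the model back down to the foundation layer; the routine name, the `StudentStatus` type, and the validation helpers are assumed for illustration:
+```csharp
+public ValueTask<Student> MarkStudentAsActiveAsync(Guid studentId) =>
+TryCatch(async () =>
+{
+    ValidateStudentId(studentId);
+
+    Student maybeStudent =
+        await this.studentService.RetrieveStudentByIdAsync(studentId);
+
+    // Used-data-only outgoing validation: this routine only touches the
+    // retrieved student and its Status, so only those are validated here.
+    ValidateStudentExists(maybeStudent);
+
+    maybeStudent.Status = StudentStatus.Active;
+
+    return await this.studentService.ModifyStudentAsync(maybeStudent);
+});
+```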
## 2.2.3 Responsibilities
-Processing services main responsibility is to provide higher-order business logic. This happens along with the regular signature mapping and various use-only validations which we will discuss in detail in this section.
+The processing service's main responsibility is to provide higher-order business logic. This happens along with the regular signature mapping and various used-data-only validations, which we will discuss in detail in this section.
### 2.2.3.0 Higher-Order Logic
-Higher-order business logic are functions that are above primitive. For instance, `AddStudentAsync` function is a primitive function that does one thing and one thing only. But higher-order logic is when we try to provide a function that changes the outcome of a single primitive function like `VerifyStudentExists` which returns a boolean value instead of the entire object of the `Student`, or a combination of multiple primitive functions such as `EnsureStudentExistsAsync` which is a function that will only add a given `Student` model if and only if the aforementioned object doesn't already exist in storage. here's some examples:
+Higher-order business logic refers to functions that are above primitive. For instance, the `AddStudentAsync` function is a primitive function that does one thing and one thing only. But higher-order logic is when we provide a function that changes the outcome of a single primitive function, like `VerifyStudentExists`, which returns a boolean value instead of the entire `Student` object, or that combines multiple primitive functions, such as `EnsureStudentExistsAsync`, which will only add a given `Student` model if and only if the aforementioned object doesn't already exist in storage. Here are some examples:
#### 2.2.3.0.0 Shifters
-The shifter pattern in a higher order business logic is when the outcome of a particular primitive function is changed from one value to another. Ideally a primitive type such as a `bool` or `int` not a completely different type as that would violate the purity principle.
-For instance, in a shifter pattern, we want to verify if a student exists or not. We don't really want the entire object, but just whether it exists in a particular system or not. Now, this seems like a case where we only need to interact with one and only one foundation service and we are shifting the value of the outcome to something else. Which should fit perfectly in the realm of the processing services. Here's an example:
+The shifter pattern in higher-order business logic is when the outcome of a particular primitive function is changed from one value to another. Ideally, the outcome is shifted to a primitive type such as a `bool` or `int`, not to a completely different type, as that would violate the purity principle.
+For instance, in a shifter pattern, we want to verify if a student exists or not. We don't want the entire object, but just whether it exists in a particular system or not. Now, this seems like a case where we only need to interact with one and only one foundation service, and we are shifting the value of the outcome to something else, which should fit perfectly in the realm of processing services. Here's an example:
```csharp
public ValueTask VerifyStudentExists(Guid studentId) =>
@@ -150,9 +150,9 @@ TryCatch(async () =>
});
```
-In the snippet above, we provided a higher order business logic, by returning a boolean value of whether a particular student with a given `Id` exists in the system or not. There are cases where your orchestration layer of services isn't really concerned with all the details of a particular entity but just knowing whether it exists or not as a part of an upper business logic or what we call orchestration.
+In the snippet above, we provided higher-order business logic by returning a boolean value indicating whether a particular student with a given `Id` exists in the system or not. There are cases where your orchestration layer of services isn't really concerned with all the details of a particular entity but just with knowing whether it exists or not as a part of an upper business logic, or what we call orchestration.
-Here's another popular example for processing services shifting pattern:
+Here's another popular example of a processing services shifting pattern:
```csharp
public int RetrieveStudentsCount() =>
@@ -172,7 +172,7 @@ In our case here we validate the outgoing data as much as the incoming, especial
#### 2.2.3.0.1 Combinations
-The combination of multiple primitive functions from the foundation layer to achieve a higher-order business logic is one of the main responsibilities of a processing service. As we mentioned before, some of the most popular examples is for ensuring a particular student model exists as follows:
+The combination of multiple primitive functions from the foundation layer to achieve a higher-order business logic is one of the main responsibilities of a processing service. As we mentioned before, one of the most popular examples is ensuring a particular student model exists, as follows:
```csharp
public async ValueTask EnsureStudentExistsAsync(Student student) =>
@@ -194,22 +194,22 @@ TryCatch(async () =>
});
```
-In the code snippet above, we combined `RetrieveAll` with `AddAsync` to achieve a higher-order business logic operation. The `EnsureAsync` operation which needs to verify something or entity exists first before trying to persist it. The terminology around these higher-order business logic routines is very important. Its importance lies mainly in controlling the expectations of the outcome and the inner functionality. But it also ensures less cognitive resources from the engineers are required to understand the underlying capabilities of a particular routine.
-The conventional language used in all of these services also ensures that redundant capability will not be created mistakenly. For instance, an engineering team without any form of standard might create `TryAddStudentAsync` while already having an existing functionality such as `EnsureStudentExistsAsync` which does exactly the same thing. The convention here with the limitation of the size of capabilities a particular service may have ensures redundant work shall never occur in any occassion.
-There are so many different instances of combinations that can produce a higher-order business logic, for instance we may need to implement a functionality that ensure a student is removed. We use `EnsureStudentRemovedByIdAsync` to combine a `RetrieveById` and a `RemoveById` in the same routine. It all depends on what level of capabilities an upstream service may need to implement such a functionality.
+In the code snippet above, we combined `RetrieveAll` with `AddAsync` to achieve a higher-order business logic operation: the `EnsureAsync` operation, which needs to verify an entity exists before trying to persist it. The terminology around these higher-order business logic routines is very important. Its importance lies mainly in controlling the expectations of the outcome and the inner functionality. But it also ensures that engineers need fewer cognitive resources to understand the underlying capabilities of a particular routine.
+The conventional language used in all of these services also ensures that redundant capability will not be created mistakenly. For instance, an engineering team without any form of standard might create `TryAddStudentAsync` while already having an existing functionality such as `EnsureStudentExistsAsync`, which does the same thing. The convention here, along with the limitation on the size of capabilities a particular service may have, ensures redundant work shall never occur on any occasion.
+There are so many different instances of combinations that can produce a higher-order business logic. For instance, we may need to implement functionality that ensures a student is removed. We use `EnsureStudentRemovedByIdAsync` to combine a `RetrieveById` and a `RemoveById` in the same routine. It all depends on what level of capabilities an upstream service may need to implement such a functionality.
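+Here's a rough sketch of what such an `EnsureStudentRemovedByIdAsync` combination could look like; the validation helper and the foundation members used here are assumed for illustration:
+```csharp
+public ValueTask EnsureStudentRemovedByIdAsync(Guid studentId) =>
+TryCatch(async () =>
+{
+    ValidateStudentId(studentId);
+
+    // Combine RetrieveById and RemoveById in one higher-order routine:
+    // only attempt the removal if the student actually exists.
+    Student maybeStudent =
+        await this.studentService.RetrieveStudentByIdAsync(studentId);
+
+    if (maybeStudent is not null)
+    {
+        await this.studentService.RemoveStudentByIdAsync(studentId);
+    }
+});
+```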
### 2.2.3.1 Signature Mapping
-Although processing services operate fully on local models and local contracts, they are still required to map foundation-level services' models to their own local models. For instance, if a foundation service is throwing `StudentValidationException` then processing services will map that exception to `StudentProcessingDependencyValidationException`. Let's talk about mapping in this section.
+Although processing services operate fully on local models and local contracts, they are still required to map foundation-level services' models to their local models. For instance, if a foundation service is throwing `StudentValidationException` then processing services will map that exception to `StudentProcessingDependencyValidationException`. Let's talk about mapping in this section.
#### 2.2.3.1.0 Non-Exception Local Models
-In general, processing services are required to map any incoming or outgoing objects with a specific model to its own. But that rule doesn't always apply to non-exception models. For instance, if a `StudentProcessingService` is operating based on a `Student` model, and there's no need for a special model for this service, then the processing service may be permitted to use the exact same model from the foundation layer.
+In general, processing services are required to map any incoming or outgoing objects with a specific model to their own. But that rule doesn't always apply to non-exception models. For instance, if a `StudentProcessingService` is operating based on a `Student` model, and there's no need for a special model for this service, then the processing service may be permitted to use the same model from the foundation layer.
#### 2.2.3.1.1 Exception Models
-When it comes to processing services handling exceptions from the foundation layer, it is important to understand that exceptions in our Standard are more expressive in their naming conventions and their role more than any other model. Exceptions here define the what, where and why every single time they are thrown.
-For instance, an exception that's called `StudentProcessingServiceException` indicates the entity of the exception which is the `Student` entity. Then it indicates the location of the exception which is the `StudentProcessingService`. And lastly it indicates the reason for that exception which is `ServiceException` indicating an internal error to the service that is not a validation or a dependency of nature that happened.
+When it comes to processing services handling exceptions from the foundation layer, it is important to understand that exceptions in our Standard are more expressive in their naming conventions and their role than any other model. Exceptions here define the what, where, and why every single time they are thrown.
+For instance, an exception that's called `StudentProcessingServiceException` indicates the entity of the exception, which is the `Student` entity. Then it indicates the location of the exception, which is the `StudentProcessingService`. Lastly, it indicates the reason for that exception, which is `ServiceException`, indicating an internal error within the service that is neither a validation nor a dependency issue.
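+As a simplified sketch of that mapping (the delegate, the logging broker, and the exact exception types here are assumed and may differ in your implementation), the wrapping typically happens inside the processing service's exception-handling `TryCatch` block:
+```csharp
+private async ValueTask<Student> TryCatch(ReturningStudentFunction returningStudentFunction)
+{
+    try
+    {
+        return await returningStudentFunction();
+    }
+    catch (StudentValidationException studentValidationException)
+    {
+        // Wrap the foundation-level validation exception into the
+        // processing-level dependency validation exception, log, and rethrow.
+        var studentProcessingDependencyValidationException =
+            new StudentProcessingDependencyValidationException(
+                studentValidationException);
+
+        this.loggingBroker.LogError(studentProcessingDependencyValidationException);
+
+        throw studentProcessingDependencyValidationException;
+    }
+}
+```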
Just like the foundation layer, processing services will do the following mapping to occurring exceptions from its dependencies:
| Exception | Wrap Inner Exception With | Wrap With | Log Level |
@@ -226,4 +226,4 @@ Just like the foundation layer, processing services will do the following mappin
[*] [Processing services in Action (Part 3)](https://www.youtube.com/watch?v=R4sA4V4-_5o)
-[*] [Processing services in Action (Part 4)](https://www.youtube.com/watch?v=u4uqBiB1TaI)
+[*] [Processing services in Action (Part 4)](https://www.youtube.com/watch?v=u4uqBiB1TaI)
\ No newline at end of file
diff --git a/TheStandard.sln b/TheStandard.sln
new file mode 100644
index 0000000..536dbd0
--- /dev/null
+++ b/TheStandard.sln
@@ -0,0 +1,13 @@
+
+Microsoft Visual Studio Solution File, Format Version 12.00
+# Visual Studio Version 17
+VisualStudioVersion = 17.6.33829.357
+MinimumVisualStudioVersion = 10.0.40219.1
+Global
+ GlobalSection(SolutionProperties) = preSolution
+ HideSolutionNode = FALSE
+ EndGlobalSection
+ GlobalSection(ExtensibilityGlobals) = postSolution
+ SolutionGuid = {6382F136-1802-4BD2-8701-80D07EFAED1F}
+ EndGlobalSection
+EndGlobal
From e4cc6367adfb4181c5128ce1751b0b0385696e1a Mon Sep 17 00:00:00 2001
From: Greg Hays
Date: Tue, 16 Apr 2024 18:32:01 -0700
Subject: [PATCH 03/18] CODE RUB: Orchestrations 2.3 Grammar Check
---
.../2.3 Orchestrations/2.3 Orchestrations.md | 156 +++++++++---------
1 file changed, 78 insertions(+), 78 deletions(-)
diff --git a/2. Services/2.3 Orchestrations/2.3 Orchestrations.md b/2. Services/2.3 Orchestrations/2.3 Orchestrations.md
index 9ae0245..0c17a0b 100644
--- a/2. Services/2.3 Orchestrations/2.3 Orchestrations.md
+++ b/2. Services/2.3 Orchestrations/2.3 Orchestrations.md
@@ -2,7 +2,7 @@
## 2.3.0 Introduction
-Orchestration services are the combinators between multiple foundation or processing services to perform a complex logical operation. Their main responsibilities are multi-entity logical operations and delegating the dependencies of those operations to downstream processing or foundation services.
+Orchestration services combine multiple foundation or processing services to perform a complex logical operation. Their main responsibilities are multi-entity logical operations and delegating the dependencies of those operations to downstream processing or foundation services.
Orchestration services' primary responsibility is encapsulating operations requiring two or three business entities.
@@ -20,15 +20,15 @@ TryCatch(async () =>
```
-In the above example, the `LibraryCardOrchestrationService` calls both the `StudentProcessingService` and `LibraryCardProcessingService` to perform a complex operation. Firstly, verify the student's existence for which we are creating a library card, and verify the student's enrollment; and secondly, create the library card.
+In the above example, the `LibraryCardOrchestrationService` calls both the `StudentProcessingService` and `LibraryCardProcessingService` to perform a complex operation. First, we verify the student's existence and enrollment; then we create the library card.
-The operation of creating a library card for a given student cannot be performed by simply calling the library card service; because the library card service (processing or foundation) does not have access to all the details about the student. Therefore a combination logic is required to ensure a proper flow is in place.
+Creating a library card for a given student cannot be performed by simply calling the library card service, because the library card service (processing or foundation) does not have access to all the details about the student. Therefore, a combination logic is required to ensure that a proper flow is in place.
It's important to understand that orchestration services are only required if we need to combine multi-entity operations, which can be primitive or higher-order. In some architectures, orchestration services might not even exist. That's simply because some microservices might be merely responsible for applying validation logic and persisting and retrieving data from storage, no more or no less.
## 2.3.1 On The Map
-Orchestration services are one of the core business logic components in any system, positioned between single entity services (such as processing or foundation) and advanced logic services such as coordination services, aggregation services, or simply exposers such as controllers web components or anything else. Here's a high-level overview of where orchestration services may live:
+Orchestration services are one of the core business logic components in any system, positioned between single entity services (such as processing or foundation) and advanced logic services such as coordination services, aggregation services, or simply exposers such as controllers, web components, or anything else. Here's a high-level overview of where orchestration services may live:
@@ -36,11 +36,11 @@ Orchestration services are one of the core business logic components in any syst
-As shown above, Orchestration services have quite a few dependencies and consumers. They are the core engine of any software. On the right-hand side, you can see an orchestration service's dependencies. Since a processing service is optional based on whether a higher-order business logic is needed or not - orchestration services can combine multiple foundation services as well.
+As shown above, Orchestration services have quite a few dependencies and consumers. They are the core engine of any software. On the right-hand side, you can see an orchestration service's dependencies. Since a processing service is optional based on whether a higher-order business logic is needed, orchestration services can combine multiple foundation services as well.
-The existence of an Orchestration service warrants the presence of a Processing service. But that's not always the case. There are situations where all orchestration services need to finalize a business flow to interact with primitive-level functionality.
+The existence of an Orchestration service warrants the presence of a Processing service. But that's not always the case. In some situations, all an orchestration service needs to finalize a business flow is to interact with primitive-level functionality.
-However, an Orchestration service could have several consumers, such as coordination services (orchestrators of orchestrators), aggregation services or simply an exposer. Exposers are like controllers, view services, UI components, or another foundation or processing service in case of putting messages back on a queue - which we will discuss further in our Standard.
+However, an Orchestration service could have several consumers, such as coordination services (orchestrators of orchestrators), aggregation services, or an exposer. Exposers are like controllers, view services, UI components, or another foundation or processing service in case of putting messages back on a queue - which we will discuss further in our Standard.
## 2.3.2 Characteristics
@@ -49,25 +49,25 @@ In general, orchestration services are concerned with combining single-entity pr
### 2.3.2.0 Language
Just like Processing services, the language used in Orchestration services defines the level of complexity and the capabilities it offers.
-Usually, orchestration services combine two or more primitive or higher-order operations from multiple single-entity services to execute a successful operation.
+Orchestration services usually combine two or more primitive or higher-order operations from multiple single-entity services to execute a successful operation.
#### 2.3.2.0.0 Functions Language
-Orchestration services have a common characteristic regarding the language of their functions. Orchestration services are wholistic in most of the language of its function. You will see functions such as `NotifyAllAdmins` where the service pulls all users with an admin type and then calls a notification service to notify them.
+Orchestration services have a common characteristic regarding the language of their functions. Orchestration services are holistic in most of the language of their functions. You will see functions such as `NotifyAllAdmins`, where the service pulls all users with an admin type and then calls a notification service to notify them.
-Orchestration services offer functionality that inches closer and closer to a business language than primitive technical operation. You may see almost an identical expression in a non-technical business requirement matching a function name in an orchestration service. The same pattern continues as one goes to higher and more advanced categories of services within that realm of business logic.
+Orchestration services offer functionality that inches closer to a business language than primitive technical operations. You may see almost an identical expression in a non-technical business requirement matching a function name in an orchestration service. The same pattern continues as one goes to higher and more advanced categories of services within a specific realm of business logic.
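+To keep the function language concrete, here's a hedged sketch of what a `NotifyAllAdmins`-style orchestration function might look like; the processing service dependencies, the `UserType` enum, and the validation helper are assumed for illustration:
+```csharp
+public ValueTask NotifyAllAdminsAsync(Notification notification) =>
+TryCatch(async () =>
+{
+    ValidateNotification(notification);
+
+    // Pull all users with an admin type from one processing service ...
+    IQueryable<User> allUsers =
+        this.userProcessingService.RetrieveAllUsers();
+
+    IEnumerable<User> admins =
+        allUsers.Where(user => user.Type == UserType.Admin);
+
+    // ... then hand each admin off to the notification processing service.
+    foreach (User admin in admins)
+    {
+        await this.notificationProcessingService.SendNotificationAsync(
+            admin, notification);
+    }
+});
+```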
#### 2.3.2.0.1 Pass-Through
-Orchestration services can also be a pass-through for some operations. For instance, an orchestration service could allow an `AddStudentAsync` to be propagated through the service to unify the source of interactions with the system at the exposers level. In this case, orchestration services will use the same terminology a processing or foundation service may use to propagate the operation.
+Orchestration services can also be a pass-through for some operations. For instance, an orchestration service could allow an `AddStudentAsync` to be propagated through the service to unify the source of interactions with the system at the exposer's level. In this case, orchestration services will use the same terminology a processing or foundation service may use to propagate the operation.
#### 2.3.2.0.2 Class-Level Language
-Orchestration services mainly combine multiple operations supporting a particular entity. So, if the primary entity is `Student` and the rest of the entities are just to support an operation mainly targetting a `Student` entity - then the name of the orchestration service would be `StudentOrchestrationService`.
+Orchestration services mainly combine multiple operations supporting a particular entity. So, if the primary entity is `Student` and the rest of the entities are just to support an operation mainly targeting a `Student` entity, then the name of the orchestration service would be `StudentOrchestrationService`.
Enforcement of naming conventions ensures that any orchestration service stays focused on a single entity's responsibility concerning multiple other supporting entities.
-For instance, creating a library card requires the school enrollment of the student referenced in that library card. In this case, the Orchestration service name will reflect its primary entity, `LibraryCard`. Our orchestration service name then would be `LibraryCardOrchestrationService`.
+For instance, creating a library card requires the school enrollment of the student referenced in that library card. In this case, the Orchestration service name will reflect its primary entity, `LibraryCard`. Our orchestration service name would then be `LibraryCardOrchestrationService`.
The opposite is also true. If enrolling a student in a school has associated operations such as creating a library card, then, in this case, a `StudentOrchestrationService` must exist to create a `Student` and all other related entities.
@@ -75,7 +75,7 @@ The same idea applies to all exceptions created in an orchestration service, suc
### 2.3.2.1 Dependencies
-As we mentioned above, Orchestration services might have a more extensive range of dependencies, unlike Processing and Foundation services, because of the optionality of Processing services. Therefore, Orchestration services may have dependencies ranging from foundation services or optional processing services to cross-cutting services such as logging or other utility brokers.
+As we mentioned above, Orchestration services might have a more extensive range of dependencies, unlike Processing and Foundation services, because Processing services are optional. Therefore, Orchestration services may have dependencies ranging from foundation services or optional processing services to cross-cutting services such as logging or other utility brokers.
#### 2.3.2.1.0 Dependency Balance (Florance Pattern)
@@ -83,7 +83,7 @@ A fundamental rule governing the consistency and balance of orchestration servic
That means an Orchestration service cannot combine Foundation and Processing services. The dependencies have to be either all Processings or all Foundation services. That rule doesn't apply to utility broker dependencies, however.
-Here's an example of an unbalanced orchestration service dependencies:
+Here's an example of unbalanced orchestration service dependencies:
@@ -91,7 +91,7 @@ Here's an example of an unbalanced orchestration service dependencies:
-An additional processing service is required to give a pass-through to a lower-level foundation service to balance the architecture - applying 'Florance Pattern' for symmetry would turn our architecture to the following:
+An additional processing service is required to give a pass-through to a lower-level foundation service to balance the architecture - applying 'Florance Pattern' for symmetry would turn our architecture into the following:
@@ -99,13 +99,13 @@ An additional processing service is required to give a pass-through to a lower-l
-Applying 'Florance Pattern' might be very costly at the beginning as it includes creating an entirely new processing service (or multiple) to balance the architecture. But its benefits outweighs the cost from a maintainability, readability, and pluggability perspectives.
+Applying the 'Florance Pattern' might be very costly initially, including creating an entirely new processing service (or multiple) to balance the architecture. However, its benefits outweigh the cost from maintainability, readability, and pluggability perspectives.
#### 2.3.2.1.1 Two-Three
-The 'Two-Three' rule is a complexity control rule. This rule dictates that an Orchestration service may not have more than three or less than two Processing or Foundation services to run the orchestration. This rule, however, doesn't apply to utility brokers. Orchestration services may have a `DateTimeBroker` or a `LoggingBroker` without any restriction. But an Orchestration service may not have an entity broker, such as a `StorageBroker` or a `QueueBroker`, which feeds directly into the core business layer of any service.
+The 'Two-Three' rule is a complexity control rule. This rule dictates that an Orchestration service may not have more than three or fewer than two Processing or Foundation services to run the orchestration. This rule, however, doesn't apply to utility brokers. Orchestration services may have a `DateTimeBroker` or a `LoggingBroker` without restriction. However, an orchestration service may not have an entity broker, such as a `StorageBroker` or a `QueueBroker`, which feeds directly into the core business layer of any service.
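+Here's a hedged constructor sketch of a dependency list that honors the 'Two-Three' rule (the interfaces are assumed for illustration): two or three Processing (or Foundation) dependencies, utility brokers allowed, and no entity brokers:
+```csharp
+public class LibraryCardOrchestrationService : ILibraryCardOrchestrationService
+{
+    // Two Processing dependencies: within the allowed two-to-three range.
+    private readonly IStudentProcessingService studentProcessingService;
+    private readonly ILibraryCardProcessingService libraryCardProcessingService;
+
+    // Utility brokers are unrestricted; entity brokers (e.g. a storage or
+    // queue broker) would not be allowed here.
+    private readonly ILoggingBroker loggingBroker;
+    private readonly IDateTimeBroker dateTimeBroker;
+
+    public LibraryCardOrchestrationService(
+        IStudentProcessingService studentProcessingService,
+        ILibraryCardProcessingService libraryCardProcessingService,
+        ILoggingBroker loggingBroker,
+        IDateTimeBroker dateTimeBroker)
+    {
+        this.studentProcessingService = studentProcessingService;
+        this.libraryCardProcessingService = libraryCardProcessingService;
+        this.loggingBroker = loggingBroker;
+        this.dateTimeBroker = dateTimeBroker;
+    }
+}
+```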
-This rule, such as most of the patterns and concepts in The Standard is inspired by nature. You can see how the trees branche into two's and three's - the same thing for thunder, blood vessels and so many other creations around, within, and above us follow the same pattern.
+This rule, like most of the patterns and concepts in The Standard, is inspired by nature. You can see how the trees branch into twos and threes - the same thing for thunder, blood vessels, and so many other creations around, within, and above us follow the same pattern.
@@ -113,15 +113,15 @@ This rule, such as most of the patterns and concepts in The Standard is inspired
-A tree branches as it grows upwards but also in it's very roots. And so is the case with Orchestration and Orchestration-Like services. They can branch further upwards as I will explain here shortly, but also downwards through patterns like the Cul-De-Sac pattern.
+A tree branches as it grows upwards but also in its very roots. And so is the case with Orchestration and Orchestration-Like services. They can branch further upwards, as I will explain here shortly, but also downwards through patterns like the Cul-De-Sac pattern.
The 'Two-Three' rule may require a layer of normalization to the categorical business function. Let's talk about the different mechanisms of normalizing orchestration services.
##### 2.3.2.1.1.0 Full-Normalization
-There are frequently situations where the current architecture of any given orchestration service ends up with one orchestration service with three dependencies. And a new entity processing or foundation service is required to complete an existing process.
+There are frequent situations where the current architecture of a given orchestration service ends up with three dependencies, and a new entity processing or foundation service is required to complete an existing process.
-For instance, let's say we have a `StudentContactOrchestrationService` and that service has dependencies that provide primitive-level functionality for `Address`, `Email`, and `Phone` for each student. Here's a visualization of that state:
+For instance, let's say we have a `StudentContactOrchestrationService`, which has dependencies that provide primitive-level functionality for each student's `Address`, `Email`, and `Phone`. Here's a visualization of that state:
@@ -129,7 +129,7 @@ For instance, let's say we have a `StudentContactOrchestrationService` and that
-Now, a new requirement, 'SocialMedia', is added to 'Student', to gather more contact information about how to reach a student. We can go into full-normalization mode simply by finding common ground that equally splits the contact information entities. For instance, we can break out regular contact information versus digital contact information as in `Address` and `Phone` versus `Email` and `SocialMedia`. This way, we split four dependencies into two, each for their orchestration services as follows:
+Now, a new requirement, 'SocialMedia', is added to 'Student', to gather more contact information about how to reach a student. We can go into full-normalization mode by finding common ground that equally splits the contact information entities. For instance, we can break out regular contact information versus digital contact information as in `Address` and `Phone` versus `Email` and `SocialMedia`. This way, we split four dependencies into two, each for their orchestration services as follows:
@@ -139,13 +139,13 @@ Now, a new requirement, 'SocialMedia', is added to 'Student', to gather more con
In the figure above, we modified the existing `StudentContactOrchestrationService` into `StudentRegularContactOrchestrationService` and removed one of its dependencies on the `EmailService`.
-Additionally, we created a new `StudentDigitalContactOrchestrationService` to have two dependencies on the existing `EmailService` and the new `SocialMediaService`. Consequently, we now need an advanced business logic layer, like a coordination service to provide student contact information to upstream consumers.
+Additionally, we created a new `StudentDigitalContactOrchestrationService` to have two dependencies on the existing `EmailService` and the new `SocialMediaService`. Consequently, we now need an advanced business logic layer, like a coordination service, to provide student contact information to upstream consumers.
##### 2.3.2.1.1.1 Semi-Normalization
-Normalization isn't always as straightforward as the example above, especially when a core entity has to exist before creating or filling in additional information about related entities.
+Normalization isn't always as straightforward as the example above, especially when a core entity has to exist before creating or filling in additional information about related entities.
-For instance, let's say we have a `StudentRegistrationOrchestrationService` which relies on `StudentProcessingService`, `LibraryCardProcessingService` and `BookProcessingService` as follows:
+For instance, let's say we have a `StudentRegistrationOrchestrationService` which relies on `StudentProcessingService`, `LibraryCardProcessingService`, and `BookProcessingService` as follows:
@@ -153,7 +153,7 @@ For instance, let's say we have a `StudentRegistrationOrchestrationService` whic
-But now, we need a new service to handle students' immunization records as `ImmunizationProcessingService`. We need all four services but we already have a `StudentRegistrationOrchestrationService` that has three dependencies. At this point, a semi-normalization is required for the re-balancing of the architecture to honor the 'Two-Three' rule and eventually to control the complexity.
+But now, we need a new service called `ImmunizationProcessingService` to handle students' immunization records. We need all four services, but we already have a `StudentRegistrationOrchestrationService` that has three dependencies. At this point, a semi-normalization is required to re-balance the architecture to honor the 'Two-Three' rule and eventually control the complexity.
@@ -169,21 +169,21 @@ In this case, a further normalization or a split is required to re-balance the a
-To complete the registration flow with a new model, a coordination service must pass in advanced business logic to combine these entities. But more importantly, you will notice that each orchestration service has a redundant dependency of `StudentProcessingService` to ensure no virtual dependency on any other orchestration service, creating/ensuring a student record exists.
+To complete the registration flow with a new model, a coordination service must pass in advanced business logic to combine these entities. But more importantly, you will notice that each orchestration service has a redundant dependency on `StudentProcessingService` to ensure there is no virtual dependency on any other orchestration service to create or ensure that a student record exists.
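+As a hedged, constructor-only sketch of that redundancy (the service names here are assumed for illustration), each orchestration service explicitly takes its own `IStudentProcessingService` dependency rather than silently assuming another orchestration service has already created the student:
+```csharp
+public class StudentLibraryCardOrchestrationService
+{
+    private readonly IStudentProcessingService studentProcessingService;
+    private readonly ILibraryCardProcessingService libraryCardProcessingService;
+
+    public StudentLibraryCardOrchestrationService(
+        IStudentProcessingService studentProcessingService,
+        ILibraryCardProcessingService libraryCardProcessingService)
+    {
+        // Explicit dependency: no virtual reliance on another orchestration
+        // service having already ensured the student record exists.
+        this.studentProcessingService = studentProcessingService;
+        this.libraryCardProcessingService = libraryCardProcessingService;
+    }
+}
+
+public class StudentImmunizationOrchestrationService
+{
+    private readonly IStudentProcessingService studentProcessingService;
+    private readonly IImmunizationProcessingService immunizationProcessingService;
+
+    public StudentImmunizationOrchestrationService(
+        IStudentProcessingService studentProcessingService,
+        IImmunizationProcessingService immunizationProcessingService)
+    {
+        // The same redundant dependency appears here for the same reason.
+        this.studentProcessingService = studentProcessingService;
+        this.immunizationProcessingService = immunizationProcessingService;
+    }
+}
+```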
Virtual dependencies are very tricky. It's a hidden connection between two services of any category where one service implicitly assumes that a particular entity will be created and present. Virtual dependencies are very dangerous and threaten the proper autonomy of any service. Detecting virtual dependencies early in the design and development process could be a daunting but necessary task to ensure a clean, Standardized architecture is in place.
-Just like model changes require database structure migrations and additional logic and validations, a new requirement for a new entity might require restructuring an existing architecture or extending it to a new version, depending on which stage the system is receiving these new requirements.
+Just like model changes require database structure migrations and additional logic and validations, a new requirement for a new entity might require restructuring an existing architecture or extending it to a new version, depending on which stage the system receives these new requirements.
-It may be very enticing to add another dependency to an existing orchestration service - but that's where the system starts to diverge from 'The Standard'. And that's when the system begins to become an unmaintainable legacy system. But more importantly, this scenario tests the design principles and standards of craftsmanship of the engineers involved in designing and developing the system.
+Adding another dependency to an existing orchestration service may be very enticing - but that's where the system starts to diverge from 'The Standard'. And that's when the system begins to become an unmaintainable legacy system. But more importantly, this scenario tests the design principles and standards of craftsmanship of the engineers involved in designing and developing the system.
##### 2.3.2.1.1.2 No-Normalization
-I believe that everything, everywhere, somehow is connected. Yet, there are scenarios where higher levels of normalization are challenging to achieve. Sometimes it might be incomprehensible for the mind to group multiple services under one orchestration service.
+Everything, everywhere, is somehow connected. Yet, there are scenarios where higher levels of normalization are challenging to achieve. Sometimes, it might be incomprehensible for the mind to group multiple services under one orchestration service.
-Because it's pretty hard for my mind to come up with an example for multiple entities that have no connection to each other, as I genuinely believe it couldn't exist. I'm going to rely on some fictional entities to visualize a problem. So let's assume there are `AService` and `BService` orchestrated together with an `XService`. The existence of `XService` is important to ensure that both `A` and `B` can be created with an assurance that a core entity `X` does exist.
+Because it's hard for me to come up with an example of multiple entities that have no connection to each other (as I genuinely believe such a case couldn't exist), I'm going to rely on some fictional entities to visualize the problem. So, let's assume there are `AService` and `BService` orchestrated together with an `XService`. The existence of `XService` is important to ensure that both `A` and `B` can be created with an assurance that a core entity `X` does exist.
-Now, let's say a new service `CService` is required to be added to the mix to complete the existing flow. So, now we have four different dependencies under one orchestration service, and a split is mandatory. Since there's no relationship whatsoever between `A`, `B`, and `C`, a 'No-Normalization' approach becomes the only option to realize a new design as follows:
+Now, let's say a new service, `CService`, must be added to the mix to complete the existing flow. So, now we have four different dependencies under one orchestration service, and a split is mandatory. Since there's no relationship whatsoever between `A`, `B`, and `C`, a 'No-Normalization' approach becomes the only option to realize a new design as follows:
@@ -191,12 +191,12 @@ Now, let's say a new service `CService` is required to be added to the mix to co
-Each of the above primitive services will be orchestrated with a core service `X` and then gathered under a coordination service. This case above is the worst-case scenario, where normalization of any size is impossible. Note that the author of this Standard couldn't come up with a realistic example unlike any others to show you how rare it is to run into that situation, so let a 'No-Normalization' approach be your very last solution if you run out of options.
+The above primitive services will be orchestrated with a core service, `X`, and then gathered under a coordination service. This case is the worst-case scenario, where normalization of any size is impossible. Note that, unlike the other examples, the author of this Standard couldn't come up with a realistic example here, which shows how rare it is to run into that situation; so let a 'No-Normalization' approach be your very last solution if you run out of options.
##### 2.3.2.1.1.3 Meaningful Breakdown
-Regardless of the type of normalization you follow, you must ensure that your grouped services represent a common meaning. For instance, putting together a `StudentProcessingService` and `LibraryProcessingService` must require a functional commonality. A good example of that would be `StudentRegistrationOrchestrationService`, for instance. The registration process requires adding a new student record and creating a library card for that very same student.
+Regardless of the type of normalization you follow, you must ensure that your grouped services represent a common meaning. For instance, putting together a `StudentProcessingService` and `LibraryProcessingService` must require a functional commonality. An excellent example of that would be `StudentRegistrationOrchestrationService`. The registration process requires adding a new student record and creating a library card for that student.
-Implementing orchestration services without intersection between two or three entities per operation defeats the whole purpose of having an orchestration service. This condition is satisfied if at least one intersection between two entities has occurred. An orchestration service may have other 'Pass-Through' operations where we propagate certain routines from their processing or foundation origins if they match the same contract.
+Implementing orchestration services without an intersection between two or three entities per operation defeats the whole purpose of having an orchestration service. This condition is satisfied if at least one intersection between two entities has occurred. An orchestration service may have other 'Pass-Through' operations where we propagate certain routines from their processing or foundation origins if they match the same contract.
Here's an example:
@@ -221,17 +221,17 @@ public class StudentOrchestrationService
}
```
-In the example above, our `StudentOrchestrationService` had an orchestration routine that combined adding a student and creating a library card for that student. But additionally, it also offers a 'Pass-Through' function for a low-level processing service routine to modify a student.
+In the example above, our `StudentOrchestrationService` had an orchestration routine that combined adding a student and creating a library card for that student. It also offers a 'Pass-Through' function for a low-level processing service routine to modify a student.
'Pass-Through' routines must have the same contract as the other routines in any orchestration service. Our 'Pure Contract' principle dictates that any service should allow the same contract as input and output or primitive types.
### 2.3.2.2 Contracts
-Orchestration services may combine two or three different entities and their operations to achieve an higher business logic. There are two scenarios for contract/models for orchestration services: One that stays true to the primary entity's purpose; and one that is complex - a combinator orchestration service that tries to expose its inner target entities explicitly.
+Orchestration services may combine two or three different entities and their operations to achieve a higher business logic. There are two scenarios for contract/models for orchestration services: One that stays true to the primary entity's purpose and one that is complex - a combinator orchestration service that tries to expose its inner target entities explicitly.
Let's talk about these two scenarios in detail.
#### 2.3.2.2.0 Physical Contracts
-Some orchestration services are still single-purposed even though they may combine two or three other higher-order routines from multiple entities. For instance, an orchestration service that reacts to messages from some queue then persists these messages are single-purposed and single-entity orchestration services.
+Some orchestration services are still single-purposed, even though they may combine two or three other higher-order routines from multiple entities. For instance, an orchestration service that reacts to messages from some queue and then persists these messages is a single-purposed and single-entity orchestration service.
Let's take a look at this code snippet:
@@ -266,12 +266,12 @@ public class StudentOrchestrationService
In the above example, the orchestration service still exposes functionality that honors the physical model `Student` and internally communicates with several services that may provide completely different models. These are the scenarios where a single entity has a primary purpose, and all other services are supporting services to ensure a successful flow for that entity.
-In our example, the orchestration services _listen_ to a queue for new student messages, then use that event to persist any incoming new students in the system. So the physical contract `Student` is the same language the orchestration service explicitly uses as a model to communicate with upper stream services/exposers or others.
+In our example, the orchestration services listen to a queue for new student messages and use that event to persist any incoming new students in the system. So, the physical contract `Student` is the same language the orchestration service explicitly uses as a model to communicate with upper stream services/exposers or others.
-But there are other scenarios where a single entity is not the only purpose/target for an orchestration service. Let's talk about that in detail.
+However, there are other scenarios in which a single entity is not the only purpose/target for an orchestration service. Let's discuss that in detail.
#### 2.3.2.2.1 Virtual Contracts
-In some scenarios, an orchestration service may be required to create non-physical contracts to complete a particular operation. For instance, consider an orchestration service required to persist a social media post containing a picture. The requirement here is to persist the picture in one database and the actual post (comments, authors, and others) into a different database table in a relational model.
+In some scenarios, an orchestration service may be required to create non-physical contracts to complete a particular operation. For instance, consider an orchestration service required to persist a social media post containing a picture. The requirement is to persist the picture in one database and the actual post (comments, authors, and others) in a different database table in a relational model.
The incoming model might be significantly different from the actual physical models. Let's see what that would look like in the real world.
@@ -287,7 +287,7 @@ public class MediaPost
}
```
-The above contract `MediaPost` contains two different physical entities combined. The first is the actual post, including the `Id`, `Content`, and `Date`, and the second is the list of images attached to that very post.
+The above contract, `MediaPost,` contains two separate physical entities. The first is the actual post, including the `Id`, `Content`, and `Date`, and the second is the list of images attached to that post.
Here's how an orchestration service would react to this incoming virtual model:
@@ -333,16 +333,16 @@ public List MapToMedias(MediaPost mediaPost)
}
```
-The above code snippet shows the orchestration service deconstructing a given virtual model/contract `MediaPost` into two physical models. Each one has its separate processing service that handles its persistence. There are scenarios where the virtual model gets deconstructed into one single model with additional details used for validation and verification with downstream processing or foundation services.
+The above code snippet shows the orchestration service deconstructing a given virtual model/contract, `MediaPost`, into two physical models. Each one has its separate processing service that handles its persistence. There are scenarios where the virtual model gets deconstructed into one single model with additional details used for validation and verification with downstream processing or foundation services.
-There are also hybrid situations where the incoming virtual model may have nested physical models, which is something we can only allow with virtual models. Physical models shall stay anemic (contains no routines or constructors) and flat (contains no nested models) at all times to control complexity and focus responsibility.
+In hybrid situations, the incoming virtual model may have nested physical models, which we can only allow with virtual models. Physical models shall always stay anemic (contains no routines or constructors) and flat (contains no nested models) to control complexity and focus responsibility.
-In summary, Orchestration services may create their own contracts. These contracts may be physical or virtual. And a virtual contract may be a combination of one or many physical (or nested virtual) contracts or simply has it's own flat design in terms of properties.
+In summary, Orchestration services may create their contracts, which may be physical or virtual. A virtual contract may be a combination of one or many physical (or nested virtual) contracts or simply have its own flat design in terms of properties.
### 2.3.2.2 Cul-De-Sac
-Sometimes, Orchestration services and their equivalent (coordination, management, etc.) may not need an exposer component (controller, for instance). That's because these services may be listeners to specific events and communicate the event back into a Processing or a Foundation service at the same level where the event started or was received.
+Sometimes, Orchestration services and their equivalent (coordination, management, etc.) may not need an exposer component (controller, for instance). That's because these services may listen to specific events and communicate them back into a Processing or a Foundation service at the same level where the event started or was received.
-For example, incoming messages can be received from a subscription to an event service or a queue. In this case, the input for these services isn't necessarily through an exposer component anymore. Imagine building a simple application where it gets notified with messages from a queue and then maps these messages into some local model to persist it in storage. In this case, the orchestration service would look something like the following:
+For example, incoming messages can be received from a subscription to an event service or a queue. In this case, the input for these services isn't necessarily through an exposer component anymore. Imagine building a simple application that gets notified with messages from a queue and then maps these messages into some local model to persist it in storage. In this case, the orchestration service would look something like the following:
@@ -350,7 +350,7 @@ For example, incoming messages can be received from a subscription to an event s
- The `StudentEventOrchestrationService` listens to messages for new students coming in and immediately converts that into models that can be persisted in the database.
+ The `StudentEventOrchestrationService` listens to messages for new students and immediately converts them into models that can be persisted in the database.
Here's an example:
@@ -401,9 +401,9 @@ public async Task ShouldAddProfileAsync()
}
```
- The test here indicates an event listening has to occur first, then persistence logic in the student service must match the outcome of mapping an incoming message to a given student.
+ The test here indicates that an event listening has to occur first, and then persistence logic in the student service must match the outcome of mapping an incoming message to a given student.
- Let's try to make this test pass.
+ Let's make this test pass.
```csharp
public partial class ProfileEventOrchestrationService : IProfileEventOrchestrationService
@@ -439,7 +439,7 @@ public partial class ProfileEventOrchestrationService : IProfileEventOrchestrati
}
```
-In the above example, the constructor of the Orchestration service subscribes to the events that would come from the `ProfileEventProcessingService`. When an event occurs, the orchestration service will call the `ProcessProfileEventAsync` function to persist the incoming student into the database through a foundation or a processing service at the same level as the event service.
+In the above example, the Orchestration service constructor subscribes to the events that would come from the `ProfileEventProcessingService`. When an event occurs, the orchestration service calls the `ProcessProfileEventAsync` function to persist the incoming student into the database through a foundation or a processing service at the same level as the event service.
This pattern or characteristic is called the Cul-De-Sac. An incoming message will turn and head in a different direction for a different dependency. This pattern is typical in large enterprise-level applications where eventual consistency is incorporated to ensure the system can scale and become resilient under heavy consumption. This pattern also prevents malicious attacks against your API endpoints since it allows processing queue messages or events whenever the service is ready to process them. We will discuss the details in 'The Standard Architecture.
@@ -447,18 +447,18 @@ This pattern or characteristic is called the Cul-De-Sac. An incoming message wil
Orchestration services provide advanced business logic. It orchestrates multiple flows for multiple entities/models to complete a single flow. Let's discuss in detail what these responsibilities are:
### 2.3.3.0 Advanced Logic
-Orchestration services cannot exist without combining multiple routines from multiple entities. These entities may differ in nature but share a common flow or purpose. For instance, a `LibraryCard` as a model is fundamentally different from a `Student` model. However, they both share a common purpose regarding the student registration process. Adding a student record is required to register a student, but assigning a library card to that student is required for a successful student registration process.
+Orchestration services can only exist by combining multiple routines from multiple entities. These entities may differ in nature but share a standard flow or purpose. For instance, a `LibraryCard` model fundamentally differs from a `Student` model. However, they both share a common purpose regarding the student registration process. Adding a student record is required to register a student, but assigning a library card to that student is required for a successful student registration process.
-Orchestration services ensure the correct routines for each entity are integrated, but also ensure these routines are called in the correct order. Additionally, orchestration services are responsible for rolling back a failing operation. These three aspects constitute an orchestration effort across multiple routines, entities, or contracts.
+Orchestration services ensure the correct routines for each entity are integrated and called in the proper order. Additionally, orchestration services are responsible for rolling back a failing operation. These three aspects constitute an orchestration effort across multiple routines, entities, or contracts.
Let's talk about those in detail.
#### 2.3.3.0.0 Flow Combinations
-We spoke earlier about orchestration services combining multiple routines to achieve a common purpose or a single flow. This aspect of orchestration services can serve as both a fundamental characteristic and a responsibility. An orchestration service without at least one routine combining two or three entities is not considered an orchestration. Integrating with multiple services without a common purpose is a better-fit definition for Aggregation services which we will discuss later in this services chapter.
+We spoke earlier about orchestration services combining multiple routines to achieve a common purpose or a single flow. This aspect of orchestration services can serve as both a fundamental characteristic and a responsibility. An orchestration service without at least one routine combining two or three entities is not considered an orchestration. Integrating multiple services without a common purpose is a better-fit definition for aggregation services, which we will discuss later in this services chapter.
-But within the flow combination comes the unification of contract. I call it mapping and branching. Mapping an incoming model into multiple lower-stream services models then branching the responsibility across these services.
+However, within the flow combination comes the unification of the contract. I call it mapping and branching. Mapping an incoming model into multiple lower-stream service models and then branching the responsibility across these services.
-Just like the previous services, during their flow combination, Orchestration services are responsible for ensuring the purity of the exposed input and output contracts, which becomes a bit more complex when combining multiple models. Orchestration services will continue to be responsible for mapping incoming contracts to their respective downstream services. They will also map back the returned results from these services into the unified model.
+Just like the previous services, during their flow combination, Orchestration services are responsible for ensuring the purity of the exposed input and output contracts, which becomes a bit more complex when combining multiple models. Orchestration services will continue to be responsible for mapping incoming contracts to their respective downstream services. They will also map back the results from these services into the unified model.
Let's bring back a previous code snippet to illustrate that aspect:
@@ -516,13 +516,13 @@ private MediaPost MapToMediaPost(Post post, List medias)
```
As you can see in the above example, the mapping and branching don't just happen on the way in. But a reverse action has to be taken on the way out. It violates The Standard to return the same input object that was passed in. That takes away any visibility on potential changes to the incoming request during persistence. The duplex mapping should substitute the need to dereference the incoming request to ensure no unexpected internal changes have occurred.
-Note that breaking out the mapping logic into its own aspect/partial class file is also recommended. Something like `StudentOrchestrationService.Mappings.cs` to ensure the only thing left is orchestration's business logic.
+Note that breaking out the mapping logic into its own aspect/partial class file is also recommended�something like `StudentOrchestrationService.Mappings.cs`�to ensure the only thing left is orchestration's business logic.
#### 2.3.3.0.1 Call Order
-Calling routines in the correct order can be crucial to any orchestration process. For instance, a library card cannot be created unless a student record is created first. Enforcing the order here can split into two different types. Let's talk about those here for a bit.
+Calling routines in the correct order can be crucial to any orchestration process. For instance, a library card cannot be created unless a student record is created first. Enforcing the order here can be divided into two different types. Let's discuss those here for a bit.
##### 2.3.3.0.1.0 Natural Order
-The natural order here refers to specific flows that cannot be executed unless a prerequisite of input parameters is retrieved or persisted. For instance, imagine a situation where a library card cannot be created unless a student's unique identifier is retrieved first. In this case, we don't have to worry about testing that certain routines were called in the right order because it comes naturally with the flow.
+The natural order here refers to specific flows that cannot be executed unless a prerequisite of input parameters is retrieved or persisted. For instance, imagine a situation where a library card cannot be created unless a student's unique identifier is retrieved first. In this case, we don't have to worry about testing that certain routines were called in the correct order because it comes naturally with the flow.
Here's a code example of this situation:
@@ -541,7 +541,7 @@ In the example above, having a student `Name` is a requirement to create a libra
Let's talk about the second type of order - Enforced Order.
##### 2.3.3.0.1.1 Enforced Order
-Imagine the very same example above, but instead of the library card requiring a student name, it just needs the student `Id` already enclosed in the incoming request model. Something like this:
+Imagine the same example above, but instead of the library card requiring a student name, it just needs the student `Id` already enclosed in the incoming request model. Something like this:
```csharp
public async ValueTask CreateLibraryCardAsync(LibraryCard libraryCard)
@@ -553,9 +553,9 @@ public async ValueTask CreateLibraryCardAsync(LibraryCard libraryCa
}
```
-Ensuring a verified enrolled student exists before creating a library card might become a challenge because there's no dependency between the return value of one routine and the input parameters of the next. In other words, there is nothing that the `VerifyEnlistedStudentExistAsync` function returns that the `CreateLibraryCardAsync` function cares about in terms of input parameters.
+Ensuring a verified enrolled student exists before creating a library card might become a challenge because there is no dependency between the return value of one routine and the input parameters of the next. In other words, the `VerifyEnlistedStudentExistAsync` function returns nothing that the `CreateLibraryCardAsync` function cares about in terms of input parameters.
-In this case here an enforced type of order must be implemented through unit tests. A unit test for this routine would require verifying not just that the dependency has been called with the correct parameters, but also that they are called in the correct _order_ let's take a look at how that would be implemented:
+In this case, an enforced type of order must be implemented through unit tests. A unit test for this routine would require verifying not just that the dependency has been called with the correct parameters but also that they are called in the correct _order_ let's take a look at how that would be implemented:
```csharp
[Fact]
@@ -600,12 +600,12 @@ public async Task ShouldCreateLibraryCardAsync()
```
-From the example above, the mock framework here is being used to ensure a certain order is enforced when calling these dependencies. This way we enforce a certain implementation within any given method to ensure that non-naturally connected dependencies are sequentially called in the intended order.
+In the example above, the mock framework is being used to ensure a certain order is enforced when calling these dependencies. This way, we enforce a certain implementation within any given method to ensure that non-naturally connected dependencies are sequentially called in the intended order.
It's more likely that the type of ordering leans more towards enforced than natural when orchestration services reach the maximum number of dependencies.
#### 2.3.3.0.2 Exceptions Mapping (Wrapping & Unwrapping)
-This responsibility is very similar to flow combinations. Except that in this case, orchestration services unify all the exceptions that may occur out of any of its dependencies into one unified categorical exception model. Let's start with an illustration of what that mapping may look like:
+This responsibility is very similar to flow combinations. Except in this case, orchestration services unify all the exceptions that may occur from any dependencies into one unified categorical exception model. Let's start with an illustration of what that mapping may look like:
@@ -613,7 +613,7 @@ This responsibility is very similar to flow combinations. Except that in this ca
-In the illustration above, you will notice that validation and dependency validation exceptions, thrown from downstream dependency services, map into one unified dependency exception at the orchestration level. This practice allows upstream consumers of that same orchestration service to determine the next course of action based on one categorical exception type instead of four or in the case of three dependencies it would be six categorical dependencies.
+In the illustration above, you will notice that validation and dependency validation exceptions, thrown from downstream dependency services, map into one unified dependency exception at the orchestration level. This practice allows upstream consumers of that same orchestration service to determine the next course of action based on one categorical exception type instead of four, or in the case of three dependencies, it would be six categorical dependencies.
Let's start with a failing test to materialize our idea here:
@@ -702,9 +702,9 @@ public async Task ShouldThrowDependencyValidationExceptionOnCreateIfDependencyVa
}
```
-Above, we verify that any of our four exception types are mapped into a `StudentOrchestrationDependencyValidationException`. We maintain the original localized exception as an inner exception. But we unwrap the categorical exception at this level to maintain the original issue as we go upstream.
+Above, we verify that any of our four exception types are mapped into a `StudentOrchestrationDependencyValidationException`. We maintain the original localized exception as an inner exception. But we unwrap the categorical exception at this level to keep the original issue as we go upstream.
-These exceptions are mapped under a dependency validation exception because they originate from a dependency or a dependency of a dependency downstream. For instance, if a storage broker throws an exception that is a dependency validation (something like `DuplicateKeyException`). The broker-neighboring service would map that into a localized `StudentAlreadyExistException` and then wrap that exception in a categorical exception of type `StudentDependencyValidationException`. When that exception propagates upstream to Processing or an Orchestration service, we lose the categorical exception as we have already captured it under the right scope of mapping. Then we continue to embed that very localized exception under the current service dependency validation exception.
+These exceptions are mapped under a dependency validation exception because they originate from a dependency or a dependency of a dependency downstream. For instance, if a storage broker throws an exception, it is a dependency validation (something like `DuplicateKeyException`). The broker-neighboring service would map that into a localized `StudentAlreadyExistException` and then wrap that exception in a categorical exception of type `StudentDependencyValidationException`. When that exception propagates upstream to Processing or an Orchestration service, we lose the categorical exception as we have already captured it under the proper scope of mapping. Then, we continue to embed that very localized exception under the current service dependency validation exception.
Let's try to make this test pass:
@@ -763,16 +763,16 @@ TryCatch(async () =>
});
```
-You can see in the implementation that we mapped all four different types of external downstream services validation exceptions into one categorical exception and then maintained the inner exception for each one.
+In the implementation, you can see that we mapped all four different types of external downstream services validation exceptions into one categorical exception and then maintained the inner exception for each one.
The same rule applies to dependency exceptions. Dependency exceptions can be both Service and Dependency exceptions from downstream services. For instance, in the above example, calling a student service may produce `StudentDependencyException` and `StudentServiceException`. These categorical exceptions will be unwrapped from their categorical layer and have their local layer wrapped in one unified new orchestration-level categorical exception under `StudentOrchestrationDependencyException`. The same applies to all other dependency categorical exceptions like `LibraryCardDependencyException` and `LibraryCardServiceException`.
It's crucial to unwrap and wrap localized exceptions from downstream services with categorical exceptions at the current service layer to ensure consistency with the Exposers layer. These exceptions can be easily handled and mapped into whatever the nature of the exposer component dictates. In the case of an Exposer component of type API Controller, the mapping would produce HTTP Status Codes. In the case of UI Exposer components, it would map to text meaningful to end users.
-We will discuss further upstream in this Standard when to expose localized inner exceptions details where end-users are not required to take any action, which is exclusive to dependency and service level exceptions.
+We will discuss further upstream in this Standard when to expose localized inner exceptions details where end-users are not required to take any action exclusive to dependency and service level exceptions.
## 2.3.4 Variations
-Orchestration services vary depending on where they stand in the overall low-level architecture. For instance, an Orchestration service depending on downstream orchestration services is called a Coordination Service. An Orchestration service working with multiple Coordination services as dependencies are called a Management Service. These variants are an orchestration service with uber-level business logic.
+Orchestration services vary depending on their position in the overall low-level architecture. For instance, an orchestration service that relies on downstream orchestration services is called a Coordination service. An Orchestration service working with multiple Coordination services as dependencies is called a Management Service. These variants are Orchestration services with uber-level business logic.
### 2.3.4.0 Variants Levels
Let's take a look at the possible variants for orchestration services and where they would be positioned:
@@ -783,7 +783,7 @@ Let's take a look at the possible variants for orchestration services and where
- In my personal experience, I've rarely had to resolve to an Uber Management service. The idea of the limitation here in terms of dependencies and variations of orchestration-like services is to help engineers re-think the complexity of their logic. But admittedly, there are situations where complexity is an absolute necessity. Therefore Uber-Management services exist as an option.
+ In my personal experience, I've rarely had to resolve to an Uber Management service. The limitation here in terms of dependencies and variations of orchestration-like services is to help engineers rethink the complexity of their logic. But admittedly, there are situations where complexity is an absolute necessity. Therefore, Uber Management services exist as an option.
The following table should guide the process of developing variants of orchestration services based on the level:
@@ -794,10 +794,10 @@ Let's take a look at the possible variants for orchestration services and where
| Management Services | Coordination Services | Uber Management Services | High |
| Uber Management Services | Management Services | Aggregation, Views or Exposer Components | Very High |
- Working beyond Uber Management services in an orchestration manner would require a deeper discussion and a serious consideration of the overall architecture. Future versions of The Standard might be able to address this issue in what I call "The Lake House," but that is outside of the scope of this version of The Standard.
+ Working beyond Uber Management services in an orchestration manner would require a more profound discussion and a serious consideration of the overall architecture. Future versions of The Standard might be able to address this issue in what I call "The Lake House," but that is outside of the scope of this version of The Standard.
### 2.3.4.1 Unit of Work
- With the variations of orchestration services, I highly recommend staying true to the unit of work concept. Every request can do one thing and one thing only, including its prerequisites. For instance, if you need to register a student in a school, You may also require adding a guardian, contact information, and other details. Eventing these actions can significantly decrease the complexity of the flow and lower the risk of failures in downstream services.
+ With the variations of orchestration services, I recommend staying true to the concept of unit of work. Every request can do one thing and one thing only, including its prerequisites. For instance, if you need to register a student in a school, you may also need to add a guardian, contact information, and other details. Eventing these actions can significantly decrease the complexity of the flow and lower the risk of failures in downstream services.
Here's a visualization for a complex single-threaded approach:
@@ -816,16 +816,16 @@ Let's take a look at the possible variants for orchestration services and where
- Above, the incoming request is turned into events, where each of these events would notify its orchestration services in a cul-de-sac pattern as discussed in section 2.3.2.2. That means that a single thread is no longer responsible for the success of each dependency in the system. Instead, every event-listening broker would handle its process in a simplified way.
+ Above, the incoming request is turned into events, each of which would notify its orchestration services in a cul-de-sac pattern, as discussed in section 2.3.2.2. That means that a single thread is no longer responsible for the success of each dependency in the system. Instead, every event-listening broker would handle its process in a simplified way.
- This approach does not guarantee an immediate response of success or failure to the requestor. It's an eventual-consistency pattern where the client would get an `Accepted` message or its equivalent based on the communication protocol to let them know that a process has started. Still, there's no guarantee of results until all event logic has been executed.
+ This approach does not guarantee an immediate response of success or failure to the requestor. It's an eventual consistency pattern where the client would get an `Accepted` message or its equivalent based on the communication protocol to let them know that a process has started. Still, results are only guaranteed once all event logic has been executed.
Note that we can add an extra layer of resiliency to these events by temporarily storing them in Queue-like components or memory-based temporary storages; depending on the criticality of the business.
- But an eventual consistency approach isn't always a good solution if the client on the other side is waiting for a response, especially in critical situations where an immediate response is required. One solution to this problem is Fire-n-Observe queues, which we will discuss in the future version of The Standard.
+ However, an eventual consistency approach is only sometimes a good solution if the client on the other side is waiting for a response, especially in critical situations where an immediate response is required. One solution to this problem is Fire-n-Observe queues, which we will discuss in the future version of The Standard.
[*] [Introduction to Orchestration Services](https://www.youtube.com/watch?v=OP6HcIpXduE)
[*] [Cul-De-Sac Pattern for Orchestration Services](https://www.youtube.com/watch?v=C8Sm1kSKF1o)
-[*] [Cul-De-Sac Pattern for Coordination Services](https://www.youtube.com/watch?v=8KZIw_IJC0U)
+[*] [Cul-De-Sac Pattern for Coordination Services](https://www.youtube.com/watch?v=8KZIw_IJC0U)
\ No newline at end of file
From ef55f40582e06c5a6868493c9b652c1da63cc10e Mon Sep 17 00:00:00 2001
From: Greg Hays
Date: Wed, 17 Apr 2024 12:46:46 -0700
Subject: [PATCH 04/18] CODE RUB: Aggregations 2.4 Grammer Check
---
.../2.4 Aggregations/2.4 Aggregations.md | 38 +++++++++----------
1 file changed, 19 insertions(+), 19 deletions(-)
diff --git a/2. Services/2.4 Aggregations/2.4 Aggregations.md b/2. Services/2.4 Aggregations/2.4 Aggregations.md
index ffa3d22..5616a67 100644
--- a/2. Services/2.4 Aggregations/2.4 Aggregations.md
+++ b/2. Services/2.4 Aggregations/2.4 Aggregations.md
@@ -3,7 +3,7 @@
## 2.4.0 Introduction
An Aggregation service's primary responsibility is to expose one single point of contact between the core business logic layer and any exposure layers. It ensures that multiple services of any variation share the same contract to be aggregated and exposed to one exposer component through one logical layer.
-Aggregation services don't hold any business logic in themselves. They are simply a knot that ties together multiple services of any number. They can have any layer of services as dependencies and it mainly exposes the call to these services accordingly. Here's a code example of an aggregation service:
+Aggregation services do not hold any business logic in themselves. They are simply a knot that ties together multiple services of any number. They can have any layer of services as dependencies, and it mainly exposes the call to these services accordingly. Here is a code example of an aggregation service:
```csharp
public async ValueTask ProcessStudentAsync(Student student)
@@ -16,10 +16,10 @@ public async ValueTask ProcessStudentAsync(Student student)
}
```
-As the snippet shows above, an Aggregation service may have any number of calls in any order without limitation. And there may be occasions where you may or may not need to return a value to your exposure layers depending on the overall flow and architecture, which we will discuss shortly in this chapter. But more importantly, Aggregation services should not be mistaken for an Orchestration service or any of its variants.
+As the snippet shows above, an Aggregation service may have any number of calls in any order without limitation. There may be occasions where you may or may not need to return a value to your exposure layers depending on the overall flow and architecture, which we will discuss shortly in this chapter. More importantly, aggregation services should be distinct from orchestration services or variants.
## 2.4.1 On The Map
-Aggregation services always sit on the other end of a core business logic layer. They are the last point of contact between exposure layers and logic layers. Here's a visualization of where Aggregation services are located in an overall architecture:
+Aggregation services always sit on the other end of a core business logic layer. They are the last point of contact between the exposure layers and logic layers. Here is a diagram visualization of where an Aggregation service can be found in the architecture:
@@ -30,12 +30,12 @@ Aggregation services always sit on the other end of a core business logic layer.
Let's discuss the characteristics of Aggregation services.
## 2.4.2 Characteristics
-Aggregation services mainly exist when multiple services share the same contract or primitive types of the same contract, requiring a single point of exposure. They mainly exist in hyper-complex applications where multiple services (usually orchestration or higher but can be lower) require one single point of contact through exposure layers. Let's talk in detail about the main characteristics of Aggregation services.
+Aggregation services mainly exist when multiple services share the same contract or primitive types of the same contract, requiring a single exposure point. They mainly exist in hyper-complex applications where multiple services (usually orchestration or higher but can be lower) require one single point of contact through exposure layers. Let's discuss the main characteristics of Aggregation services in detail.
### 2.4.2.0 No Dependency Limitation
-Unlike any other service, Aggregation services can have any number of dependencies as long as these services are of the same variation. For instance, an Aggregation service cannot aggregate between an Orchestration service and a Coordination service. It's a partial Florance-Like pattern where services must be of the same variation but not necessarily limited by the number.
+Unlike any other service, Aggregation services can have any number of dependencies as long as these services have the same variation. For instance, an Aggregation service cannot aggregate between an Orchestration service and a Coordination service. It's a partial Florance-like pattern where services must have the same variation but are not necessarily limited by the number.
-The lack of limitation of the dependencies for Aggregation services is because the service doesn't perform any level of business logic between these services. It doesn't care what these services do or require. It only focuses on exposing these services regardless of what was called before or after them.
+The dependencies for Aggregation services are not limited because the service doesn't perform any level of business logic between these services. It doesn't care what these services do or require. It only focuses on exposing these services regardless of what they were called before or after.
Here's what an Aggregation service test would look like:
@@ -74,7 +74,7 @@ public async Task ShouldProcessStudentAsync()
}
```
-As you can see above, we only verify and test for the aggregation aspect of calling these services. No return type is required in this scenario, but there might be in the scenarios of pass-through, which we will be discussing shortly.
+As you can see above, we only verify and test for the aggregation aspect of calling these services. No return type is required in this scenario, but there might be one in the pass-through scenarios, which we will discuss shortly.
An implementation of the above test would be as follows:
@@ -90,17 +90,17 @@ public async ValueTask ProcessStudentAsync(Student student)
```
### 2.4.2.1 No Order Validation
-By definition, Aggregation services are naturally required to call several dependencies with no limitation. The order of calling these dependencies is also not a concern or a responsibility for Aggregation services because the call-order verification is considered a core business logic, which falls outside the responsibilities of an Aggregation service. That, of course, includes both natural order of verification or enforced order of verification, as we explained in section 2.3.3.0.1 in the previous chapter.
+By definition, Aggregation services are naturally required to call several dependencies with no limitation. The order of calling these dependencies is also not a concern or a responsibility for Aggregation services because the call-order verification is considered a core business logic, which falls outside the responsibilities of an Aggregation service. That includes both the natural order of verification and the enforced order of verification, as we explained in section 2.3.3.0.1 in the previous chapter.
-It's a violation of The Standard to use simple techniques like a mock sequence to test an Aggregation service. It is also a violation to verify reliance on the return value of one service call to initiate a call to the next. These responsibilities are more likely to fall on the next lower layer of an Aggregation service for any orchestration-like service.
+It violates The Standard to use simple techniques like a mock sequence to test an Aggregation service. It is also a violation to verify reliance on the return value of one service call to initiate a call to the next. These responsibilities are more likely to fall on the next lower layer of an Aggregation service for any orchestration-like service.
### 2.4.2.2 Basic Validations
-Aggregation services are still required to validate whether or not the incoming data is higher-level structurally valid. For instance, an Aggregation service that takes a `Student` object as an input parameter will validate only if the `student` is `null` or not. But that's where it all stops.
+Aggregation services are still required to validate whether or not the incoming data is structurally valid at a higher level. For instance, an Aggregation service that takes a `Student` object as an input parameter will only validate if the `student` is `null`. But that's where it all stops.
There may be an occasion where a dependency requires a property of an input parameter to be passed in, in which case it is also permitted to validate that property value structurally. For instance, if a downstream dependency requires a student name to be passed in. An Aggregation service will still be required to validate if the `Name` is `null`, empty, or just whitespace.
### 2.4.2.3 Pass-Through
-Aggregation services are not required to implement their aggregation by performing multiple calls from one method. They can also aggregate by offering pass-through methods for multiple services. For instance, assume we have `studentCoordinationService`, `studentRecordsService` and `anyOtherStudentRelatedCoordinationService` where each service is independent in terms of business flow. So an aggregation here is only at the level of exposure but not necessarily the level of execution.
+Aggregation services are not required to implement aggregation by performing multiple calls from one method. They can also aggregate by offering pass-through methods for multiple services. For instance, assume we have `studentCoordinationService`, `studentRecordsService`, and `anyOtherStudentRelatedCoordinationService` where each service is independent in terms of business flow. So, an aggregation here is only at the level of exposure, not necessarily the execution level.
Here's a code example:
@@ -135,13 +135,13 @@ public partial class StudentAggregationService
}
```
-As you can see above, each service is using the Aggregation service as a pass-through. There's no need in this scenario whatsoever for an aggregated routines call. This would still be a very valid scenario for Aggregation services.
+As you can see above, each service uses the Aggregation service as a pass-through. There's no need for an aggregated routines call in this scenario, but it would still be a very valid scenario for Aggregation services.
### 2.4.2.4 Optionality
-It is important to mention here that Aggregation services are optional. Unlike foundation services, Aggregation services may or may not exist in architecture. Aggregation services are there to solve a problem with abstraction. This problem may or may not exist based on whether the architecture requires a single point of exposure at the border of the core business logic layer or not. This single responsibility of Aggregation services makes it much simpler to implement its task and perform its function easily. Aggregation services being optional is more likely to be than any other lower-level services. Even in the most complex of applications out there.
+It is essential to mention here that Aggregation services are optional. Unlike foundation services, Aggregation services may or may not exist in architecture. Aggregation services are there to solve a problem with abstraction. This problem may or may not exist based on whether the architecture requires a single exposure point at the border of the core business logic layer. This single responsibility of Aggregation services makes it much simpler to implement its task and perform its function efficiently. Aggregation services being optional is more likely than any other lower-level services, even in the most complex of applications out there.
### 2.4.2.5 Routine-Level Aggregation
-If an aggregation service has to make two different calls from the same dependency amongst other calls, It is recommended to aggregate for every dependency routine. But that's only from a clean-code perspective, and it doesn't necessarily impact the architecture or the end result.
+If an aggregation service has to make two different calls from the same dependency amongst other calls, it is recommended that it aggregate for every dependency routine. But that's only from a clean-code perspective, and it doesn't necessarily impact the architecture or the result.
Here's an example:
@@ -160,21 +160,21 @@ private async ValueTask ProcessStudentRecordAsync(Student student)
```
-This organizational action doesn't warrant any change in testing or end result as previously mentioned.
+As previously mentioned, this organizational action doesn't warrant any change in testing or results.
### 2.4.2.6 Pure Dependency Contracts
An Aggregation service's most important rule/characteristic is that its dependencies (unlike orchestration services) must share the same contract. The input parameter for a public routine in any Aggregation service must be the same for all its dependencies. There may be occasions where a dependency may require a student id instead of the entire student, which is permitted with caution as long as the partial contract isn't a return type of another call within the same routine.
## 2.4.3 Responsibilities
-An Aggregation service's primary responsibility is to offer a single point of contact between exposer components and the rest of the core business logic. But in essence, abstraction is the true value Aggregation services offer to ensure any business component is pluggable into any system regardless of the style of exposure.
+An Aggregation service's primary responsibility is to offer a single point of contact between exposer components and the rest of the core business logic. But in essence, abstraction is the actual value Aggregation services offer to ensure any business component is pluggable into any system regardless of exposure style.
Let's talk about these responsibilities in detail.
### 2.4.3.0 Abstraction
-An aggregation service performs into responsibility successfully when its clients or consumers have no idea what lies beyond the lines of its implementation. An Aggregation service could combine ten different services and expose a single routine in a fire-n-forget scenario.
+An aggregation service successfully assumes responsibility when its clients or consumers have no idea what lies beyond the lines of its implementation. For example, an aggregation service could combine ten different services and expose a single routine in a fire-and-forget scenario.
-But even in pass-through scenarios, Aggregation services abstract away any identification of the underlying dependency from exposers at all costs. It doesn't always happen, especially in terms of localized exceptions but close enough to make the integration seem as if it is with one single service that's offering all the options natively.
+But even in pass-through scenarios, Aggregation services abstract away any identification of the underlying dependency from exposers at all costs. It only sometimes happens, especially in terms of localized exceptions. Still, it is close enough to make the integration seem as if it is with one single service that's offering all the options natively.
### 2.4.3.1 Exceptions Aggregation
-Aggregation services are similar to orchestration-like services when mapping and aggregating exceptions from downstream dependencies. For instance, if `studentCoordinationService` is throwing `StudentCoordinationValidationException`an Aggregation service would map that into `StudentAggregationDependencyValidationException`. This falls back into the concept of exception unwrapping and then wrapping of localized exceptions, which we discussed in detail in section 2.3.3.0.2 of this Standard.
\ No newline at end of file
+Aggregation services resemble orchestration-like services when mapping and aggregating exceptions from downstream dependencies. For instance, if `studentCoordinationService` is throwing, `StudentCoordinationValidationException`an Aggregation service would map that into `StudentAggregationDependencyValidationException`. This falls back into the concept of exception unwrapping and then wrapping of localized exceptions, which we discussed in detail in section 2.3.3.0.2 of this Standard.
\ No newline at end of file
From 5f0577969ebc00cfaab6b087d43eadf580727737 Mon Sep 17 00:00:00 2001
From: Greg Hays
Date: Wed, 17 Apr 2024 13:33:25 -0700
Subject: [PATCH 05/18] CODE RUB: Services 2.0 Grammer Check
---
2. Services/2. Services.md | 57 ++++++++++++++++++++------------------
1 file changed, 30 insertions(+), 27 deletions(-)
diff --git a/2. Services/2. Services.md b/2. Services/2. Services.md
index a8a7cb9..086cc45 100644
--- a/2. Services/2. Services.md
+++ b/2. Services/2. Services.md
@@ -2,11 +2,11 @@
## 2.0 Introduction
-Services, in general, are the containers of all the business logic in any given software - they are the core component of any system and the main component that makes one system different from another.
+Services, in general, are the containers of all the business logic in software�they are the core component of any system and the main component that makes one system different from another.
Our main goal with services is to keep them agnostic from specific technologies or external dependencies.
-Any business layer is more compliant with The Standard if it can plug into any other dependencies and exposure technologies with the least amount of integration effort.
+Any business layer is more compliant with The Standard if it can plug into other dependencies and exposure technologies with the least integration effort.
### 2.0.0 Services Operations
@@ -22,21 +22,21 @@ Let's talk about these categories.
#### 2.0.0.0 Validations
-Validations ensure that incoming or outgoing data match a particular set of rules: structural, logical, or external validations, in that exact order of priority. We will go into details about this in the upcoming sections.
+Validations ensure that incoming or outgoing data match a particular set of rules, such as structural, logical, or external validations, in that exact order of priority. We will discuss this in detail in the upcoming sections.
#### 2.0.0.1 Processing
-Processing mainly focuses on the flow-control, mapping, and computation to satisfy a business need - the processing operations distinguish one service from another and, in general, one software from another.
+Processing mainly focuses on flow control, mapping, and computation to satisfy a business need�the processing operations distinguish one service from another and, in general, one piece of software from another.
#### 2.0.0.2 Integration
Finally, the integration process focuses on retrieving or pushing data from or to any integrated system dependencies.
-We will discuss these aspects in detail in the upcoming chapter. The main thing to understand about services is that the design is to be pluggable and configurable to easily integrate with any technology from a dependency standpoint and easy to plug into any exposure functionality from an API perspective.
+We will discuss these aspects in detail in the upcoming chapter. The main thing to understand about services is that their design is to be pluggable and configurable, allowing them to easily integrate with any technology from a dependency standpoint and easily plug into any exposure functionality from an API perspective.
### 2.0.1 Services Types
-Services have several types based on their disposition in any given architecture. They fall under three main categories: validators, orchestrators, and aggregators.
+Services are classified into several types based on their disposition in any given architecture. They fall into three main categories: validators, orchestrators, and aggregators.
@@ -56,15 +56,18 @@ Orchestrator services are the core of the business logic layer. They can be proc
Orchestrator services mainly focus on combining multiple primitive operations or multiple high-order business logic operations to achieve an even higher goal.
-Orchestrator services are the decision makers within any architecture, the owners of the flow control in any system, and the main component that makes one application or software different from the other.
+Orchestrator services are:
+The decision-makers within any architecture.
+The owners of the flow control in any system.
+The main component that makes one application or software different from the other.
We intentionally design Orchestrator services to be longer-lived than any other type of service in the system.
#### 2.0.1.2 Aggregators
-Aggregator services' primary responsibility is to tie the outcome of multiple processing, orchestration, coordination, or management services to expose one single API for any given API controller or UI component to interact with the rest of the system.
+The primary responsibility of the aggregator services is to tie the outcome of multiple processing, orchestration, coordination, or management services to expose one single API for any given API controller or UI component to interact with the rest of the system.
-Aggregators are the gatekeepers of the business logic layer. They ensure the data exposure components (like API controllers) are interacting with only one point of contact to interact with the rest of the system.
+Aggregators are the gatekeepers of the business logic layer. They ensure the data exposure components (like API controllers) interact with only one point of contact to interact with the rest of the system.
Aggregators, in general, don't care about the order in which they call the operations that are attached to them. Still, it is sometimes necessary to execute a particular operation, such as creating a student record before assigning a library card.
@@ -78,15 +81,15 @@ These rules ensure the system's overall readability, maintainability, and config
#### 2.0.2.0 Do or Delegate
-Every service should either do or delegate the work but not both.
+Every service should either do or delegate the work, but not both.
-For instance, a processing service should delegate the work of persisting data to a foundation service and not try to do that work by itself.
+For instance, a processing service should delegate the work of persisting data to a foundation service rather than try to do that work itself.
#### 2.0.2.1 Two-Three (Florance Pattern)
-For Orchestrator services, their dependencies of services (not brokers) should be limited to two or three, but not one or not four or more.
+For Orchestrator services, the dependencies of services (not brokers) should be limited to two or three, not one, four, or more.
-If an Orchestrator depends only on one service, then it violates the definition of orchestration which is the combination of multiple operations from different sources to achieve a higher order of business logic.
+Suppose an Orchestrator depends only on one service. In that case, it violates the definition of orchestration, which is the combination of multiple operations from different sources to achieve a higher order of business logic.
###### This pattern violates Florance Pattern
@@ -106,7 +109,7 @@ If an Orchestrator depends only on one service, then it violates the definition
The Florance pattern also ensures the balance and symmetry of the overall architecture.
-For instance, you can't orchestrate between a foundation and a processing service. It causes a form of unbalance in your architecture and difficulty when trying to combine one unified statement with the language each service speaks based on their level and type.
+For instance, you can't orchestrate between a foundation and a processing service. This causes an imbalance in your architecture and difficulty when trying to combine one unified statement with the language each service speaks based on its level and type.
The aggregators are the only types of services allowed to violate this rule, where the combination and the order of services or their calls don't have any real impact.
@@ -114,11 +117,11 @@ We will discuss the Florance pattern in detail in the upcoming sections of The S
#### 2.0.2.2 Single Exposure Point
-API controllers, UI components, or any other form of data exposure from the system should have one single point of contact with the business-logic layer.
+API controllers, UI components, or any other form of system data exposure should have one single point of contact with the business logic layer.
-For instance, an API endpoint that offers endpoints for persisting and retrieving student data should not have multiple integrations with multiple services but rather one service that provides all of these features.
+For instance, an API endpoint that offers endpoints for persisting and retrieving student data should not have multiple integrations with multiple services but one service that provides all these features.
-Sometimes, a single orchestration, coordination, or management service does not offer everything related to a particular entity. An aggregator service combines all of these features into one service ready to be integrated with an exposure technology.
+Sometimes, a single orchestration, coordination, or management service does not offer everything related to a particular entity. An aggregator service combines all these features into one service that is ready to be integrated with exposure technology.
#### 2.0.2.3 Same or Primitives I/O Model
@@ -126,7 +129,7 @@ All services must maintain a single contract regarding their return and input ty
For instance, a service that provides operations for an entity type `Student` - should not return from any of its methods from any other entity type.
-You may return an aggregation of the same entity whether it's custom or native such as `List` or `AggregatedStudents` models, or a primitive type like getting students count, or a boolean indicating whether a student exists or not but not any other non-primitive or non-aggregating contract.
+You may return an aggregation of the same entity, whether it's custom or native, such as `List` or `AggregatedStudents` models, or a primitive type like getting students count, or a boolean indicating whether a student exists or not but not any other non-primitive or non-aggregating contract.
A similar rule applies for input parameters - any service may receive an input parameter of the same contract, a virtual aggregation contract, or a primitive type but not any other contract.
@@ -134,17 +137,17 @@ This rule focuses the responsibility of a service on a single entity and all its
When a service returns a different contract, it violates its naming convention like a `StudentOrchestrationService` returning `List` - and it starts falling into the trap of being called by other services from entirely different data pipelines.
-For primitive input parameters, if they belong to a different entity model that is not necessarily a reference to the primary entity, it begs the question of orchestrating between two processing or foundation services to maintain a unified model without breaking the pure-contracting rule.
+If primitive input parameters belong to a different entity model that is not necessarily a reference to the primary entity, it begs the question of orchestrating between two processing or foundation services to maintain a unified model without breaking the pure-contracting rule.
-Suppose the combination of multiple different contracts in an orchestration service is required. In that case, a new unified virtual model indicates the need for a new unique contract for the orchestration service with mappings implemented underneath on the concrete level of that service to maintain compatibility and integration safety.
+Suppose an orchestration service requires a combination of multiple different contracts. In that case, a new unified virtual model indicates the need for a new unique contract for the orchestration service, with mappings implemented underneath on the concrete level of that service to maintain compatibility and integration safety.
#### 2.0.2.4 Every Service for Itself
-Every service is responsible for validating its inputs and outputs. Do not rely on services up or downstream to validate your data.
+Every service is responsible for validating its inputs and outputs. Do not rely on services upstream or downstream to validate your data.
-This is a defensive programming mechanism to ensure that in case of swapping implementations behind contracts, the responsibility of any given service is not be affected if down or upstream services decided to pass on their validations for any reason.
+This is a defensive programming mechanism to ensure that if implementations are swapped behind contracts, the responsibility of any given service is not affected if downstream or upstream services decide to pass on their validations for any reason.
-Within any monolithic, microservice, or serverless architecture-based system, every service is designed so that it can split off from the system at some point and become the last point of contact before integrating with some external resource broker.
+Within any monolithic, microservice, or serverless architecture-based system, every service is designed to split off from the system at some point and become the last point of contact before integrating with some external resource broker.
For instance, in the following architecture, services map parts of an input `Student` model into a `LibraryCard` model. Here's a visual of the models:
@@ -198,11 +201,11 @@ private LibraryCard MapToLibraryCard(Student student)
}
```
-As you can see above, a valid student id is required to ensure a successful mapping to a `LibraryCard` - and since the mapping is the orchestrator's responsibility, we are required to ensure that the input student and its id is in good shape before proceeding with the orchestration process.
+As you can see above, a valid student id is required to map to a `LibraryCard successfully`. Since the mapping is the orchestrator's responsibility, we must ensure that the input student and its id are in good shape before proceeding with the orchestration process.
#### 2.0.2.5 Flow Forward
Services cannot call services at the same level. For instance, Foundation Services cannot call other Foundation Services, and Orchestration Services cannot call other Orchestration Services from the same level.
-This principle is called a Flow-Forward - as the illustation shows:
+This principle is called a Flow-Forward - as the illustration shows:
@@ -234,6 +237,6 @@ public async ValueTask ModifyStudentAsync(Student student)
}
```
-In a Foundation Service example above, we cannot call `RetriveStudentByIdAsync` in a `public` method from another `public` method such as `ModifyStudentAsync`. You will see that both methods called the exact same method from a lower dependency like a `StorageBroker` full independent of one another.
+In the Foundation Service example above, we cannot call `RetriveStudentByIdAsync` in a `public` method from another `public` method such as `ModifyStudentAsync`. You will see that both methods call the exact same method from a lower dependency, like a `StorageBroker`, fully independent of one another.
-While this may seem redundant, the reason for this is that `public` APIs, contracts or otherwise are destined to be deprecated at some point in it's lifetime. It may also be changed completely from an implementation standpoint. If a `public` API depended on another `public` API at the same level the deprecation of one will cause a cascading effect on all others. That's a symptom of Chaotic design which The Standard strongly prohibits.
+While this may seem redundant, the reason for this is that `public` APIs, contracts, or otherwise, are destined to be deprecated at some point in their lifetime. They may also be changed completely from an implementation standpoint. If a `public` API depended on another `public` API at the same level, the deprecation of one will cause a cascading effect on all others. That's a symptom of Chaotic design, which The Standard strongly prohibits.
\ No newline at end of file
From 7de637d4f17e5acad5448cdecdb5c9724cc81bb1 Mon Sep 17 00:00:00 2001
From: Greg Hays
Date: Wed, 17 Apr 2024 14:31:01 -0700
Subject: [PATCH 06/18] CODE RUB: Introduction 0 Grammer Check
---
0. Introduction/0. Introduction.md | 17 ++++++++---------
1 file changed, 8 insertions(+), 9 deletions(-)
diff --git a/0. Introduction/0. Introduction.md b/0. Introduction/0. Introduction.md
index 6cbff4f..0e959a6 100644
--- a/0. Introduction/0. Introduction.md
+++ b/0. Introduction/0. Introduction.md
@@ -1,22 +1,21 @@
-# 0 Introduction
+0 Introduction
This is The Standard. A collection of decades of experience in the engineering industry. I authored it to help you navigate the vast ocean of knowledge. The Standard is not perfect and never will be, and it reflects the ongoing evolution of the engineering industry. While one person may write it, it is the collection of thoughts from hundreds of engineers I've had the honor to interact with and learn from throughout my life.
-The Standard holds hundreds of years of collective experiences from many different engineers. As I have traveled the world and worked in various industries, I've had the chance to work with many kinds of engineers - some of them were mad scientists who would fixate on minor details of every routine. Others have been business engineers who cared more about the results than the means to get to these results. In addition to others, I've learned from them what makes a simple engineering guide that can light the way for all other engineers to be inspired by it and hopefully follow it. And therefore, I have made this Standard, hoping it will be a compass for engineers to find the best way to engineer solutions that will hopefully change the world.
+The Standard holds hundreds of years of collective experiences from many different engineers. As I have traveled the world and worked in various industries, I've had the chance to work with many kinds of engineers - some of them were mad scientists who would fixate on minor details of every routine. Others have been business engineers who cared more about the results than the means to get to these results. In addition to others, I've learned from them what makes a simple engineering guide that can light the way for all other engineers to be inspired by it and follow it. Therefore, I have made this Standard, hoping it will be a compass for engineers to find the best way to engineer solutions that will hopefully change the world.
-This Standard is an appeal to engineers worldwide: read through it and make extracts of their experiences and knowledge to enrich an engineering Standard worthy of the industry of software. Today, we know the earth's origins, of man, and all the animals. We know how hot boiling water is; how long a yard is. Our ships' masters know the precise measurements of latitude and longitude. Yet, we have neither chart nor compass to guide us through the vast sea of code. The time has come to accord our great craft with the same dignity and respect as the other standards defined by science.
+This Standard requests engineers worldwide to read through it and extract their experiences and knowledge to enrich an engineering Standard worthy of the software industry. Today, we know the origins of man and all the animals on earth. We know how hot boiling water is and how long a yard is. Our ships' masters know the precise measurements of latitude and longitude. Yet, we have neither a chart nor a compass to guide us through the vast sea of code. The time has come to accord our great craft with the same dignity and respect as the other standards defined by science.
-In this industry, The Standard has immense value for those still finding their way. Or even those who have lost their way. And the Standard can guide them towards a better future. But more importantly, The Standard is written for everyone, equally, to inspire every engineer or engineer-to-be to look forward to focusing on what matters the most about engineering--its purpose, not its technicalities. When engineers have any form of Standard, I have observed that they start focusing more on what can be accomplished in our world today. When engineers follow some form of Standard, their energy and focus become more on what to achieve, not on how.
-I collected, then authored this Standard, hoping it will eliminate much confusion and enable engineers to focus on what matters most--use technology as a means for higher purposes and establish its equivalent goals. The art and science of designing software have come a long way and have proven to be one of the most powerful tools a person could have today. It deserves a proper introduction, and how we educate youth about it matters.
+The Standard has immense value for those still finding their way in this industry. Or even those who have lost their way. And the Standard can guide them towards a better future. More importantly, the Standard is written for everyone equally to inspire every engineer or engineer-to-be to look forward to focusing on what matters the most about engineering- its purpose, not its technicalities. When engineers have any form of Standard, I have observed that they start focusing more on what can be accomplished in our world today. When engineers follow some form of Standard, their energy and focus become more on what to achieve rather than how. I collected and then authored this Standard, hoping it will eliminate confusion and enable engineers to focus on what matters most--use technology for higher purposes and establish its equivalent goals. The art and science of designing software have come a long way and have proven to be one of the most powerful tools a person could have today. It deserves a proper introduction, and how we educate youth about it matters.
-Essentially, The Standard is my interpretation of SOLID principles and many other practices and patterns that continue to enrich our designs and development to achieve solid systems. The Standard aims to help every engineer find guidance in their day-to-day work. But more importantly, the Standard can ensure every engineer that they have the guidance required when they need to build robust systems that can land on the moon, solve the most complex problems, and ensure the survival of humankind and its evolution.
+Essentially, The Standard is my interpretation of SOLID principles and many other practices and patterns that continue to enrich our designs and development to achieve solid systems. The Standard aims to help every engineer find guidance in their day-to-day work. But more importantly, the Standard can ensure that every engineer has the guidance required when they need to build robust systems that can land on the moon, solve the most complex problems, and ensure the survival of humankind and its evolution.
-The Standard is intentionally technology-agnostic. Its principles apply to any programming language, and its tri-nature of foundation can guide any development or design decisions beyond software. The Standard shall not be tied to any particular technology, nor shall it be a limitation to those who want to follow it, regardless of their language of preference. I will be using C# on the .NET framework only to materialize and realize the concepts of this Standard. But know that at the early stages of forming this Standard, I was heavily using Scala as a programming language.
+The Standard is intentionally technology-agnostic. Its principles apply to any programming language, and its tri-nature foundation can guide any development or design decisions beyond software. The Standard shall not be tied to any particular technology, nor shall it be a limitation to those who want to follow it, regardless of their language of preference. I will be using C# on the .NET framework only to materialize and realize the concepts of this Standard. However, knowing that at the early stages of forming this Standard, I was heavily using Scala as a programming language.
-But what's more important about The Standard? The Standard is the option to set a measure for expertise, influence, and knowledge depth before making any decisions. It's also meant to play the role of inspiration for generations of engineers to come to either follow it, improve on it, or come up with their own. The alternative is to build software without standards, which is subject to chaos and injustice when investing the best time into the best efforts. Our industry today is in chaos in terms of standardization. Unqualified individuals may have or take leadership positions and influence those much more qualified to make unfortunate decisions.
+But what's more important about The Standard? The Standard is the option to set a measure for expertise, influence, and knowledge depth before making any decisions. It is also meant to play the role of inspiration for generations of engineers to come to either follow it, improve on it or come up with their own. The alternative is to build software without standards, which is subject to chaos and injustice when investing the best time into the best efforts. Our industry today is in chaos in terms of standardization. Unqualified individuals may have or take leadership positions, influencing those much more qualified to make unfortunate decisions.
The Standard is also my labor of love for the rest of the world. It is driven by and written with a passion for enhancing the engineering experience and producing efficient, rugged, configurable, pluggable, and reliable systems that can withstand any challenges or changes that occur almost daily in our industry.
[*] [Introduction to The Standard](https://www.youtube.com/watch?v=8PveoymxCok)
-[*] [Questions about The Standard](https://www.youtube.com/watch?v=Au7G_y4BkbY)
+[*] [Questions about The Standard](https://www.youtube.com/watch?v=Au7G_y4BkbY)
\ No newline at end of file
From 42a4e3717f75cc77c8e491439e2665429dea0330 Mon Sep 17 00:00:00 2001
From: Greg Hays
Date: Wed, 17 Apr 2024 14:57:49 -0700
Subject: [PATCH 07/18] CODE RUB: TheTheory 0.0 Grammer Check
---
0. Introduction/0.0 The Theory.md | 46 +++++++++++++++----------------
1 file changed, 23 insertions(+), 23 deletions(-)
diff --git a/0. Introduction/0.0 The Theory.md b/0. Introduction/0.0 The Theory.md
index b612142..319fa96 100644
--- a/0. Introduction/0.0 The Theory.md
+++ b/0. Introduction/0.0 The Theory.md
@@ -4,15 +4,15 @@
When designing any system, it is of utmost importance for designers to back up their design with a particular theory. Theories play a massive role in ensuring their design's purposes, models, and simulations are cohesive and extensible within a specific domain.
No matter how chaotic it may seem, any system is influenced by at least one theory created by the designer or inherited from previous designers or their methods.
-Regardless of what or who the influencer may be, it is essential for the designer to understand the theory they follow fully, or it will negatively impact their future decisions in terms of extending their design to keep up with a forever-changing and expanding universe.
+Regardless of what or who the influencer may be, the designer needs to understand the theory they follow fully. Otherwise, it will negatively impact their future decisions regarding extending their design to keep up with a forever-changing and expanding universe.
-Early on, I realized that the simpler any theory is, the easier it becomes for other designers to adapt and extend its reach beyond the original designer's dreams. A universe built on simpler patterns can make it a lot easier for those who marvel at its beauty to understand and appreciate it much more than those who give in to the fact that it's complex beyond their comprehension.
+Early on, I realized that the simpler any theory is, the easier it becomes for other designers to adapt and extend its reach beyond the original designer's dreams. A universe built on simpler patterns can make it much easier for those who marvel at its beauty to understand and appreciate it more than those who give in to the fact that it's complex beyond their comprehension.
-A theory about the universe could make life much more purposeful, enriched with all sorts of tales about survival, evolution, and fulfillment.
+A theory about the universe could make life much more purposeful and enriched with tales about survival, evolution, and fulfillment.
## 0.0.1 Finding Answers
-Early on in my life, I struggled with schooling. Nothing that was taught to me made any sense to me. It seemed that everyone at school was more concerned with memorizing and regurgitating what they've memorized during their exam than truly understanding what was taught, questioning its origins, and validating its purposes.
+Early on in my life, I struggled with schooling. Nothing that was taught to me made any sense to me. Everyone at school seemed more concerned with memorizing and regurgitating what they've memorized during their exam than truly understanding what was taught, questioning its origins, and validating its purposes.
I realized at an earlier age that I needed some magical equation to help me distinguish between what's true and what's not, right and wrong, what is driven by a purpose, and what's an imitation for those with an actual purpose.
@@ -20,22 +20,22 @@ I was named all kinds of names during my schooling years. But I didn't mind much
When looking for answers, keeping your heart and mind open to all options is essential. Don't let any social or traditional structures limit your mind from seeking the truth about the universe and embracing the answers from everywhere.
-After years and years of search, I settled on a theory that made it simple for a simple person like myself to understand everything. I called it The Tri-Nature of Everything.
+After years of searching, I settled on a theory that made it simple for a simple person like myself to understand everything. I called it The Tri-Nature of Everything.
## 0.0.2 Tri-Nature
-The Tri-Nature theory states that everything in this world is comprised of 3 main categories. Dependencies, purposes, and exposures. Each of these components plays a crucial role in its system's survival, evolution, and fulfillment.
+The Tri-Nature theory states that everything in this world comprises three main categories: dependencies, purposes, and exposures. Each component is crucial to its system's survival, evolution, and fulfillment.
Let's talk about these components here.
### 0.0.2.0 Purpose
-Everything around us has a purpose. It was created and designed with a specific reason in the mind of its creator. We design cars to take us from point A to point B. We design cups for drinking, plates for eating, and shoes for walking. Everything has a core purpose that governs its design and legitimatizes its existence.
+Everything around us has a purpose. It was created and designed with a specific reason in the mind of its creator. We design cars to take us from point A to point B. We design cups for drinking, plates for eating, and shoes for walking. Everything has a core purpose that governs its design and legitimizes its existence.
### 0.0.2.1 Dependency
-But every system must have a dependency in one form or another in order. For instance, we as biological systems rely on food and water to survive. Cars rely on oil or electricity. Computer systems rely on power and electricity and so on. Regardless of its impact and importance, every method must have a dependency of some kind, no matter how small or big it may be.
+However, every system must have a dependency in one form or another. For instance, we, as biological systems, rely on food and water to survive. Cars rely on oil or electricity. Computer systems rely on power and electricity, and so on. Regardless of its impact and importance, every method must have a dependency, whether small or big.
### 0.0.2.2 Exposure
-Every system must expose itself to allow other systems to integrate and consume its capabilities. But to become a dependency, it must reveal itself somehow for other systems to rely on it. For instance, power outlets are an exposure layer for power sources to allow other systems to plug in and consume their services. Gas stations are exposure layers for underground oil tanks to store that oil.
+Every system must expose itself to allow other systems to integrate and consume its capabilities. However, it must reveal itself somehow to become a dependency for different systems to rely on it. For instance, power outlets are an exposure layer for power sources to allow other systems to plug in and consume their services. Gas stations are exposure layers for underground oil tanks to store that oil.
@@ -44,9 +44,9 @@ Every system must expose itself to allow other systems to integrate and consume
### 0.0.3 Everything is Connected
-In the larger scheme of things, all systems out there are connected. A simple example of this is the food chain in nature. The sun is a dependency for the grass to grow; grasshoppers are grass consumers while frogs feed on grasshoppers, snakes feed off of frogs, and so on.
+In the larger scheme of things, all systems are connected. A simple example of this is the food chain in nature. The sun is a dependency for the grass to grow; grasshoppers are grass consumers, while frogs feed on grasshoppers, snakes feed off of frogs, and so on.
-Every member of the food chain is a system with dependencies, purposes, and exposure.
+Every food chain member is a system with dependencies, purposes, and exposure.
@@ -54,12 +54,12 @@ Every member of the food chain is a system with dependencies, purposes, and expo
-Since computer systems are nothing but a reflection of our reality, these systems integrations represent a chain of infinite dependencies where each one of these systems relies on one or more systems to fulfill its purpose. A simple mobile application could rely on a backend system to persist its data. But the backend system relies on a cloud-based system to store the data. And the cloud-based system relies on a file system to perform basic persistence operations and so on.
+Since computer systems are nothing but a reflection of our reality, these systems integrations represent a chain of infinite dependencies where each one of these systems relies on one or more systems to fulfill its purpose. A simple mobile application could rely on a backend system to persist its data. However, the backend system relies on a cloud-based system to store the data. And the cloud-based system relies on a file system to perform basic persistence operations and so on.
### 0.0.4 Fractal Pattern
-The Tri-Nature pattern of Things could also be perceived at the smallest scale of any system and the largest scale. Every system out there is infinitely comprised of three components, each of which has three components and so on. That's what we call a fractal pattern.
+The Tri-Nature pattern of Things could also be perceived at the smallest and largest scales of any system. Every system is infinitely comprised of three components, each of which has three components, and so on. That's what we call a fractal pattern.
-For instance, the smallest known component in the universe is the quarks within neutron within an atom. These quarks are three components, two down quarks and one up quark. But if you zoom out slightly, you would see that the larger system where these quarks reside is also comprised of three components: electrons, protons, and neutrons.
+For instance, the smallest known component in the universe is the quarks within a neutron within an atom. These quarks are three components: two down quarks and one up quark. But if you zoom out slightly, you will see that the more extensive system where these quarks reside also comprises three components: electrons, protons, and neutrons.
@@ -67,7 +67,7 @@ For instance, the smallest known component in the universe is the quarks within
-If we zoom far out from the sub-atomic level to the solar system, the pattern continues to repeat at a massive scale. Our solar system is comprised of the sun, planets, and moons. And they fall within the dependency purposing and exposure patterns as the components in the sub-atomic level as follows:
+If we zoom far out from the sub-atomic level to the solar system, the pattern continues to repeat at a massive scale. Our solar system is comprised of the sun, planets, and moons. They fall within the dependency purposing and exposure patterns as the components in the sub-atomic level as follows:
@@ -80,11 +80,11 @@ And if we zoom further out at scale, we find that galaxies are made of dust, gas
The Tri-Nature pattern continues to repeat itself in every aspect of our lives. Every component in our universe, from the smallest sub-atomic parts to the scale of galaxies and solar systems, follows the same rule.
### 0.0.5 Systems Design & Architecture
-It is now evident that we can follow a theory to design systems! We can now develop every component in our software according to The Tri-Nature of Everything. The rules and guidelines that govern designing software according to The Theory is called The Standard. It refers to the universal standard in designing systems in every matter.
+It is now evident that we can follow a theory to design systems! We can now develop every component in our software according to The Tri-Nature of Everything. The rules and guidelines that govern software design according to the theory are called the Standard. It refers to the universal standard in designing systems in every matter.
-The Standard dictates at the low-level architecture that every system out there should be comprised of brokers (dependencies) and services (purposes), and exposers (exposures).
+The Standard dictates at the low-level architecture that every system out there should be comprised of brokers (dependencies) and services (purposes), as well as exposers (exposures).
-For instance, when designing a simple RESTful API, we may need to integrate with a database system, then validate incoming data based on specific business rules and expose these capabilities to the outside world for the API consumers to integrate with it.
+For instance, when designing a simple RESTful API, we may need to integrate with a database system, validate incoming data based on specific business rules, and expose these capabilities to the outside world so that the API consumers can integrate with it.
According to The Standard, that system would look like this:
@@ -94,7 +94,7 @@ According to The Standard, that system would look like this:
-The same pattern would repeat itself when digging deeper into any of these components. For instance, a service is comprised of validation components, processing components, and integration components. And then, if we zoom in a bit further, these same validation components are comprised of three more refined components: structural, logical, and external. The pattern continues to go on and on to the lowest level of our design, as shown here:
+The same pattern would repeat itself when digging deeper into these components. For instance, a service is comprised of validation components, processing components, and integration components. And then, if we zoom in a bit further, these same validation components are comprised of three more refined components: structural, logical, and external. The pattern continues to go on and on to the lowest level of our design, as shown here:
@@ -110,13 +110,13 @@ The same pattern also applies to larger systems if we zoom out of the one system
-In a distributed system, some services play the role of ambassadors to external or local resources, equivalent to a broker component at the service level. But then a purpose-driven component must come into play to orchestrate business flows by combining one or many primitive resource-consumption operations from these ambassador services. The final part is the exposure layer, a thin gatekeeper layer that becomes the first point of contact between the outside world and your microservice architecture.
+In a distributed system, some services act as ambassadors to external or local resources, equivalent to a broker component at the service level. However, a purpose-driven component must come into play to orchestrate business flows by combining one or many primitive resource-consumption operations from these ambassador services. The final part is the exposure layer, a thin gatekeeper layer that becomes the first point of contact between the outside world and your microservice architecture.
-The same pattern of tri-nature will continue to repeat itself across several systems, may it be large across multiple organizations or small within one single service.
+The same pattern of Tri-Nature will continue to repeat itself across several systems, may it be large across multiple organizations or small within one single service.
### 0.0.6 Conclusion
-In conclusion, The Tri-Nature of Everything is the theory that powers up The Standard. The Tri-Nature theory heavily influences every single aspect of the rules and guidelines of The Standard. But it's important to understand that the theory goes well beyond designing some software system. It can apply to management styles, writing books, making meals, establishing relationships, and every other aspect of our lives, which goes beyond the purpose of The Standard here.
+In conclusion, The Tri-Nature of Everything is the theory that powers up The Standard. The Tri-Nature theory heavily influences every single aspect of the rules and guidelines of The Standard. But it's important to understand that the theory goes beyond designing some software system. It can apply to management styles, writing books, making meals, establishing relationships, and every other aspect of our lives, which goes beyond the purpose of The Standard here.
After so many years of research and experimentation with the Tri-Nature theory, it is evident now that it works! It helps simplify some of the most complex systems out there. It plays well with our intuition as human beings. It makes it even simpler for automatons in the future to expedite our development processes of software and hardware and everything else in between.
-Finally, The Standard is an ongoing journey of continuing to question The Tri-Nature theory. The further we go into uncharted waters in business domains, the more we discover some new territories where my theory still stands. Even for the most chaotic systems out there, the theory applies in certain ways, even if the components of said systems don't entirely adhere to The Standard form of distinction.
+Finally, The Standard is an ongoing journey of questioning The Tri-Nature theory. The further we go into uncharted waters in business domains, the more we discover some new territories where my theory still stands. Even for the most chaotic systems out there, the theory applies in specific ways, even if the components of said systems don't entirely adhere to The Standard form of distinction.
\ No newline at end of file
From c570494003868cd550c92c648460d8dd855b9b1a Mon Sep 17 00:00:00 2001
From: Greg Hays
Date: Wed, 17 Apr 2024 16:26:35 -0700
Subject: [PATCH 08/18] CODE RUB: Purposing... 0.1 Grammer Check
---
.../0.1 Purposing, Modeling & Simulation.md | 86 +++++++++----------
1 file changed, 43 insertions(+), 43 deletions(-)
diff --git a/0. Introduction/0.1 Purposing, Modeling & Simulation.md b/0. Introduction/0.1 Purposing, Modeling & Simulation.md
index 1916a58..eb9f3be 100644
--- a/0. Introduction/0.1 Purposing, Modeling & Simulation.md
+++ b/0. Introduction/0.1 Purposing, Modeling & Simulation.md
@@ -1,10 +1,10 @@
-# 0.1 Purposing, Modeling and Simulation
+# 0.1 Purposing, Modeling, and Simulation
## 0.1.0 Introduction
-The Standard defines the software engineering process in three main categories: Purposing, Modeling, and Simulation. Each of these aspects plays a crucial role in guiding engineering efforts toward producing a successful solution and fulfilling a particular purpose.
+The Standard defines the software engineering process in three main categories: Purposing, Modeling, and Simulation. Each aspect is crucial in guiding engineering efforts toward a successful solution and fulfilling a purpose.
-The order in which these aspects are followed is also intentional. A purpose must exist to shape the modeling process. And one can't simulate interactions with no models. But while that order at the initiation of the engineering process is crucial, It's important to understand the process itself is selectively iterative. A change in the purpose may reflect as a change in the simulation but not necessarily the modeling. A change in the models may not necessarily require changing the purpose or the simulation.
+The order in which these aspects are followed is also intentional. A purpose must exist to shape the modeling process, and one can't simulate interactions without models. But while that order at the initiation of the engineering process is crucial, it's important to understand that the process is selectively iterative. A change in the purpose may reflect a change in the simulation. Still, not necessarily the modeling, and a change in the models may not necessarily require changing the purpose or the simulation.
@@ -14,7 +14,7 @@ The order in which these aspects are followed is also intentional. A purpose mus
## 0.1.1 Purposing
-The purposing process is our ability to find out *why* we need a solution. For instance, if we have an issue with knowing how many items are on the shelf in some grocery store. We deem the manual counting process inefficient, and a system needs to be implemented to ensure we have the proper count of items.
+The purposing process is our ability to find out why we need a solution. For instance, if we have an issue with knowing how many items are on the shelf in a grocery store, we deem the manual counting process inefficient, and a system needs to be implemented to ensure we have the proper count of items.
Reasoning relies heavily on our ability to observe problems and then articulate a problem to devise a solution that addresses the given problem. Purposing, therefore, is to find a reason to take action.
@@ -24,29 +24,29 @@ So, we have the observation, the articulation of the reasoning (the problem), an
We live in a world full of observables. Our inspiration is triggered by our ambition to achieve more. Our dreams reveal blockers in our way that we need to solve to continue our journey and fulfill our dreams. From the moment a young student uses a calculator to solve a complex equation to the moment that same student becomes an astronaut, calculating the trajectory of satellites orbiting our planet.
-Observation is our ability to detect an issue that's blocking a goal from being achieved. Issues can be as simple as having the proper count of items on a grocery store shelf, all the way up to understanding why we can't capture images of planets millions of light years away from us. These are what engineers would describe as observable problems.
+Observation is our ability to detect an issue that's blocking a goal from being achieved. Issues can be as simple as having the proper count of items on a grocery store shelf or as complex as understanding why we can't capture images of planets millions of light-years away. Engineers would describe these as observable problems.
-The greater the purpose, the more complex a problem will be. But starting with smaller purposes is a way to train our minds to tackle bigger ones—step by step, one problem at a time.
+The greater the purpose, the more complex a problem will be. But starting with more minor purposes is a way to train our minds to tackle bigger ones—step by step, one problem at a time.
### 0.1.1.1 Articulation
-Describing the observable is an art in and of itself because describing a problem well is halfway to its solution. The clearer the articulation of the problem is, the more likely the problem is to be understood by others helping us to solve that very same problem.
+Describing the observable is an art in and of itself because describing a problem well is halfway to its solution. The clearer the articulation of the problem, the more likely it is to be understood by others, helping us to solve that very same problem.
-Articulation isn't always with words. It's also with figures and shapes. It is not an accident that some of the most advanced ancient cultures have used figures and shapes to describe their times and history. Figures are a universal language, understood and interpreted by anyone who can relate to them much faster than learning a spoken language. A figure or shape might be the most optimum way to illustrate an idea, as its pictures are worth thousands of words.
+Articulation is only sometimes with words. It's also with figures and shapes. It is not an accident that some of the most advanced ancient cultures have used figures and shapes to describe their times and history. Figures are a universal language, understood and interpreted by anyone who can relate to them much faster than learning a spoken language. A figure or shape might be the most optimum way to illustrate an idea, as its pictures are worth thousands of words.
-Articulation requires a passion for solving the issue. Whether it's written, spoken, or illustrated. A passionate mind shall convey the hidden message of the criticality of the problem to be solved. Articulating a problem is a big part of how one can sell a solution. Our ability to convey an idea to other engineers and those who will be investing and using this solution is one of the most critical aspects of engineering software.
+Articulation requires a passion for solving the issue, whether written, spoken, or illustrated. A passionate mind conveys the hidden message of the criticality of the problem to be solved. Articulating a problem is a big part of selling a solution. Our ability to convey an idea to other engineers and those investing in and using this solution is one of the most critical aspects of engineering software.
### 0.1.1.2 Solutioning
-A part of the purpose is the way to fulfill it. In the engineering industry, fulfilling the goals can't just be by -any- means. A huge reason so much software fails worldwide is that the solutioning aspect was overlooked as a trivial part of the purpose. You may have heard of engineers up against a deadline who decide to cut corners to achieve a goal. In our Standard, this is a violation. A solution *must* not simply reach a goal but must be a purpose in and of itself, in aid of ambient architectural issues such as optimization, readability, configurability, and longevity. Solutioning as part of the purpose is software craftsmanship.
+A part of the purpose is the way to fulfill it. In the engineering industry, fulfilling the goals can be done by more than just any- means. Software fails worldwide because the solutions aspect was overlooked as a trivial part of the purpose. You may have heard of engineers up against a deadline who decide to cut corners to achieve a goal. In our Standard, this is a violation. A solution *must* not simply reach a goal but must be a purpose in and of itself to aid ambient architectural issues such as optimization, readability, configurability, and longevity. Solutioning is part of the purpose of software craftsmanship.
## 0.1.2 Modeling
-Modeling is the second most crucial aspect of software engineering. We can extract models from the actors in any problem, whether these actors are living beings, objects, or others. Extracting only the attributes relevant to the problem we are trying to solve and discarding everything else. For instance, when we are trying to count the items on a grocery store shelf, we would need a model for these items.
+Modeling is the second most crucial aspect of software engineering. We can extract models from the actors in any problem, whether these actors are living beings, objects, or others. We extract only the attributes relevant to the problem we are trying to solve and discard everything else. For instance, when trying to count the items on a grocery store shelf, we would need a model for these items.
-A more straightforward example would be detecting the items in a grocery store that are perishable. The only attribute we are concerned with here is the expiration date on the item. Everything else, including the label, color, weight, or any other details, is out of the scope of the modeling process and the solution.
+A more straightforward example would be detecting perishable items in a grocery store. The only attribute we are concerned with here is the expiration date on the item. Everything else, including the label, color, weight, or any other details, is outside the scope of the modeling process and the solution.
-Modeling then cannot exist without a purpose. As the purpose defines the scope or the framework of which the modeling should occur. Modeling without a purpose leaves the door open for attracting an infinite number of attributes every single element in the observable universe may have.
+Modeling, then, can only exist with a purpose. The purpose defines the scope or the framework of which the modeling should occur. Modeling without a purpose leaves the door open for attracting an infinite number of attributes every single element in the observable universe may have.
The relationship between the purposing and modeling attributes is proportional. The more complex the purpose is, the more likely the modeling process will require more attributes from the real world to model in the solution.
@@ -61,7 +61,7 @@ public class Item
The name of the `class` represents the overall *type* of the item. Since all items have the same attribute of `ExpirationDate`, the name shall stay as generic as possible.
-Now, imagine if our purpose grew a bit more complex. Let's assume the new problem is to identify the more expensive perishable items so the store can put them up front for selling before the less costly items. In this case, our model would require a new attribute such as `Price` so a computer program or a solution can determine which is more valuable. This is what our new model would look like:
+Now, imagine if our purpose grew more complex. Let's assume the new problem is identifying the more expensive perishable items so the store can put them up front for selling before the less costly items. In this case, our model would require a new attribute such as `Price` so a computer program or a solution can determine which is more valuable. This is what our new model would look like:
```csharp
public class Item
@@ -72,17 +72,17 @@ public class Item
```
### 0.1.2.0 Model Types
-Models govern the entire process of simulating a problem (and its solution). Models themselves break into three main categories, Data Carriers, Operational and Configurations. Let's discuss those types in the following sections:
+Models govern the entire process of simulating a problem (and its solution). Models break into three main categories: Data Carriers, Operational, and Configurations. Let's discuss those types in the following sections:
#### 0.1.2.0.0 Data Carrier Models
-Data carrier models have one main purpose which is to carry data points across systems. Data carrier models can vary based on the type of data they carry. Some data carrier models will carry other models to represent a complex system. Some others will just represent references to the original data points it represents.
+Data carrier models have one primary purpose: to carry data points across systems. They can vary based on the type of data they carry. Some carry other models to represent a complex system, while others represent references to the original data points they represent.
-Data carrier models in a relational fashion can be broken into three different categories. These categories makes it a lot clearer what the areas of priority are in terms of development, design and engineering. For instance, we can't start developing secondary/supporting models if we don't have our primary models in place first. Let's talk about these categories in detail:
+Data carrier models in a relational fashion can be broken into three different categories. These categories make the priority development, design, and engineering areas much clearer. For instance, we can only start developing secondary/supporting models if our primary models are in place first. Let's talk about these categories in detail:
#### 0.1.2.0.0.0 Primary Models
-Primary models are the pillars of every system. Any given system cannot proceed in design and engineering without a clear definition and materializing of these primary models. For instance, if we are building a schooling system, models like `Student`, `Teacher` and `Course` are considered Primary models.
+Primary models are the pillars of every system. Any given system can only proceed in design and engineering with a clear definition and materializing of these primary models. For instance, if we are building a schooling system, models like `Student`, `Teacher`, and `Course` are considered Primary models.
-Primary relational storage schema models do not contain foreign keys or references to any other physical model. We call these models Primary because they are self-sufficient. They don't rely physically on some other model to exist. Meaning, that a given Primary model like `Student` may still exist in a schooling system whether a `Teacher` record exists or not. This is called physical dependency.
+Primary relational storage schema models do not contain foreign keys or references to any other physical model. We call these models Primary because they are self-sufficient. They don't rely physically on some other model to exist. This means that a given primary model like `Student` may still exist in a schooling system, regardless of whether a `Teacher` record exists or not. This is called physical dependency.
Primary models, however, may rely conceptually or logically on other models. For instance, a `Student` model has a logical relationship to a `Teacher`, simply because there can never be a student without a teacher and vice versa. A `Student` model also has a conceptual relationship with its host and neighboring hosting services. For instance, there's a conceptual relationship between a `Student` model and a `Notification` model regarding business flow. Any student in any school conceptually relies on notifications to attend classes and complete assignments or other events.
@@ -90,7 +90,7 @@ Primary models, however, may rely conceptually or logically on other models. For
#### 0.1.2.0.0.1 Secondary Models
On the other hand, *Secondary* models have a hard dependency on Primary models. In a relational database model, secondary models usually have foreign keys referencing another model in the overall database schema. But even in non-relational storage systems, secondary models can be represented as nested entities within a given larger entity or have a loose reference to another entity.
-Let's talk about some examples for secondary models. A `Comment` model in a social media platform cannot exist without a `Post` model. You cannot comment on something that doesn't exist. In a relational database, the comments model would look something like this:
+Let's talk about some examples of secondary models. A `Comment` model in a social media platform cannot exist without a `Post` model. You cannot comment on something that doesn't exist. In a relational database, the comments model would look something like this:
@@ -115,12 +115,12 @@ In the example above, a secondary model `Comment` has a foreign key `PostId` ref
}
```
-Secondary models in general may have logical and conceptual relations to other models within their host, neighboring or external systems. However, their chances of having these conceptual relations are much less than the Primary models.
+Secondary models may generally have logical and conceptual relations to other models within their host, neighboring, or external systems. However, their chances of having these conceptual relations are much less than those of Primary models.
#### 0.1.2.0.0.2 Relational Models
-Relational models are connectors between two Primary models. Their main responsibility is to materialize a many-to-many relationship between two entities. For instance, a `Student` may have multiple teachers; and a `Teacher` may have multiple students. In this case we need a relational model to act as an intermediary model.
+Relational models are connectors between two Primary models. Their main responsibility is to materialize a many-to-many relationship between two entities. For instance, a `Student` may have multiple teachers, and a `Teacher` may have multiple students. In this case, we need a relational model to act as an intermediary model.
-Relational models are not supposed to have any details in them. They only contain references to other models and that is their primary key. A composite key that aggregates two or more foreign keys within it. Let's take a look at an example:
+Relational models are not supposed to have any details. They only contain references to other models, which is their primary key. A composite key that aggregates two or more foreign keys within it. Let's take a look at an example:
@@ -131,7 +131,7 @@ Relational models are not supposed to have any details in them. They only contai
#### 0.1.2.0.0.3 Hybrid Models
-There's a situation where a model connects multiple entities but also carries its data. I highly advise against following that path to maintain purity in your system design and control the complexity of your models. However, sometimes this approach is a necessary option to proceed with a specific implementation or a business flow. In this case, we can propose a hybrid model that can carry particular details about the relationship between two independent entities.
+There's a situation where a model connects multiple entities but also carries its data. I highly advise against following that path to maintain purity in your system design and control the complexity of your models. However, this approach is sometimes a necessary option to proceed with a specific implementation or business flow. In this case, we can propose a hybrid model that can carry particular details about the relationship between two independent entities.
A hybrid model can describe the detachment between two entities in a many-to-many relationship in a soft-delete scenario. Here's an example of a hybrid model that can occur in reality. Let's assume a group member does not want to be a part of a particular group anymore. We consider their group membership as `Deactivated` with a reason attached without actually deleting the record. Here's what it would that look like:
@@ -141,19 +141,19 @@ A hybrid model can describe the detachment between two entities in a many-to-man
-Hybrid models combine secondary models in the way they reference Primary models. They implement a relational nature in allowing multiple entities to relate to each other without exclusivity. In a non-relational data model, the referencing integrity might become a bit looser given the linear nature of that schema.
+Hybrid models combine secondary models in the way they reference Primary models. They implement a relational nature in allowing multiple entities to relate to each other without exclusivity. In a non-relational data model, the referencing integrity may become looser, given the linear nature of that schema.
#### 0.1.2.0.1 Operational Models
-Operational models mainly target the simulation aspect of any software system. Think about all primitive, complex, and exposure operations a simple scenario could require for a successful simulation to be implemented. Let's assume we are trying to solve a problem where we can simplify student registrations in some schools. The registration process will require some simulation to add these students' information into a computerized system.
+Operational models mainly target the simulation aspect of any software system. Think about all the primitive, complex, and exposure operations a simple scenario could require for a successful simulation to be implemented. Let's assume we are trying to solve a problem where we can simplify student registrations in some schools. The registration process will require some simulation to add these students' information into a computerized system.
-Operational models will handle the entire process's exposure, processing, and integration by offering services that offer APIs/UIs to enter, post, add and insert/persist students' information into some schooling systems.
+Operational models will handle the entire process's exposure, processing, and integration by offering services that offer APIs/UIs to enter, post, add, and insert/persist students' information into some schooling systems.
-The Standard focuses heavily on operational models because they represent the core of any system in terms of business flows. Operational models are also where most of the development and design resources go in any software development effort. Operational models can be broken into three main categories: Integration, Processing, and Exposure.
+The Standard focuses heavily on operational models because they represent the core of any system in terms of business flows. Operational models are also where most development and design resources go in any software development effort. Operational models can be broken into three main categories: Integration, Processing, and Exposure.
Let's talk about the operational models here.
#### 0.1.2.0.1.0 Integration Models (Brokers)
-Integration operational models' primary responsibility is to connect any existing system with external resources, which can be localized to the environment of that system like reading the current date or time or remote like calling an external API or persisting data in some database.
+Integration operational models' primary responsibility is to connect any existing system with external resources, which can be localized to the system's environment, like reading the current date or time, or remote, like calling an external API or persisting data in some database.
We call these integration models Brokers. They play a liaison role between processing operational models and external systems. Here's an example:
@@ -167,13 +167,13 @@ public partial class ApiBroker
The integration model above offers the capability to call an external API while abstracting the configuration details away from the processing operational models.
-Like any other operational model type, they don't hold data but instead, use private class members and constants to share internal data across its public and private methods. The `ApiBroker` here as a model represents a simulation of integration with an external system.
+Like any other operational model type, they don't hold data but instead, use private class members and constants to share internal data across their public and private methods. The `ApiBroker` here as a model represents a simulation of integration with an external system.
-We will discuss Brokers extensively in upcoming chapters to shed some light on the rules and guidelines in developing brokers with any external resources or systems.
+In upcoming chapters, we will discuss Brokers extensively to clarify the rules and guidelines for developing brokers with external resources or systems.
#### 0.1.2.0.1.1 Processing Models (Services)
-Processing models are the holders of all business-specific simulations. Things like student registrations, requesting a new library card, or simply retrieving some student information based on specific criteria. Processing models can be either primitive/foundational, high-order/processing, or advanced/orchestrators.
+Processing models are the holders of all business-specific simulations, such as student registrations, requesting a new library card, or retrieving student information based on specific criteria. Processing models can be either primitive/foundational, high-order/processing, or advanced/orchestrators.
Processing models, in general, either rely on integration models or self-relying like computational processing services or rely on each other.
@@ -235,14 +235,14 @@ public partial class StudentOrchestrationService : IStudentOrchestrationService
}
```
-In general, operational models are only concerned with the nature of simulation or processing of specific data carrier models; they are not concerned with holding data or retaining a status. In general, operational models are stateless in that they don't retain any of the details that went through them other than delegating logging for observability and monitoring purposes.
+In general, operational models are only concerned with the nature of simulation or processing of specific data carrier models; they are not concerned with holding data or retaining a status. Operational models are stateless in that they don't retain any details that went through them other than delegating logging for observability and monitoring purposes.
#### 0.1.2.0.1.2 Exposure Models (Exposers)
-Exposure models handle the HMI in all scenarios where humans and systems have to interact. They could be simple RESTful APIs and SDKs or just UI like in web, mobile, or desktop applications, including commandline-based systems/terminals.
+Exposure models handle the HMI in all scenarios where humans and systems interact. They could be simple RESTful APIs and SDKs or just UIs like in web, mobile, or desktop applications, including command-line-based systems/terminals.
Exposure operational models are like the integration models; they allow the outside world to interact with your system. They sit on the other end of any system and are responsible for routing every request, communication, or call to the proper operational models. Exposure models never communicate directly with integration models and don't have any configuration other than their dependencies injected through their constructors.
-Exposure models may have their own language in terms of operations; for instance, an integration model might use a language like `InsertStudent` while an exposure model for an API endpoint would use a language such as `PostStudent` to express the very same operation in an exposure context.
+Exposure models may have their own language in terms of operations; for instance, an integration model might use a language like `InsertStudent`, while an exposure model for an API endpoint would use a language like `PostStudent` to express the same operation in an exposure context.
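As a sketch of how one operation might be named at each layer (the method names below are illustrative assumptions):

```csharp
using System;
using System.Threading.Tasks;

public class Student
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

// Integration model (broker): speaks the storage technology's language.
public interface IStorageBroker
{
    ValueTask<Student> InsertStudentAsync(Student student);
}

// Processing model (service): speaks the business language.
public interface IStudentService
{
    ValueTask<Student> AddStudentAsync(Student student);
}

// Exposure model (API controller): speaks the protocol's language - for
// example, a PostStudentAsync action bound to HTTP POST that simply
// delegates to IStudentService.AddStudentAsync.
```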
Here's an example of exposure models:
@@ -262,12 +262,12 @@ public class StudentsController
}
```
-The above model exposes an API endpoint for RESTful communication to offer the capability to register students into a schooling system. We will further discuss the types of exposure models based on the context and the systems they are implemented within.
+The above model exposes an API endpoint for RESTful communication to allow students to be registered into a schooling system. We will further discuss the types of exposure models based on the context and the systems they are implemented within.
#### 0.1.2.0.2 Configuration Models
-The last type of model in any system is the configuration model. They can represent the entry point into a system, register dependencies for any system, or take the form of middleware to route URLs into their respective function within an exposure model.
+The last type of model in any system is the configuration model. It can represent the entry point into a system, register dependencies for any system, or act as middleware to route URLs into their respective functions within an exposure model.
-Configuration models usually sit at the beginning of launching a system, handling incoming and outgoing communications or underlying system operations like memory caching, thread management, etc.
+Configuration models usually appear at the beginning of a system's launch, handling incoming and outgoing communications or underlying system operations like memory caching, thread management, etc.
In a simple API application, you may see models that look like this:
@@ -288,7 +288,7 @@ As you can see from the code snippet above, the configuration model `Startup` of
The simulation aspect of software engineering is our ability to resemble the interactions to and from the models. For instance, in the grocery store example, a simulation would be the act of *selling* the item. Selling the item requires multiple modifications to the item in terms of deducting the count of the available items and reordering the items left based on the most valuable available item.
-We can describe the simulation process as illustrating the relationships between models, which are programmed as `functions`, `methods`, or `routines`; these terms all mean the same thing. If we have a software service that is responsible for items sales, a simulation process will look like this:
+We can describe the simulation process as illustrating the relationships between models, which are programmed as `functions`, `methods`, or `routines`; these terms all mean the same thing. If we have a software service that is responsible for item sales, a simulation process will look like this:
```csharp
public class SaleService
@@ -299,9 +299,9 @@ public class SaleService
In the example above, we have a model called `SaleService` that offers functionality to simulate the sales process in the real world on a model of an item. And this is how you describe everything in object-oriented programming. Everything is an object (from a model), and these objects interact with each other (simulation).
-Object interaction, in general, can be observed in three different types. A model is taking an action on another model. For instance, the `SaleService` is executing an action of `Sell` on an `Item` model. That's a model interacting with another model. In the very same example, a simulation could be something happening to the model from another model, such as the `Item` in the example above. And the last type of simulation is a model interacting with itself, such as models that self-dispose once their purpose is achieved, as they are no longer needed, so they self-destruct.
+Object interaction, in general, can be observed in three different types. The first is a model taking an action on another model; for instance, the `SaleService` executing a `Sell` action on an `Item` model. The second is something happening to a model from another model, such as the `Item` in that same example. The last type is a model interacting with itself, such as models that self-dispose once their purpose is achieved; they are no longer needed, so they self-destruct.
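A compact sketch of those three interaction styles, staying with the grocery-store assumptions (all member names are illustrative):

```csharp
using System;

public class Item
{
    public string Name { get; set; }
    public int Count { get; set; }

    // Something happening to the model from another model: its count
    // is deducted when the SaleService sells it.
    public void Deduct(int quantity) =>
        this.Count -= quantity;
}

public class SaleService
{
    // A model taking an action on another model.
    public void Sell(Item item) =>
        item.Deduct(quantity: 1);
}

// A model interacting with itself: it disposes of its own state once
// its purpose is served.
public sealed class TemporaryReceipt : IDisposable
{
    public string Content { get; set; }

    public void Dispose() =>
        this.Content = null;
}
```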
-The simulation process is the third and last aspect of software engineering. We will dive into it deeply when we talk about brokers, services, and exposers to illustrate how the industrial software's modeling and simulation process happens.
+The simulation process is the third and last aspect of software engineering. We will explore it deeply when we discuss brokers, services, and exposers to illustrate how industrial software's modeling and simulation process happens.
## 0.1.4 Summary
@@ -321,6 +321,6 @@ It's important to understand that computer software can serve multiple purposes.
-The complexity of any large system can be broken into smaller problems if the single-purpose or single-responsibility aspect is enforced for each sub-system. That's what modern software architectures would call granularity and modularization, which we will be discussing briefly throughout the architecture aspect of The Standard.
+The complexity of any large system can be broken into smaller problems if the single-purpose or single-responsibility aspect is enforced for each subsystem. Modern software architectures call this granularity and modularization, which we will discuss briefly throughout the architecture aspect of The Standard.
-[*] [Purposing, Modeling & Simulation (Part 1)](https://www.youtube.com/watch?v=Doul1gZKCuU)
+[*] [Purposing, Modeling & Simulation (Part 1)](https://www.youtube.com/watch?v=Doul1gZKCuU)
\ No newline at end of file
From dcadc7be164d6b4db9925dedd723942d3e72723c Mon Sep 17 00:00:00 2001
From: Greg Hays
Date: Wed, 17 Apr 2024 16:50:25 -0700
Subject: [PATCH 09/18] CODE RUB: Principles 0.2 Grammar Check
---
0. Introduction/0.2 Principles.md | 52 +++++++++++++++----------------
1 file changed, 26 insertions(+), 26 deletions(-)
diff --git a/0. Introduction/0.2 Principles.md b/0. Introduction/0.2 Principles.md
index 309ed10..4ecf018 100644
--- a/0. Introduction/0.2 Principles.md
+++ b/0. Introduction/0.2 Principles.md
@@ -12,41 +12,41 @@ The Standard also enforces the principles of measuring advanced engineering conc
Code written according to The Standard has to be simple. There are measures to ensure this simplicity takes place; these measures are as follows:
#### 0.2.0.0.0 Excessive Inheritance
-Any software written according to The Standard shall not have more than one level of inheritance. Over one level of inheritance will be considered excessive and prohibited except in cases of versioning for the vertical scaling of flows. Excessive inheritance has proven to be a source of confusion and difficulty in terms of readability and maintainability over the years.
+Any software written according to The Standard shall not have more than one level of inheritance. Over one level of inheritance will be considered excessive and prohibited except in versioning cases for the vertical scaling of flows. Excessive inheritance has proven to be a source of confusion and difficulty in terms of readability and maintainability over the years.
#### 0.2.0.0.1 Entanglement
##### 0.2.0.0.1.0 Horizontal Entanglement
-Building "common" components in every system with the promise to simplify the development process is another prohibited practice in Standard-compliant systems. This practice manifests itself in components with names like `Utils`, `Commons` or `Helpers`. These terminologies and what they imply in terms of false promised simplifications are not allowed. Any system built according to The Standard should be comprised of Brokers, Services, or Exposers, nothing more or less.
+Building "common" components in every system that promises to simplify development processes is another prohibited practice in Standard-compliant systems. This practice manifests itself in components with names like `Utils`, `Commons`, or `Helpers`. These terminologies and what they imply in terms of false promised simplifications are not allowed. Any system built according to The Standard should comprise Brokers, Services, or Exposers, nothing more or less.
Another example of horizontal entanglements is shared models across multiple independent flows - sharing exceptions, validation rules, or any other form of entanglement across multiple flows.
-
-
-
+
+
+
##### 0.2.0.0.1.1 Vertical Entanglement
This principle also applies to scenarios where base components are used. Unless these base components are native or external, they will not be allowed in a Standard-compliant system. Local base components create a vertical level of entanglement that harms the maintainability and readability of code. Vertical entanglements are just as harmful as `Commons` components, creating single points of failure across any system.
-Entanglements (vertical or horizontal) also prevent engineers in any system (especially newcomers) from fully understanding the depth of the system and fully own its functionality. They also prevent engineers from having the opportunity to build end-to-end flows when half of the functionality is presumably componentized for the sake of development expedition and simplicity.
+Entanglements (vertical or horizontal) also prevent engineers in any system (especially newcomers) from fully understanding the system's depth and fully owning its functionality. They also deter engineers from having the opportunity to build end-to-end flows when half of the functionality is componentized for the sake of development expedition and simplicity.
-
-
-
+
+
+
#### 0.2.0.0.2 Autonomous Components
-Every component in every system should be self-sufficient. Every component implements its own validations, tooling, and utilities in one of its dimensions with no hard dependency on any other external components except through dependency injection. This principle favors duplication over presumed simplification via code entanglement.
+Every component in every system should be self-sufficient. Every component implements its validations, tooling, and utilities in one of its dimensions with no hard dependency on any other external components except through dependency injection. This principle favors duplication over presumed simplification via code entanglement.
-Autonomous components will open up the opportunity for every engineer on every team to fully own every dependency and tool their component may need to fulfill its purpose. This may cause a bit of duplication in some of the code to open an equal opportunity for every developer to learn how to build and evolve a component fully.
+Autonomous components will open up the opportunity for every engineer on every team to fully own every dependency and tool their component may need to fulfill its purpose. This may cause some code duplication to open an equal opportunity for every developer to learn how to build and evolve a component fully.
##### 0.2.0.0.2.0 No Magic
-Autonomous components put all their routines up front, in plain sight of the engineer. No hidden routines, shared libraries, or magical extensions that require chasing references once an inevitable split of the big monolith begins to occur.
+Autonomous components put all their routines up front, in plain sight of the engineer. No hidden routines, shared libraries, or magical extensions that require chasing references once an inevitable split of the giant monolith begins to occur.
-We will treat Objects the way they are in nature: multi-dimensional components self-containerized like atoms in nature. These components perform their own validations, exception handling, tracing, security, localization, and everything else.
+We will treat Objects the way they exist in nature: multi-dimensional, self-containerized components, like atoms. These components perform their validations, exception handling, tracing, security, localization, and everything else.
@@ -57,39 +57,39 @@ We will treat Objects the way they are in nature: multi-dimensional components s
Components built according to The Standard strictly adhere to the idea of _What You See Is What You Get_ (WYSIWYG) - everything concerning the components will be on the component itself.
## 0.2.1 Rewritability
-Every system should be developed with the rewritability principle in mind. The principle dictates that our assumptions in the systems we develop have the high potential of being reexamined and probably reconsidered. Every system should be easily rewritable as a measure of adherence to forever growing and changing business requirements.
+Every system should be developed with the rewritability principle in mind. This principle dictates that our assumptions in the systems we develop have the potential to be reexamined and probably reconsidered. Every system should be easily rewritable as a measure of adherence to forever growing and changing business requirements.
-Rewritable code is code that is easy to understand, modify and fully rewrite. Rewritable code is extremely modular and autonomous; it encourages engineers to evolve it with the least effort and risk possible.
+Rewritable code is easy to understand, modify, and fully rewrite. It is extremely modular and autonomous, encouraging engineers to evolve it with the least effort and risk possible.
-Rewritable code doesn't play tricks on the reader. It should be plug-n-play - fork, clone, build and run all its tests successfully with no issues. There are no hidden dependencies, injected routines at runtime, nor unknown prerequisites.
+Rewritable code doesn't play tricks on the reader. It should be plug-and-play - fork, clone, build, and run all its tests successfully with no issues. There are no hidden dependencies, injected routines at runtime, or unknown prerequisites.
## 0.2.2 Mono-Micro
-Build monolithic systems with a modular mindset, with every flow fully independent from other flows. For instance, we may build a monolithic system with a microservice mindset, meaning that any flow can be extracted out of the system and turned into its own microservice or lambda with the least effort possible.
+Build monolithic systems with a modular mindset, with every flow fully independent from other flows. For instance, we may build a monolithic system with a microservice mindset, meaning that any flow can be extracted from the system and turned into its own microservice or lambda with the least effort possible.
-This principle goes hand in hand with the concept of autonomous components at a higher level where flows are also autonomous from their neighboring flow and their hosting system.
+This principle goes hand in hand with the concept of autonomous components at a higher level, where flows are also autonomous from their neighboring flow and their hosting system.
## 0.2.3 Level 0
-Code must be understandable by an entry-level individual in the engineering craft. The majority of engineers in our industry will always be those who are new to the craft. Our code base continues to live on based on how easy it is to understand by most engineers in the industry.
+Code must be understandable by an entry-level individual in the engineering craft. Since the majority of engineers in our industry will always be new to the craft, our code base continues to live on based on how easily most engineers in the industry can understand it.
-Level 0 engineers are our measure of success. Their ability to understand our code is our reassurance that this very code will continue to live and evolve with the next generation of engineers.
+Level 0 engineers are our measure of success. Their ability to understand our code is our reassurance that this code will continue to live and evolve with the next generation of engineers.
This principle also mandates that every engineer in the industry closely examine their code and pair with juniors in the field to see if they meet this principle.
## 0.2.4 Open Code
-Open code as a principle dictates that everything written according to The Standard should be commonly available to the public. This applies to internal architectural practices, trial libraries, and any other form of module development that doesn't allow every engineer to learn and evolve any given system. Developing internal tools that are not accessible shall inevitably harm the engineering experience for those who are trying to evolve these very tools.
+As a principle, open code dictates that everything written according to The Standard should be commonly available to the public. This applies to internal architectural practices, trial libraries, and any other form of module development that doesn't allow every engineer to learn and evolve any given system. Developing internal tools that are not accessible shall inevitably harm the engineering experience for those who are trying to evolve these very tools.
-The principle also recognizes that under extreme circumstances, such as when safety or security are threatened, or under some contractual obligation, one cannot make code, tooling, platforms, and patterns available to the public. However, making the source proprietary solely for personal or organizational gain is not permitted by The Standard.
+The principle also recognizes that under extreme circumstances, such as when safety or security are threatened or when one is under some contractual obligation, one cannot make code, tooling, platforms, and patterns available to the public. However, The Standard does not permit making the source proprietary solely for personal or organizational gain.
## 0.2.5 Airplane Mode (Cloud-Foreign)
-The Standard enforces the idea of airplane mode. Where engineers can stand up their entire infrastructure on their local machine without needing or having any network connection, this principle goes heavily against principles such as Cloud-Native applications, where a given system cannot stand and run without cloud infrastructure.
+The Standard enforces the idea of airplane mode, where engineers can stand up their entire infrastructure on their local machine without needing or having any network connection. This principle goes heavily against approaches such as Cloud-Native applications, where a given system cannot stand and run without cloud infrastructure.
The Standard also encourages its adopters to develop the proper tooling to bridge the gap between cloud infrastructural components and local components such as queues, event hubs, and other tools to make it easily testable and modifiable.
## 0.2.6 No Toasters
-The Standard shall be taught man to man, not machine to man. No stylecops or analyzers should be implemented to force people into following The Standard. It should be driven by passion in the heart and conviction in the mind. The Standard should be taught by people to people. It's meant to foster an engineering culture of open discussions, conviction and understanding.
+The Standard shall be taught man to man, not machine to man. No stylecops or analyzers should be implemented to force people into following The Standard. It should be driven by passion in the heart and conviction in the mind. The Standard should be taught from person to person. It fosters an engineering culture of open discussions, conviction, and understanding.
## 0.2.7 Pass Forward
-The Standard shall be taught at no cost as it arrived to you at no cost. It should be passed forward to the next engineer also at no cost, regardless of their financial, social, or educational status. The Standard is pure knowledge given by the selfless to the selfless. There shall be no profiteering off of it; neither shall it be a reason for someone to belittle others or make them feel less. Teaching The Standard for profit violates it and denies the beneficiary (the violator) any further guidance from The Author.
+The Standard shall be taught at no cost as it arrived to you at no cost. It should also be passed forward to the next engineer at no cost, regardless of their financial, social, or educational status. The Standard is pure knowledge given by the selfless to the selfless. There shall be no profiteering off of it; neither shall it be a reason for someone to belittle others or make them feel less. Teaching The Standard for profit violates it and denies the beneficiary (the violator) any further guidance from The Author.
## 0.2.8 All-In/All-Out
-The Standard must be fully embraced or entirely rejected. Any system that incorporates only some aspects of The Standard will not be recognized as a standardized system. Any system that continues to adhere to previous versions of The Standard will be obligated to elevate their standards to reclaim their status of standardization.
+The Standard must be fully embraced or entirely rejected. Any system incorporating only some aspects of The Standard will not be recognized as a Standardized system. Any system that continues to adhere to previous versions of The Standard will be obligated to elevate its standards to reclaim its status of standardization.
\ No newline at end of file
From 5e8f13ce4295940fa004f17e063649e8c3b94970 Mon Sep 17 00:00:00 2001
From: Greg Hays
Date: Wed, 17 Apr 2024 17:08:38 -0700
Subject: [PATCH 10/18] CODE RUB: Testimonials 0.3 Additions
---
0. Introduction/0.3 Testimonials.md | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/0. Introduction/0.3 Testimonials.md b/0. Introduction/0.3 Testimonials.md
index bf652ac..df0cfbd 100644
--- a/0. Introduction/0.3 Testimonials.md
+++ b/0. Introduction/0.3 Testimonials.md
@@ -215,3 +215,14 @@ Proper adoption of TDD. Simplification. Set a standard that is known and used in
Software Developer Microsoft, India
+
+---
+
+ "With a cumulative of over 42 years of software development, The Standard principles people first attitude has shed a bright light on the software engineering industry.
+The extremly easy approach to these concepts help not just programming newcomers, but as well can greatly beneifit us old timers of the software world.
+When I first put a MS-Dos 5.25in. floppy in a PC with no hard drives, I never in a million years thought anything can be easier to follow."
+Hassan Habib and the Standard Community are the epitome of People First always someone somewhere to help or listen
+
+ Gregory Hays
+ Business Application Development
+ Southern California, USA
From b767771cf2fb74dbef4904436b98fcb1dc2b7dc9 Mon Sep 17 00:00:00 2001
From: Greg Hays
Date: Wed, 17 Apr 2024 17:42:26 -0700
Subject: [PATCH 11/18] CODE RUB: Brokers 1 Grammar Check
---
1. Brokers/1. Brokers.md | 86 ++++++++++++++++++++--------------------
1 file changed, 43 insertions(+), 43 deletions(-)
diff --git a/1. Brokers/1. Brokers.md b/1. Brokers/1. Brokers.md
index c897f01..93daded 100644
--- a/1. Brokers/1. Brokers.md
+++ b/1. Brokers/1. Brokers.md
@@ -6,14 +6,14 @@ They are wrappers around external libraries, resources, services, or APIs to sat
Brokers, in general, are meant to be disposable and replaceable - they are built with the understanding that technology evolves and changes all the time. Therefore, at some point in the lifecycle of a given application, it will be replaced with a recent technology that gets the job done faster.
-But Brokers also ensure that your business is pluggable by abstracting away any specific external resource dependencies from what your software is trying to accomplish.
+However, brokers also ensure that your business is pluggable by abstracting away any specific external resource dependencies from what your software is trying to accomplish.
-For instance, let's say you have an API built to consume and serve data from a SQL server. At some point, you decided that a better, more economical option for your API is to rely on a NoSql technology instead. Having a broker to remove the dependency on SQL will make it much easier to integrate with NoSql with the least time and cost humanly possible.
+For instance, suppose you have an API built to consume and serve data from a SQL server. At some point, you decide that a better, more economical option for your API is to rely on a NoSQL technology instead. Having a broker to remove the dependency on SQL will make it much easier to integrate with NoSQL with the least time and cost humanly possible.
## 1.1 On The Map
-In any given application, mobile, desktop, web, or simply an API, brokers usually reside at the "tail" of any app because they are the last point of contact between our custom code and the outside world.
+In any application, mobile, desktop, web, or simply an API, brokers usually reside at the "tail" of any app because they are the last point of contact between our custom code and the outside world.
-Whether the outside world in this instance is simply local storage in memory or an independent system that resides behind an API, they all have to reside behind Brokers in any application.
+Whether the outside world is simply local storage in memory or an independent system that resides behind an API, they all have to reside behind Brokers in any application.
In the following low-level architecture for a given API - Brokers reside between our business logic and the external resource:
@@ -27,9 +27,9 @@ In the following low-level architecture for a given API - Brokers reside between
There are a few simple rules that govern the implementation of any broker - these rules are:
### 1.2.0 Implements a Local Interface
-Brokers have to satisfy a local contract. They must implement a local interface to allow the decoupling between their implementation and the services that consume them.
+Brokers must satisfy a local contract and implement a local interface to allow decoupling between their implementation and the services that consume them.
-For instance, given that we have a local contract `IStorageBroker` that requires an implementation for any given CRUD operation for a local model `Student` - the contract operation would be as follows:
+For instance, given that we have a local contract `IStorageBroker` that requires an implementation for any given CRUD operation for a local model `Student` - the contract operation would be as follows:
```csharp
public partial interface IStorageBroker
@@ -49,21 +49,21 @@ An implementation for a storage broker would be as follows:
SelectAll();
}
```
-A local contract implementation can be replaced at any point in time, from utilizing the Entity Framework as shown in the previous example to using a completely different technology like Dapper or an entirely different infrastructure like an Oracle or PostgreSQL database.
+A local contract implementation can be replaced at any point, from utilizing the Entity Framework as shown in the previous example to using a completely different technology like Dapper or an entirely different infrastructure like an Oracle or PostgreSQL database.
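For example, a second concrete implementation can satisfy the same contract with a different technology without touching the consuming services; the sketch below uses a hypothetical in-memory broker purely for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public class Student
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

public partial interface IStorageBroker
{
    ValueTask<Student> InsertStudentAsync(Student student);
    IQueryable<Student> SelectAllStudents();
}

// Hypothetical alternative implementation of the same local contract,
// backed by an in-memory list instead of Entity Framework. Swapping it
// in requires no change to the broker-neighboring services.
public partial class InMemoryStorageBroker : IStorageBroker
{
    private readonly List<Student> students = new List<Student>();

    public ValueTask<Student> InsertStudentAsync(Student student)
    {
        this.students.Add(student);

        return new ValueTask<Student>(student);
    }

    public IQueryable<Student> SelectAllStudents() =>
        this.students.AsQueryable();
}
```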
### 1.2.1 No Flow Control
-Brokers should not have any form of flow-control such as if-statements, while-loops, or switch cases - that's because flow-control code is considered to be business logic, and it fits better the services layer where business logic should reside, not the brokers.
+Brokers should not have any form of flow control, such as if statements, while loops, or switch cases. That's because flow-control code is considered business logic, and it fits better in the services layer, where business logic should reside, not the brokers.
For instance, a broker method that retrieves a list of students from a database would look something like this:
```csharp
public IQueryable SelectAllStudents() => SelectAll();
```
-A simple function that calls the native EntityFramework `DbSet` and return a local model like `Student`.
+A simple function that calls the native EntityFramework `DbSet` and returns a local model like `Student`.
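Any selection, iteration, or branching then belongs one layer up. Here's a hedged sketch of that split (the `IsActive` flag and the names are assumptions, not code from the document):

```csharp
using System;
using System.Linq;

public class Student
{
    public Guid Id { get; set; }
    public bool IsActive { get; set; }
}

// Broker contract: a pure pass-through, no flow control.
public interface IStorageBroker
{
    IQueryable<Student> SelectAllStudents();
}

// Flow control (selection, iteration, branching) lives in the service,
// never in the broker.
public class StudentService
{
    private readonly IStorageBroker storageBroker;

    public StudentService(IStorageBroker storageBroker) =>
        this.storageBroker = storageBroker;

    public IQueryable<Student> RetrieveActiveStudents() =>
        this.storageBroker.SelectAllStudents()
            .Where(student => student.IsActive);
}
```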
### 1.2.2 No Exception Handling
-Exception handling is a form of flow control. Brokers are not supposed to handle any exceptions but rather let the exception propagate to the broker-neighboring services, where these exceptions can be properly mapped and localized.
+Exception handling is a form of flow control. Brokers are not supposed to handle exceptions but rather let them propagate to the broker-neighboring services, where they can be properly mapped and localized.
### 1.2.3 Own Their Configurations
@@ -107,23 +107,23 @@ For instance, in an e-mail notification broker, input parameters for a `.Send(..
}
```
-The primitive input parameters will ensure there are no strong dependencies between the broker-neighboring services and the external models.
+The primitive input parameters will ensure no strong dependencies between the broker-neighboring services and the external models.
Even in situations where the broker is simply a point of integration between your application and an external RESTful API, it's very highly recommended that you build your native models to reflect the same JSON object sent or returned from the API instead of relying on NuGet libraries, DLLs or shared projects to achieve the same goal.
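Since the `.Send(...)` snippet sits outside this hunk, here is a hedged sketch of the idea: the broker contract accepts only primitives (or local models), so no SDK-specific types leak into the broker-neighboring services (the names below are illustrative):

```csharp
using System.Threading.Tasks;

// The contract exposes only primitive parameters - no SDK-specific
// message or client types leak to the broker-neighboring services.
public interface IEmailBroker
{
    ValueTask SendEmailAsync(string toAddress, string subject, string body);
}

public class EmailBroker : IEmailBroker
{
    public ValueTask SendEmailAsync(string toAddress, string subject, string body)
    {
        // A real implementation would translate these primitives into
        // whatever message type the underlying e-mail SDK requires.
        return ValueTask.CompletedTask;
    }
}
```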
### 1.2.5 Naming Conventions
-The contracts for the brokers shall remain as generic as possible to indicate the overall functionality of a broker; for instance, we say `IStorageBroker` instead of `ISqlStorageBroker` to indicate a particular technology or infrastructure.
+The contracts for brokers shall remain as generic as possible to indicate their overall functionality; for instance, we say `IStorageBroker` instead of `ISqlStorageBroker` to indicate a particular technology or infrastructure.
-With a single storage broker, it might be more convenient to maintain the same name as the contract. But in the case of concrete implementations of brokers, it all depends on how many brokers you have providing similar functionality. In our case, we have a concrete implementation of `IStorageBroker`, so the name would be `StorageBroker`.
+With a single storage broker, it might be more convenient to maintain the same name as the contract. However, in the case of concrete implementations of brokers, it all depends on how many brokers you have that provide similar functionality. In our case, we have a concrete implementation of `IStorageBroker`, so the name would be `StorageBroker`.
However, if your application supports multiple queues, storage, or e-mail service providers, you might need to start specifying the overall target of the component; for instance, an `IQueueBroker` would have multiple implementations such as `NotificationQueueBroker` and `OrdersQueueBroker`.
-But if the concrete implementations target the same model and business logic, then a diversion to the technology might be more befitting. In this case, for instance, `IStorageBroker`, two different concrete implementations could be `SqlStorageBroker` and `MongoStorageBroker`. This case is typical in situations where the intention is to reduce production infrastructure costs.
+However, if the concrete implementations target the same model and business logic, a naming diversion toward the technology might be more fitting. For instance, for the same `IStorageBroker` contract, two different concrete implementations could be `SqlStorageBroker` and `MongoStorageBroker`. This case is typical in situations where the intention is to reduce production infrastructure costs.
### 1.2.6 Language
Brokers speak the language of the technologies they support.
-For instance, in a storage broker, we say `SelectById` to match the SQL `Select` statement, and in a queue broker, we say `Enqueue` to match the language.
+For instance, in a storage broker, we say `SelectById` to match the SQL `Select` statement; in a queue broker, we say `Enqueue` to match the language.
-If a broker is supporting an API endpoint, then it shall follow the RESTFul semantics, such as `POST`, `GET` or `PUT`. Here's an example:
+If a broker supports an API endpoint, it shall follow RESTful semantics, such as `POST`, `GET`, or `PUT`. Here's an example:
```csharp
@@ -133,13 +133,13 @@ If a broker is supporting an API endpoint, then it shall follow the RESTFul sema
```
### 1.2.7 Up & Sideways
-Brokers cannot call other brokers; because brokers are the first point of abstraction, they require no additional abstractions or dependencies other than a configuration access model.
+Brokers cannot call other brokers because they are the first point of abstraction and require no additional abstractions or dependencies other than a configuration access model.
-Brokers can't also have services as dependencies as the flow in any given system shall come from the services to the brokers and not the other way around.
+Brokers also can't have services as dependencies, as the flow in any given system shall come from the services to the brokers and not the other way around.
For instance, even when a microservice has to subscribe to a queue, brokers will pass forward a listener method to process incoming events but not call the services that provide the processing logic.
-The general rule here then would be that only services can call brokers; while brokers can only call external native dependencies.
+The general rule here is that only services can call brokers, while brokers can only call external native dependencies.
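A minimal sketch of that listener hand-off; the queue client is faked as an in-process handler list, and all names are assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// The broker only accepts a listener; it never calls into services itself.
public interface IQueueBroker
{
    void ListenToStudentEvents(Func<string, ValueTask> eventHandler);
}

public class QueueBroker : IQueueBroker
{
    // Stand-in for a real queue client subscription.
    private readonly List<Func<string, ValueTask>> handlers =
        new List<Func<string, ValueTask>>();

    public void ListenToStudentEvents(Func<string, ValueTask> eventHandler) =>
        this.handlers.Add(eventHandler);
}

public class StudentEventService
{
    private readonly IQueueBroker queueBroker;

    public StudentEventService(IQueueBroker queueBroker) =>
        this.queueBroker = queueBroker;

    // The service passes its own processing logic forward to the broker.
    public void ListenToStudentEvents() =>
        this.queueBroker.ListenToStudentEvents(this.ProcessStudentEventAsync);

    private ValueTask ProcessStudentEventAsync(string message)
    {
        Console.WriteLine($"Processing student event: {message}");

        return default;
    }
}
```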
## 1.3 Organization
Brokers supporting multiple entities, such as Storage brokers, should leverage partial classes to break down the responsibilities per entity.
@@ -155,9 +155,9 @@ For instance, if we have a storage broker that provides all CRUD operations for
The primary purpose of this particular organization, leveraging partial classes, is to separate the concern for each entity to a finer level, which should make the maintainability of the software much higher.
-But broker file and folder naming conventions strictly focus on the plurality of the entities they support and the singularity for the overall resource supported.
+Broker file and folder naming conventions strictly focus on the plurality of the entities they support and the singularity of the overall resource supported.
-For instance, we say `IStorageBroker.Students.cs`. And we also say `IEmailBroker` or `IQueueBroker.Notifications.cs` - singular for the resource and plural entities.
+For instance, we say `IStorageBroker.Students.cs`. We also say `IEmailBroker` or `IQueueBroker.Notifications.cs` - singular for the resource and plural entities.
The same concept applies to the folders or namespaces containing these brokers.
@@ -179,34 +179,34 @@ namespace OtripleS.Web.Api.Brokers.Queues
```
## 1.4 Broker Types
-In most of the applications built today, some common Brokers are usually needed to get an enterprise application up and running - some of these are Storage, Time, APIs, Logging, and Queues.
+In most applications built today, some common Brokers are usually needed to get an enterprise application up and running - some of these are Storage, Time, APIs, Logging, and Queues.
-Some brokers interact with existing resources on the system, such as *time* to allow broker-neighboring services to treat *time* as a dependency and control how a particular service would behave based on the value of time at any point in the past, present or the future.
+Some brokers interact with existing system resources, such as *time*, to allow broker-neighboring services to treat time as a dependency and control how a particular service would behave based on the value of *time* at any point in the past, present, or future.
### 1.4.0 Entity Brokers
-Entity brokers provide integration points with external resources that the system needs to fulfill business requirements.
+Entity brokers provide integration points with the system's external resources to fulfill business requirements.
For instance, entity brokers integrate with storage, providing capabilities to store or retrieve records from a database.
Entity brokers are also like queue brokers, providing a point of integration to push messages to a queue for other services to consume and process to fulfill their business logic.
-Broker-neighboring services can only call entity brokers because they require a level of validation on the data they receive or provide before proceeding any further.
+Broker-neighboring services can only call entity brokers because they require a level of validation on the data they receive or provide before proceeding.
### 1.4.1 Support Brokers
-Support brokers are general purpose brokers, they provide the functionality to support services, but they have no characteristic that makes them different from any other system.
+Support brokers are general-purpose brokers. They provide the functionality to support services, but they have no characteristics that make them different from any other system.
-A good example of support brokers is the `DateTimeBroker` - a broker specifically made to abstract away the business layer's strong dependency on the system date time.
+An excellent example of a support broker is the `DateTimeBroker`, a broker specifically designed to abstract away the business layer's strong dependency on the system date time.
-Time brokers don't target any specific entity, and they are almost the same across many systems.
+Time brokers don't target any specific entity; they are almost the same across many systems.
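A sketch of such a support broker; this shape mirrors common community examples, but treat the exact names as assumptions:

```csharp
using System;

public interface IDateTimeBroker
{
    DateTimeOffset GetCurrentDateTimeOffset();
}

// Support broker: abstracts the system clock so services can treat
// time as an injectable, testable dependency.
public class DateTimeBroker : IDateTimeBroker
{
    public DateTimeOffset GetCurrentDateTimeOffset() =>
        DateTimeOffset.UtcNow;
}
```

In unit tests, `IDateTimeBroker` can then be mocked to pin the current time to any value in the past, present, or future.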
Another example of support brokers is the `LoggingBroker` - they provide data to logging and monitoring systems to enable the system's engineers to visualize the overall flow of data across the system and be notified in case any issues occur.
-Support Brokers may be called across the entire business layer: on foundation, processing, orchestration, coordination, management, or aggregation services, unlike Entity Brokers. That's because logging brokers are required as a supporting component in the system to provide all the capabilities needed for services to log their errors or calculate a date or any other supporting functionality.
+Unlike Entity Brokers, Support Brokers may be called across the entire business layer: foundation, processing, orchestration, coordination, management, or aggregation services. That's because support brokers such as logging or time brokers are required as supporting components across the system so services can log their errors, calculate a date, or perform any other supporting functionality.
You can find real-world examples of brokers in the OtripleS project [here](https://github.com/hassanhabib/OtripleS/tree/master/OtripleS.Web.Api/Brokers).
## 1.5 Implementation
-Here's a real-life implementation of a full storage broker for all CRUD operations for `Student` entity:
+Here's a real-life implementation of a complete storage broker for all CRUD operations for the `Student` entity:
###### For IStorageBroker.cs:
```csharp
@@ -334,38 +334,38 @@ namespace OtripleS.Web.Api.Brokers.Storages
```
## 1.6 Summary
-Brokers are the first layer of abstraction between your business logic and the outside world. But they are not the only layer of abstraction; because there will still be a few native models that leak through your brokers to your broker-neighboring services. It is natural to avoid doing any mappings outside the realm of logic; in our case, here, the foundation services.
+Brokers are the first layer of abstraction between your business logic and the outside world. But they are not the only layer of abstraction because a few native models will still leak through your brokers to your broker-neighboring services. It is natural to avoid doing any mappings outside the realm of logic, in our case, the foundation services.
-For instance, in a storage broker, regardless of ORM used, some native exceptions from your ORM (EntityFramework, for example) will occur, such as `DbUpdateException` or `SqlException`. In that case, we need another layer of abstraction to play the role of a mapper between these exceptions and our core logic to convert them into local exception models.
+For instance, in a storage broker, regardless of the ORM used, some native exceptions from your ORM (EntityFramework, for example) will occur, such as `DbUpdateException` or `SqlException`. In that case, we need another layer of abstraction to act as a mapper between these exceptions and our core logic to convert them into local exception models.
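Here's a hedged sketch of that mapping layer; the local exception type, method names, and the use of the Entity Framework Core package for `DbUpdateException` are illustrative assumptions rather than the document's exact code:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class Student
{
    public Guid Id { get; set; }
}

public interface IStorageBroker
{
    ValueTask<Student> InsertStudentAsync(Student student);
}

// Hypothetical local exception model owned by the core logic.
public class FailedStudentStorageException : Exception
{
    public FailedStudentStorageException(Exception innerException)
        : base("Failed student storage error occurred, contact support.", innerException)
    { }
}

public class StudentService
{
    private readonly IStorageBroker storageBroker;

    public StudentService(IStorageBroker storageBroker) =>
        this.storageBroker = storageBroker;

    // The foundation service maps native broker exceptions into local ones.
    public async ValueTask<Student> AddStudentAsync(Student student)
    {
        try
        {
            return await this.storageBroker.InsertStudentAsync(student);
        }
        catch (DbUpdateException dbUpdateException)
        {
            throw new FailedStudentStorageException(dbUpdateException);
        }
    }
}
```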
-This responsibility lies in the hands of the broker-neighboring services. I also call them foundation services, these services are the last point of abstraction before your core logic that consists of local models and contracts.
+This responsibility lies in the hands of the broker-neighboring services. I also call them foundation services; these services are the last point of abstraction before your core logic, which consists of local models and contracts.
## 1.7 FAQs
-Over time, some common questions arose from the engineers I worked with throughout my career. Since some of these questions reoccurred on several occasions, I thought it might be helpful to aggregate them here for everyone to learn about other perspectives around brokers.
+Over time, some common questions arose from the engineers I worked with throughout my career. Since some of these questions reoccurred on several occasions, it might be helpful to aggregate them here so everyone can learn about other perspectives on brokers.
#### 1.7.0 Is the Brokers Pattern the same as the Repository Pattern?
-Not exactly, at least from an operational standpoint, brokers seem to be more generic than repositories.
+Not exactly. From an operational standpoint, brokers seem to be more generic than repositories.
-Repositories usually target storage-like operations, mainly towards databases. But brokers can be an integration point with any external dependency such as e-mail services, queues, and other APIs.
+Repositories usually target storage-like operations, mainly towards databases. However, brokers can be an integration point for any external dependency, such as e-mail services, queues, and other APIs.
A more similar pattern for brokers is the Unit of Work pattern. It mainly focuses on the overall operation without having to tie the definition or the name with any particular operation.
-All these patterns, in general, try to implement the same SOLID principles: separation of concern, dependency injection, and single responsibility.
+In general, all these patterns try to implement the same SOLID principles: separation of concern, dependency injection, and single responsibility.
But because SOLID are principles and not exact guidelines, it's expected to see all different implementations and patterns to achieve that principle.
#### 1.7.1 Why can't the brokers implement a contract for methods that return an interface instead of a concrete model?
That would be an ideal situation, but that would also require brokers to do a conversion or mapping between the native models returned from the external resource SDKs or APIs and the internal model that adheres to the local contract.
-Doing that on the broker level will require pushing business logic into that realm, which is entirely outside that component's purpose.
+Doing that on the broker level will require introducing business logic into that realm, which is outside the purpose of that component.
-We define business logic code as any intended sequential, selective, or iteration code. Brokers do not get unit tested because they have no business logic. They may be a part of an acceptance or an integration test but certainly not a part of unit-level tests - simply because they don't contain any business logic in them.
+We define business logic code as any intended sequential, selective, or iteration code. Brokers are not unit-tested because they lack business logic. They may be part of an acceptance or integration test but certainly not part of unit-level tests simply because they do not contain business logic.
#### 1.7.2 If brokers were a layer of abstraction from the business logic, why do we allow external exceptions to leak through them onto the services layer?
-Brokers are only the *first* layer of abstraction, but not the only one. The broker-neighboring services are responsible for converting the native exceptions occurring in a broker into a more local exception model that can be handled and processed internally within the business logic realm.
+Brokers are only the *first* layer of abstraction, but not the only one. Broker-neighboring services are responsible for converting the native exceptions occurring in a broker into a more local exception model that can be handled and processed internally within the business logic realm.
-Business logic emerges in the processing, orchestration, coordination, and aggregation layers where all the exceptions, all the returned models, and all operations are local to the system.
+Business logic emerges in the processing, orchestration, coordination, and aggregation layers, where all the exceptions, returned models, and operations are local to the system.
#### 1.7.3 Why do we use partial classes for brokers who handle multiple entities?
Since brokers must own their configurations, it makes more sense to partialize when possible to avoid reconfiguring every storage broker for each entity.
@@ -373,11 +373,11 @@ Since brokers must own their configurations, it makes more sense to partialize w
Partial classes are a feature in the C# language, but it should be possible to implement the same behavior through inheritance in other programming languages.
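A small sketch of that partialization, with placeholder entities and stand-in method bodies; each `partial` chunk would live in its own file, such as `StorageBroker.cs`, `StorageBroker.Students.cs`, and `StorageBroker.Teachers.cs`:

```csharp
using System;
using System.Linq;

public class Student { public Guid Id { get; set; } }
public class Teacher { public Guid Id { get; set; } }

// StorageBroker.cs - the configuration is owned once, here.
public partial class StorageBroker
{
    private readonly string connectionString;

    public StorageBroker(string connectionString) =>
        this.connectionString = connectionString;
}

// StorageBroker.Students.cs - Student-specific operations.
public partial class StorageBroker
{
    public IQueryable<Student> SelectAllStudents() =>
        Enumerable.Empty<Student>().AsQueryable();
}

// StorageBroker.Teachers.cs - Teacher-specific operations.
public partial class StorageBroker
{
    public IQueryable<Teacher> SelectAllTeachers() =>
        Enumerable.Empty<Teacher>().AsQueryable();
}
```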
#### 1.7.4 Are brokers the same as providers (Provider Pattern)?
-No. Providers blur the line between services (business logic) and brokers (integration layer) - brokers are targeting particular components within the system that are disposable. Providers seem to include more than just that.
+No. Providers blur the line between services (business logic) and brokers (integration layer). Brokers target particular disposable components within the system, but providers include more than just that.
[*] [Implementing Abstract Components (Brokers)](https://www.youtube.com/watch?v=6NlgSskQXSo)
[*] [Implementing Abstract Components (Part 2)](https://www.youtube.com/watch?v=d1cqEKMBmno)
-[*] [Generating Model Migrations w/ EntityFramework](https://www.youtube.com/watch?v=6sD5zz9BuOk)
+[*] [Generating Model Migrations w/ EntityFramework](https://www.youtube.com/watch?v=6sD5zz9BuOk)
\ No newline at end of file
From 938094c587435af08a6a0ca22b7272a43a6abb32 Mon Sep 17 00:00:00 2001
From: Greg Hays
Date: Wed, 17 Apr 2024 17:57:17 -0700
Subject: [PATCH 12/18] CODE RUB: SPAL 1.8 Grammar Check
---
1. Brokers/1.8 Broker Provider Abstraction.md | 34 +++++++++----------
1 file changed, 17 insertions(+), 17 deletions(-)
diff --git a/1. Brokers/1.8 Broker Provider Abstraction.md b/1. Brokers/1.8 Broker Provider Abstraction.md
index 853fdd4..5f3c77c 100644
--- a/1. Brokers/1.8 Broker Provider Abstraction.md
+++ b/1. Brokers/1.8 Broker Provider Abstraction.md
@@ -1,17 +1,17 @@
# 1.8 Standardized Provider Abstraction Libraries (SPAL)
-This special chapter will discuss the providers brokers may rely on to execute a certain operation. Most Brokers rely on external libraries or routines that are not owned by the engineers developing the system. These external libraries may or may not follow best practices in terms of abstraction and testing.
+This particular chapter will discuss the providers brokers may rely on to execute a certain operation. Most Brokers rely on external libraries or routines not owned by the engineers developing the system. These external libraries may or may not follow best practices regarding abstraction and testing.
-The Standard mandates relying on standardized providers for systems design and development. These standardized providers must have the following charactristics:
+The Standard mandates relying on standardized providers for systems design and development. These standardized providers must have the following characteristics:
## 1.8.0 Extensibility
-Standardized provider abstraction libraries must be extensidble to support more external providers. For instnace, a library that supports communicating with a database to SQL Server, must be extensible enough to support communicating with MongoDB or MariaDb or any other providers without any additional costs from the consumer of these libraries.
+Standardized provider abstraction libraries must be extensible to support more external providers. For example, a library that supports communicating with a SQL Server database must be extensible enough to support communicating with MongoDB, MariaDB, or any other provider without any additional cost to the consumers of these libraries.
### 1.8.0.0 Configurability
-In order for Standardized libraries to be usable with several providers, it must allow engineers to configure them to target a particular provider local or remote. For instance, provider abstraction libraries can be configured in the following fashion:
+For Standardized libraries to be usable with several providers, they must allow engineers to configure them to target a particular provider, local or remote. For instance, provider abstraction libraries can be configured in the following fashion:
```csharp
public class StorageBroker
@@ -28,7 +28,7 @@ public class StorageBroker
}
```
-Another example for using a different provider would be:
+Another example of using a different provider would be:
```csharp
public class StorageBroker
@@ -46,9 +46,9 @@ public class StorageBroker
```
## 1.8.1 Distributability
-Abstraction libraries must allow several engineers to publish their own extensions of the library. The library does not need to have implementations of all providers in one binary. Instead, these libraries must provide an interface or a contract that all other extensions need to implement to support a certain provider.
+Abstraction libraries must allow several engineers to publish their extensions of the library. The library does not need to have implementations of all providers in one binary. Instead, these libraries must provide an interface or a contract that all other extensions must implement to support a specific provider.
-For instance, Let's assume we have the core standardized contract `Standard.Storages.Core`. We may publish a library `Standard.Storages.Sql` library. Anyone else can also publish `Standard.Storages.MongoDb` to support the very same interface. An interface would look something like this:
+For instance, let's assume we have the core standardized contract `Standard.Storages.Core`. We may publish a library called `Standard.Storages.Sql`. Anyone else can also publish `Standard.Storages.MongoDb` to support the same interface. An interface would look something like this:
```csharp
public interface IStorageProvider
@@ -60,12 +60,12 @@ public interface IStorageProvider
void Configure(Options options);
}
```
-This contract capabilities must be the bare minimum any provider must be able to provide. However, the additional options in the provider extension may expose more capabilities that may or may not exist in other libraries.
+This contract's capabilities must be the bare minimum that any provider must be able to offer. However, the additional options in the provider extension may expose more capabilities that may or may not exist in other libraries.
## 1.8.2 External Mockability (Cloud-Foreign)
-Standardized provider libraries must allow communications with mocked local phantom APIs. For instance, if a system requires a communication with a queue or an event bus in the cloud, the provider library abstracting that technology must allow a local connection for Acceptance Testing and Airplane-Mode runs which we talked about earlier as Cloud Foriegn Principle.
+Standardized provider libraries must allow communications with mocked local phantom APIs. For instance, if a system requires communication with a queue or an event bus in the cloud, the provider library abstracting that technology must allow a local connection for Acceptance Testing and Airplane-Mode runs, which we discussed earlier as the Cloud-Foreign principle.
-External Mockability may rely on other external library that implements a patterns such as PACT to create phantom or fake external API instance running on the local machine or network. Here's an example:
+External Mockability may rely on other external libraries that implement patterns such as PACT to create phantom or fake external API instances running on the local machine or network. Here's an example:
```csharp
@@ -82,17 +82,17 @@ public class EventBroker
}
}
```
-In the above snippet, the `TargetServerType` can be either `Remote` or `Local` as mandatory options but the engineers developing the library may add other options if they so choose.
+In the above snippet, the `TargetServerType` can be either `Remote` or `Local` as mandatory options, but the engineers developing the library may add other options if they so choose.
-Abstract provider libraries must mimic the exact same behavior of their providers. For instance, in a queue-listening scenario, these libraries must expose an API that supports eventing for incoming messages even if these messages were local not from any external service such as the cloud.
+Abstract provider libraries must mimic the exact behavior of their providers. For instance, in a queue-listening scenario, these libraries must expose an API that supports eventing for incoming messages, even if they are local and not from an external service such as the cloud.
-It is also acceptable to have the option to support local intranet networks and governed networks that are not connected to the public internet in certain scenarios using these very same libraries.
+It is also acceptable to have the option to support local intranet networks and governed networks that are not connected to the public internet in specific scenarios using these very same libraries.
-Standardized provider abstraction libraries are sub-systems that are also required to have their own Brokers, Services and Exposure layers according to The Standard. These libraries will further simplify the development of customer-facing systems with well-defined exceptions to handle, expectations and also simpler modifications since they are open-source and Standard-Compliant.
+Standardized provider abstraction libraries are subsystems that must also have their own Brokers, Services, and Exposure layers according to The Standard. These libraries will further simplify the development of customer-facing systems, with well-defined exceptions to handle, clear expectations, and simpler modifications, since they are open-source and Standard-Compliant.
## 1.8.3 Local to Global
-Engineers may develop their own local provider abstraction libraries in the same solution assuming that they couldn't find an existing effort to support their needs. By doing so, these engineers are encouraged to open-source and publish that abstraction project to support other engineers in The Standard Community who may have the same needs.
+Engineers may develop their local provider abstraction libraries using the same solution, assuming they need help finding an existing effort to support their needs. By doing so, these engineers are encouraged to open-source and publish that abstraction project to support other engineers in The Standard Community who may have the exact needs.
-This practice encourages engineers everywhere to create collective effort and hive-mind pattern to solve a problem once and share it with the rest of The Standard Engineering Community for further enhancement and support.
+This practice encourages engineers everywhere to create a collective effort and hive-mind patterns to solve a problem once and share it with the rest of The Standard Engineering Community for further enhancement and support.
-Standard-Compliant edge (customer-facing) systems should no longer add any libraries that are not Standard-Compliant to their APIs, Desktop or Web Applications or any other systems.
+Standard-compliant edge (customer-facing) systems should no longer add any non-standard libraries to their APIs, Desktop or Web Applications, or any other systems.
\ No newline at end of file
From 8102fcb16ee132f956c53d69c0a612a86e75cccb Mon Sep 17 00:00:00 2001
From: Greg Hays
Date: Wed, 17 Apr 2024 18:16:24 -0700
Subject: [PATCH 13/18] CODE RUB: Exposers 3 Grammer Check
---
3. Exposers/3. Exposers.md | 34 +++++++++++++++++-----------------
1 file changed, 17 insertions(+), 17 deletions(-)
diff --git a/3. Exposers/3. Exposers.md b/3. Exposers/3. Exposers.md
index 8a3482f..6435cff 100644
--- a/3. Exposers/3. Exposers.md
+++ b/3. Exposers/3. Exposers.md
@@ -1,9 +1,9 @@
# 3 Exposers
## 3.0 Introduction
-Exposers are disposable components in any system that has the single responsibility of exposing your core business logic functionality by mapping its responses to a certain protocol. For instance, in RESTful communications, an API controller would be responsible for returning a `200` code for a successful response. The same thing applies to other protocols such as gRPC or SOAP or any other protocol of communication between distributed systems.
+Exposers are disposable components in any system that expose your core business logic functionality by mapping its responses to a specific protocol. For instance, in RESTful communications, an API controller would be responsible for returning a `200` code for a successful response. The same applies to other protocols, such as gRPC, SOAP, or any other protocol of communication between distributed systems.
-Exposer components are similar to Brokers. They are the last point of contact between the core business logic and the outside world. They are built with the intent that they will be detached from the current system at some point in time to allow the very same core logic to integrate with modern systems, protocols or interfaces.
+Exposer components are similar to Brokers. They are the last point of contact between the core business logic and the outside world. They are built with the intent that they will be detached from the current system at some point in time, allowing the same core logic to integrate with modern systems, protocols, or interfaces.
@@ -12,36 +12,36 @@ Exposer components are similar to Brokers. They are the last point of contact be
## 3.0.0 Purpose
-In general, exposure components main responsibility is to allow someone or something to interact with your business logic. In that core purpose a precise mapping bit by bit to every possible response from your core business logic should be communicated cautiously with the consumer to that logic. I say cautiously because sometimes certain internal issues in the system are not required to be exposed to the outside world. This mapping of responses can usually be a zero effort as the protocol and the language your code business logic communicate are the same such as the case with libraries produced to run on the system that use the same technologies or programming languages.
+In general, the primary responsibility of exposure components is to allow someone or something to interact with your business logic. For that core purpose, a precise mapping bit by bit to every possible response from your core business logic should be communicated cautiously with the consumer to that logic. I say cautiously because sometimes, specific internal issues in the system are not required to be exposed to the outside world. This mapping of responses can usually be a zero effort as the protocol and the language your code business logic communicates are the same as with libraries produced to run on the system that uses the same technologies or programming languages.
-But there are occassions where the outside world stateless protocol doesn't necessarily match the same value of a response. In which case it becomes an exposer component responsibility to make a successful mapping both ways in and out of the system. API controllers are a great example of that. They will communicate a `4xx` issue when there's a validation exception of some type and return a deserialized JSON value if the communication was successful. But there are also more details around problem details, error codes and other levels of mapping and communication that we will discuss in upcoming chapters within this section.
+However, there are occasions where the outside world stateless protocol doesn't necessarily match the exact value of a response. In this case, it becomes an exposer component's responsibility to make a successful mapping both ways in and out of the system. API controllers are a great example of that. They will communicate a `4xx` issue when there's a validation exception of some type and return a deserialized JSON value if the communication is successful. However, there are also more details about problem details, error codes, and other levels of mapping and communication that we will discuss in upcoming chapters within this section.
### 3.0.0.0 Pure Mapping
-The most important aspect of exposure components is that they are not allowed to communicate with brokers of any type. And they are not allowed to contain any form of business logic within them. By business logic here I mean no sequence of routine calls, no iteration or selection/decision making. The same way it is with brokers, they only link an existing realm with the outside realm to achieve a certain value.
+The most important aspect of exposure components is that they are not allowed to communicate with brokers of any type. They are also not allowed to contain any form of business logic within them. By business logic here, I mean no sequence of routine calls, no iteration, or selection/decision-making. In the same way, it is with brokers only link an existing realm with the outside realm to achieve a specific value.
## 3.0.1 Types of Exposure Components
-Exposure components have three different types. Which are either communication protocols, user interfaces or simply an IO routine. Let's talk about those breifly.
+Exposure components are of three types: communication protocols, user interfaces, and IO routines. Let's discuss those briefly.
### 3.0.1.0 Communication Protocols
-An exposure component that is a communication protocol can vary from simple RESTful APIs, to SOAP communication or gRPC. They can also be a simple client in a library where consumers would just install the library in their projects and consume your core logic through the client APIs. These examples are all of the same type of exposure components.
+An exposure component that is a communication protocol can vary from simple RESTful APIs to SOAP communication or gRPC. It can also be a simple client in a library, where consumers would just install the library in their projects and consume your core logic through the client APIs. These examples are all of the same type of exposure components.
-The differentiator here is that a communication protocol is usually event-based. Triggered by an incoming communication and treated with a response of any kind. Communication protocols are usually for system-to-system integrations but they can be accessible and understandble by humans for testing and debugging purposes.
+The differentiator here is that a communication protocol is usually event-based. It is triggered by an incoming communication and treated with a response of any kind. Communication protocols are usually for system-to-system integrations, but they can be accessible and understandable by humans for testing and debugging purposes.
### 3.0.1.1 User Interfaces
-Another type of exposer components are user interfaces. This can vary from Web, mobile or desktop applications including simple command lines. They mainly target end-users for communication but can be automated by other systems. Especially with command line user interfaces. In this day and age, user interfaces can also include virtual and augmented realities, metaverses and any other form of software.
+Another type of exposure component is the user interface. This can vary from Web, mobile, or desktop applications to simple command lines. User interfaces mainly target end-users for communication but can be automated by other systems, especially with command-line interfaces. In this day and age, user interfaces can also include virtual and augmented realities, metaverses, and any other form of software.
-There are occasions where Human-Machine-Interfaces (HMI) can also fall into that level of exposure components. For instance, the buttons on a cellphone, keyboards we use everyday and any form of hardware that can interact directly with core business logic interfaces as an exposure component. The same theory applies to the Internet of Things (IoT) components and many others where a human has to utilize a component to leverage a certain capability to their own advantage in anyway.
+There are occasions where Human-Machine-Interfaces (HMI) can also fall into that level of exposure components. For instance, the buttons on a cellphone, keyboards we use daily, and any form of hardware that can interact directly with core business logic interfaces as an exposure component. The same theory applies to the Internet of Things (IoT) components and many others where a human has to utilize a component to leverage a specific capability to their advantage.
### 3.0.1.2 I/O Components
-Some exposure components are not necessarily a system interfacing with another system. Neither they are purposed to communicate with humans. They are daemons or IO based components that do something in the background without a trigger. usually these components are time-based and they may leverage existing protocols or just simply interface directly with the core business logic which are both viable options.
+Some exposure components are not necessarily systems interfacing with another system, and they are not purposely designed to communicate with humans. They are daemons or IO-based components that do something in the background without a trigger. Usually, these components are time-based, and they may leverage existing protocols or interface directly with the core business logic, both of which are viable options.
## 3.0.2 Single Point of Contact
-Exposure components are only allowed to communicate with one and only one service. Integrating with multiple services would turn an exposure component into either orchestration or aggregation services which are both not allowed to exist as core logic in that realm of exposure.
+Exposure components are only allowed to communicate with one and only one service. Integrating with multiple services would turn an exposure component into either orchestration or aggregation services, which are not allowed to exist as core logic in that realm of exposure.
-The single point of contact rule also ensures the ease of disposability of the exposure component itself. It ensures the integration is simple and single-purposed enough with controlled dependencies (only one) that it can be rewired to virtually any protocol at any point in time with the least cost possible.
+The single point of contact rule also ensures the ease of disposability of the exposure component itself. It ensures the integration is simple and single-purposed enough with controlled dependencies (only one) that it can be rewired to virtually any protocol at any point with the least cost possible.
## 3.0.3 Examples
-Let's take API controllers as an example for a real-world exposure component in any given system.
+Let's take API controllers as an example of a real-world exposure component in any system.
```csharp
@@ -75,10 +75,10 @@ public async ValueTask> PostStudentAsync(Student student)
}
```
-The code snippet above is for an API method that `POST` a student model into the core business logic of a schooling system (OtripleS). In a technology like ASP.NET, controllers take care of handling mapping incoming JSON request into the `Student` model so the controller can utilize that model with an integrated system.
+The code snippet above is for an API method that `POST` a student model into the core business logic of a schooling system (OtripleS). In a technology like ASP.NET, controllers handle mapping incoming JSON requests into the `Student` model so that the controller can utilize that model with an integrated system.
-However, you will also see the controller code tries to map every possible categorical exception into it's respective REST protocol. This is just a simple snippet to show what an exposure component may look like. But we will talk more about the rules and conditions for controllers in the next chapter in The Standard.
+However, you will also see that the controller code tries to map every possible categorical exception into its respective REST protocol. This is just a simple snippet to show what an exposure component may look like. But we will talk more about the rules and conditions for controllers in the next chapter of The Standard.
## 3.0.4 Summary
-In summary, exposure components are very thin layer that doesn't contain any intelligence or logic in it. it is not meant to orchestrate, or call multiple core business logic services. And it only focuses on the duplex mapping aspect of the communication between one system and another.
+In summary, exposure components are very thin layers that don't contain any intelligence or logic. They are not meant to orchestrate or call multiple core business logic services. They only focus on the duplex mapping aspect of communication between one system and another.
\ No newline at end of file
From 6f65c4245e97c7d907335d0c69d7ce378533d2e8 Mon Sep 17 00:00:00 2001
From: Greg Hays
Date: Thu, 18 Apr 2024 11:39:53 -0700
Subject: [PATCH 14/18] CODE RUB: Communication Protocols 3.1 Grammer Check
---
.../3.1 Communication Protocols.md | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/3. Exposers/3.1 Communication Protocols/3.1 Communication Protocols.md b/3. Exposers/3.1 Communication Protocols/3.1 Communication Protocols.md
index e71ac9f..ab93b7c 100644
--- a/3. Exposers/3.1 Communication Protocols/3.1 Communication Protocols.md
+++ b/3. Exposers/3.1 Communication Protocols/3.1 Communication Protocols.md
@@ -1,7 +1,7 @@
# 3.1 Communication Protocols
## 3.1.0 Introduction
-In the exposure realm, one of the most common methodologies to build a communication structure between several systems is to use a communication protocol. These protocols have evolved over the years from SOAP to REST to many other communication protocols and principles that manifested their own technologies to expose APIs to distributed systems.
+In the exposure realm, one of the most common methodologies for building a communication structure between several systems is using a communication protocol. These protocols have evolved over the years from SOAP to REST to many other communication protocols and principles that manifested their technologies to expose APIs to distributed systems.
In the .NET world, technology has evolved with the evolution of architecture from SOA with WCF to Microservices with REST. The evolution continues, but the principles change less often. In these upcoming chapters, we will discuss the most common communication protocols with a standardized way to implement them for enterprise-level applications.
@@ -9,27 +9,27 @@ In the .NET world, technology has evolved with the evolution of architecture fro
Communication protocols are required to accomplish two things when integrating with core business logic. Results communication and Error reporting. Let's talk about those briefly:
#### 3.1.0.0.0 Results Communication
-Any communication protocol must satisfy the principle of returning a core business logic result. This result can be serialized into some unified language like `JSON` or be communicated as is. In the case of API libraries, there is usually no need to serialize and deserialize data. But that comes with the limitation that only technologies that integrate with these libraries can benefit from it.
+Any communication protocol must satisfy the principle of returning a core business logic result. This result can be serialized into a unified language like `JSON` or communicated as is. In the case of API libraries, there is usually no need to serialize and deserialize data. However, that comes with the limitation that only technologies that integrate with these libraries can benefit from it.
Communicating results may also be encapsulated with a status of some kind. In the case of RESTful API communications, a `200` code can accompany the returned serialized `JSON` result. These codes allow the consumers to understand the next course of action. Some `2xx` results may require a delayed action if the response is just `Accepted` but not necessarily `Created`.
#### 3.1.0.0.1 Error Reports
-The core business logic should provide a detailed report of all the validation errors in a particular request. The communication protocols' responsibility is to represent these error reports in their original form or serialize the report in a language easily deserialized and convertible back into the Exception original form on the client side.
+The core business logic should provide a detailed report of all the validation errors in a particular request. The communication protocols' responsibility is to represent these error reports in their original form or serialize them in a language easily deserialized and convertible back into the Exception original form on the client side.
-Serialized error reports shall also have their own codes, so the client knows the next course of action. We recommend following a standardized way of communicating errors with documentation, preferably to help guide consumers to develop the best clients for these APIs.
+Serialized error reports shall also have their own codes so the client knows the next course of action. We recommend following a standardized way of communicating errors with documentation, preferably to help guide consumers in developing the best clients for these APIs.
### 3.1.0.1 Common Types
Let's explore some of the most common types of communication protocols in this section.
#### 3.1.0.1.0 REST
-REST is a Representational state transfer protocol with certain constraints that explicitly define the form of communication, error reporting, and its very stateless nature. At the time of authoring this Standard, RESTful APIs were the most common form of communication between distributed systems. RESTful APIs are technology agnostic. Any technology or a programming language can implement them, and they allow these technologies to communicate with each other statelessly without any hard dependency on the server or the client's choice of technology.
+REST is a Representational state transfer protocol with certain constraints that explicitly define the form of communication, error reporting, and its very stateless nature. When this Standard was authored, RESTful APIs were the most common form of communication between distributed systems. RESTful APIs are technology-agnostic. Any technology or a programming language can implement them, and they allow these technologies to communicate with each other statelessly without any hard dependency on the server or the client's choice of technology.
#### 3.1.0.1.1 Libraries
The other most common communication protocol is APIs implemented within libraries. For instance, NuGet packages are published and distributed libraries that allow developers to leverage a localized core business logic or communicate with an external resource to achieve a specific goal.
#### 3.1.0.1.2 Other Types
-There are several other types of communication protocols. Some are older, and others are about to present themselves in the software industry. These types are like SOAP with manifestations like WCF, gRPC, GraphQL, and several other protocols.
+There are several other types of communication protocols. Some are older, and others are about to emerge in the software industry. These types are like SOAP with manifestations like WCF, gRPC, GraphQL, and several others.
-We will mainly focus on RESTful APIs as a more common communication protocol with a brief touch on Libraries. And as we evolve and learn further, so will our Standard, which will include more and more different communication protocols and develop in terms of patterns, as well.
+We will mainly focus on RESTful APIs as a more common communication protocol, with a brief touch on Libraries. As we evolve and learn further, so will our Standard, which will include more and more different communication protocols and develop in terms of patterns.
-Let's get started with RESTful APIs as a communication protocol and dive deeper into the different aspects of that exposer component.
\ No newline at end of file
+Let's start with RESTful APIs as a communication protocol and then explore the different aspects of the exposer component.
\ No newline at end of file
From a54db84c57a0e5d31061d926b37460d53fe36d89 Mon Sep 17 00:00:00 2001
From: Greg Hays
Date: Thu, 18 Apr 2024 12:44:28 -0700
Subject: [PATCH 15/18] CODE RUB: RESTFul APIs 3.1.1 Grammer Check
---
.../3.1.1 RESTful APIs/3.1.1 RESTful APIs.md | 101 +++++++++---------
1 file changed, 49 insertions(+), 52 deletions(-)
diff --git a/3. Exposers/3.1 Communication Protocols/3.1.1 RESTful APIs/3.1.1 RESTful APIs.md b/3. Exposers/3.1 Communication Protocols/3.1.1 RESTful APIs/3.1.1 RESTful APIs.md
index 6e53bb5..0d70045 100644
--- a/3. Exposers/3.1 Communication Protocols/3.1.1 RESTful APIs/3.1.1 RESTful APIs.md
+++ b/3. Exposers/3.1 Communication Protocols/3.1.1 RESTful APIs/3.1.1 RESTful APIs.md
@@ -1,10 +1,10 @@
# 3.1.1 RESTful APIs
## 3.1.1.0 Introduction
-RESTful API controllers are a liaison between the core business logic layer and the outside world. They sit on the other side of the core business realm of any application. In a way, API Controllers are just like Brokers. They ensure a successful integration between our core logic and the rest of the world.
+RESTful API controllers are liaisons between the core business logic layer and the outside world. They sit on the other side of any application's core business realm. In a way, API Controllers are just like Brokers. They ensure a successful integration between our core logic and the rest of the world.
## 3.1.1.1 On the Map
-Controllers sit at the edge of any system. Regardless whether this system is a monolithic platform or simple microservice. API controllers today even apply to smaller lambdas or cloud functions. They play the role of a trigger to access these resources in any system through REST.
+Controllers sit at the edge of any system, regardless of whether this system is a monolithic platform or a simple microservice. API controllers today even apply to smaller lambdas or cloud functions. They act as a trigger to access these resources in any system through REST.
@@ -12,15 +12,15 @@ Controllers sit at the edge of any system. Regardless whether this system is a m
-The consumer side of controllers can vary. In production systems these consumers can be other services requiring data from a particular API endpoint. They can be libraries built as wrappers around the controller APIs to provide a local resource with external data. But consumers can also be just engineers testing endpoints, validating their behaviors through swagger documents.
+The consumer side of controllers can vary. In production systems, these consumers can be other services that require data from a particular API endpoint. They can be libraries built as wrappers around the controller APIs to provide a local resource with external data. However, consumers can also be engineers testing endpoints and validating their behaviors through swagger documents.
## 3.1.1.2 Characteristics
-There are several rules and principles that govern the implementation of RESTful API endpoints. Let's discuss those here.
+Several rules and principles govern the implementation of RESTful API endpoints. Let's discuss those here.
### 3.1.1.2.0 Language
-Controllers speak a different language when it comes to implementing their methods as compared to services and brokers. For instance, if a broker that interfaces with a storage uses a language such as `InsertStudentAsync`, and its corresponding service implementation uses something like `AddStudentAsync` the controller equivalent will be using RESTful language such as `PostStudentAsync`.
+Controllers speak a different language when it comes to implementing their methods than services and brokers. For instance, if a broker that interfaces with a storage uses a language such as `InsertStudentAsync` and its corresponding service implementation uses something like `AddStudentAsync`, the controller equivalent will use a RESTful language such as `PostStudentAsync`.
-There are only a handful of terminologies a controller would use to express a certain operation. Let's draw the map here for clarity:
+A controller would use only a handful of terminologies to express a particular operation. Let's draw the map here for clarity:
| Controllers | Services | Brokers |
|------------------------ |--------------------------- | ------------------------------------------|
@@ -29,15 +29,15 @@ There are only a handful of terminologies a controller would use to express a ce
| Put | Modify | Update |
| Delete | Remove | Delete |
-The language controllers speak is called Http Verbs. Their range are wider than the aforementioned CRUD operations. For instance, there is PATCH which allows API consumers to update only portions of a particular document. PATCH is rarely used today from my experience in productionized applications. But I may specialize a special section for them at some point in time in future versions of The Standard.
+The language controllers speak is called HTTP Verbs. Their range is wider than the aforementioned CRUD operations. For instance, there is PATCH, which allows API consumers to update only portions of a particular document. From my experience in productionized applications, PATCH is rarely used today. However, I may specialize in them in a special section in future versions of The Standard.
#### 3.1.1.2.0.0 Beyond CRUD Routines
-But as we mentioned before, controller can interface with more than just a foundation service. They can interface with higher-order business logic function. For instance, a processing service may offer an `Upsert` routine. In which case a typical Http Verb wouldn't be able to satisfy a combinational routine such as an `Upsert`. In which case we resolve to the intial state of `Post` assuming the resource doesn't exist.
+As we mentioned before, a controller can interface with more than just a foundation service. They can interface with higher-order business logic functions. For instance, a processing service may offer an `Upsert` routine. In this case, a typical Http Verb wouldn't be able to satisfy a combinational routine such as an `Upsert`. In this case, we resolve to the initial state of `Post`, assuming the resource doesn't exist.
-It may become useful to notify our consumers if we decided to modify instead of add which operation we decided to go with. But that's a case by case implementation and more often than ever, consumers don't really care to learn that piece of information. The same idea applies to other languages non-foundation services may use. Such as `Process` or `Calculate` or any other business-specific language higher or hyper advanced order services may choose.
+It is helpful to notify our consumers if we modify instead of adding which operation we choose. But that's a case-by-case implementation, and more often than ever, consumers don't really care to learn that piece of information. The same idea applies to other languages non-foundation services may use, such as `Process` or `Calculate` or any other business-specific language higher or hyper-advanced order services may choose.
#### 3.1.1.2.0.1 Similar Verbs
-Sometimes, especially with basic CRUD operations, you will need the same Http Verb to describe two different routines. For instnace, integrating with both `RetrieveById` and `RetrieveAll` both resolve to a `Get` operation on the RESTful realm. In which case each function will have a different name, while maintainig the same verb as follows:
+Sometimes, especially with basic CRUD operations, you will need the same Http Verb to describe two different routines. For instance, integrating with both `RetrieveById` and `RetrieveAll` resolves to a `Get` operation on the RESTful realm. In this case, each function will have a different name while maintaining the same verb as follows:
```csharp
[HttpGet]
@@ -53,13 +53,13 @@ public async ValueTask> GetStudentByIdAsync(Guid studentId
}
```
-As you can see above, the differentiator here is both the function name `GetAllStudents` versus `GetStudentByIdAsync` but also the `Route` at the same time. We will discuss routes shortly, but the main aspect here is the ability to implement multiple routines with different names even if they resolve to the same Http Verb.
+As you can see above, the differentiator here is simultaneously the function name `GetAllStudents` versus `GetStudentByIdAsync` and the `Route`. We will discuss routes shortly, but the central aspect is the ability to implement multiple routines with different names, even if they resolve to the same Http Verb.
#### 3.1.1.2.0.2 Routes Conventions
-RESTful API controllers are accessible through routes. a route is simply a url that is used combined with an Http Verb so the system knows which routine it needs to call to match that route. For instance, if I need to retrieve a student with Id `123` then my api route would be as follows: `api/students/123`. And if I want to retrieve all the students in some system, I could just call `api/students` with `GET` verb.
+RESTful API controllers are accessible through routes. A route is simply a URL combined with an Http verb, so the system knows which routine it needs to call to match that route. For instance, if I need to retrieve a student with id `123`, my API route would be `api/students/123`. And if I want to retrieve all the students in some system, I could just call `api/students` with the `GET` verb.
##### 3.1.1.2.0.2.0 Controller Routes
-The controller class in a simple ASP.NET application can be simply setup at the top of the controller class declaration with a decoration as follows:
+The controller class in a simple ASP.NET application can be set at the top of the controller class declaration with a decoration as follows:
```csharp
[ApiController]
@@ -70,10 +70,10 @@ public class StudentsController
}
```
-The route there is a template that defines the endpoint to start with `api` and trailed by omitting the term "Controller" from the class name. So `StudentsController` would endup being `api/students`. It's important that all controllers should have a plural version of the contract they are serving. Unlike services where we say `StudentService` controllers would be the plural version with `StudentsController`.
+The route there is a template that defines the endpoint to start with `api` and trail by omitting the term "Controller" from the class name. So `StudentsController` would end up being `api/students`. All controllers must have a plural version of the contract they are serving. Unlike services where we say `StudentService`, controllers would be the plural version with `StudentsController`.
##### 3.1.1.2.0.2.1 Routine Routes
-The same idea applies to methods within the controller class. As we say in the code snippet above, we decorated `GetStudentByIdAsync` have had an `HttpGet` decoration with a particular route identified to append to the existing controller overall route. For instance if the controller route is `api/students`, a routine with `HttpGet("{studentId})` would result in a route that looks like this: `api/students/{studentId}`.
+The same idea applies to methods within the controller class. As the code snippet above says, we decorated `GetStudentByIdAsync` with an `HttpGet` decoration with a particular route identified to append to the existing controller overall route. For instance if the controller route is `api/students`, a routine with `HttpGet("{studentId})` would result in a route that looks like this: `api/students/{studentId}`.
The `studentId` then would be mapped in as an input parameter variable that *must* match the variable defined in the route as follows:
@@ -85,7 +85,7 @@ public async ValueTask> GetStudentByIdAsync(Guid studentId
}
```
-But sometimes these routes are not just url parameters. Sometimes they contain a request within them. For instance, let's say we want to post a library card against a particular student record. Our endpoint would look something like this: `api/students/{studentId}/librarycards` with a `POST` verb. In this case we have to distinguish between these two input parameters with proper naming as follows:
+But sometimes, these routes are not just URL parameters. Sometimes, they contain a request within them. For instance, we want to post a library card against a particular student record. Our endpoint would look like `api/students/{studentId}/librarycards` with a `POST` verb. In this case, we have to distinguish between these two input parameters with proper naming as follows:
```csharp
[HttpPost("{studentId}/librarycards")]
@@ -96,12 +96,12 @@ public async ValueTask> PostLibraryCardAsync(Guid stud
```
##### 3.1.1.2.0.2.2 Plural Singular Plural
-When defining routes in a RESTful API, it is important to follow the global naming conventions for these routes. The general rule is to access a collection of resources, then target a particular entity, then again acess a collection of resources within that entity and so on and so forth. For instance, in the library card example above `api/students/{studentId}/librarycards/{librarycardId}` you can see we started by accessing all students, then targetted a student with a particular id, then we wanted to access all library cards attached to that student then target a very particular card by referencing its id.
+When defining routes in a RESTful API, it is important to follow the global naming conventions for these routes. The general rule is to access a collection of resources, then target a particular entity, then again access a collection of resources within that entity, and so on and so forth. For instance, in the library card example above, `api/students/{studentId}/librarycards/{librarycardId}`, we started by accessing all students and then targeting a student with a particular ID. We wanted to access all library cards attached to that student and then target a very particular card by referencing its ID.
-That convention works perfectly in one-to-many relationships. But what about one-to-one relationships? Let's assume a student may have one and only one library card at all times. In which case our route would still look something like this: `api/students/{studentId}/librarycards` with a `POST` verb, and an error would occur as `CONFLICT` if a card is already in place regardless whether the Ids match or not.
+That convention works perfectly in one-to-many relationships. But what about one-to-one relationships? Let's assume a student may have one and only one library card at all times. In which case, our route would still look something like this: `api/students/{studentId}/librarycards` with a `POST` verb, and an error would occur as `CONFLICT` if a card is already in place regardless of whether the Ids match or not.
##### 3.1.1.2.0.2.2 Query Parameters & OData
-But the route I recommend is the flat-model route. Where every resource lives on it's own with it's own unique routes. In our case here pulling a library card for a particular student would be as follows: `api/librarycards?studentId={studentId}` or simply use a slightly advanced global technology such as OData where the query would just be `api/librarycards?$filter=studentId eq '123'`.
+But the route I recommend is the flat-model route. Where every resource lives on its own with its unique routes, in our case here, pulling a library card for a particular student would be as follows: `api/librarycards?studentId={studentId}` or use slightly advanced global technology such as OData where the query would be `api/librarycards?$filter=studentId eq '123'`.
Here's an example of implementing basic query parameters:
@@ -124,26 +124,26 @@ public async ValueTask> GetAllLibraryCards()
}
```
-The same idea applies to `POST` for a model. instead of posting towards: `api/students/{studentId}/librarycards` - we can leverage the contract itself to post against `api/librarycards` with a model that contains the student id within. This flat-route idea can simplify the implementation and aligns perfectly with the overall theme of The Standard. Keeping things simple.
+The same idea applies to `POST` for a model. Instead of posting towards: `api/students/{studentId}/librarycards` - we can leverage the contract itself to post against `api/librarycards` with a model that contains the student id within. This flat-route idea can simplify the implementation and aligns perfectly with the overall theme of The Standard. We are keeping things simple.
### 3.1.1.2.1 Codes & Responses
-Responses from an API controller must be mapped towards codes and responses. For instance, if we are trying to add a new student to a schooling system. We are going to `POST` student and in retrun we receive the same body we submitted with a status code `201` which means the resoruce has been `Created`.
+Responses from an API controller must be mapped towards codes and responses. For instance, if we are trying to add a new student to a schooling system. We are going to `POST` a student, and in return, we receive the same body we submitted with a status code `201`, which means the resource has been `Created`.
-There are three main categories where responses can fall into. The first is the success category. Where both the user and the server have done their part and the request has succeeded. The second category is the User Error Codes, where the user request has an issue of any type. In which case a `4xx` code will be returned with detailed error message to help users fix their requests for perform a successful operation. The third case is the System Error Codes, where the system has run into an issue of any type internal or external and it needs to communicate a `5xx` code to indicate to the user that something internally have gone wrong with the system and they need to contact support.
+There are three main categories into which responses can fall. The first is the success category. Both the user and the server have done their part, and the request has been successful. The second category is the User Error Codes, where the user request has an issue of any type. In this case, a `4xx` code will be returned with a detailed error message to help users fix their requests to perform a successful operation. The third case is the System Error Codes, where the system has run into an issue of any type, internal or external, and it needs to communicate a `5xx` code to indicate to the user that something internally has gone wrong with the system and they need to contact support.
-Let's talk about those codes and their scenarios in details here.
+Let's talk about those codes and their scenarios in detail here.
#### 3.1.1.2.1.0 Success Codes (2xx)
-Success codes either indicates a resource has been created, updated, deleted or retreived. And some cases it indicates that a request has been submitted successfully in an eventual-consistency manner that may or may not succeed in the future. Here's the details for each:
+Success codes indicate a resource has been created, updated, deleted, or retrieved. In some cases, it indicates that a request has been submitted successfully in an eventual consistent manner that may or may not succeed. Here are the details for each:
| Code | Method | Details |
|---------------------------------------|-----------------------------------|-------------------------------------------------------------------|
-| 200 | Ok | Used for successful GET, PUT and DELETE operations. |
+| 200 | Ok | Used for successful GET, PUT, and DELETE operations. |
| 201 | Created | Used for successful POST operations |
| 202 | Accepted | Used for request that was delegated but may or may not succeed |
-Here's some examples for each:
+Here are some examples for each:
In a retrieve non-post scenario, it's more befitting to return an `Ok` status code as follows:
@@ -171,7 +171,7 @@ public async ValueTask> PostStudentAsync(Student student)
}
```
-In eventual consistency cases, where a resource posted isn't really persisted yet, we enqueue the request and return an `Accepted` status to indicate a process will start:
+In eventual consistency cases, where a resource posted is not persisted yet, we enqueue the request and return an `Accepted` status to indicate a process will start:
```csharp
[HttpPost)]
public async ValueTask> PostStudentAsync(Student student)
@@ -183,11 +183,11 @@ public async ValueTask> PostStudentAsync(Student student)
}
```
-The Standard rule for eventual consistency scenarios is to ensure the submitter has a token of some type so requestors can inquire about the status of their request with a different API call. We will discuss these patterns in a different book called The Standard Architecture.
+The Standard rule for eventual consistency scenarios is to ensure the submitter has a token of some type so requestors can inquire about the status of their request using a different API call. We will discuss these patterns in a different book called The Standard Architecture.
#### 3.1.1.2.1.1 User Error Codes (4xx)
-This is the second category of API responses. Where a user request has an issue in it and the system is required to help the user understand why their request was not successful. For instance, assume a client is submitting a new student to a schooling system. If the student Id is invalid a `400` or `Bad Request` code should be returned with a problem detail that explains what exactly is the reason for the failure of the request.
+This is the second category of API responses. In this category, a user request has an issue, and the system is required to help the user understand why their request was not successful. For instance, assume a client is submitting a new student to a schooling system. If the student ID is invalid, a `400` or `Bad Request` code should be returned with a problem detail that explains what exactly caused the request to fail.
Controllers are responsible for mapping the core layer categorical exceptions into proper status codes. Here's an example:
@@ -206,9 +206,9 @@ public async ValueTask> GetStudentByIdAsync(Guid studentId
}
```
-So as shown in this code snippet, we caught a categorical validation exception and mapped it into a `400` error code which is `BadRequest`. The access to inner exception here is for the purpose of extracting a problem detail out of the `Data` property on the inner exception which contains all the dictionary values of the error report.
+So, as shown in this code snippet, we caught a categorical validation exception and mapped it into a `400` error code, which is `BadRequest`. Access to the inner exception here is for the purpose of extracting a problem detail out of the `Data` property on the inner exception, which contains all the dictionary values of the error report.
-But sometimes controllers have to dig deeper. Catching a particular local exception not just the categorical. For instance, say we want to handle `NotFoundStudentException` with an error code `404` or `NotFound`. Here's how we would accomplish that:
+But sometimes, controllers have to dig deeper. Catching a particular local exception, not just the categorical. For instance, say we want to handle `NotFoundStudentException` with an error code `404` or `NotFound`. Here's how we would accomplish that:
```csharp
[HttpGet("{studentId}")]
@@ -226,7 +226,7 @@ public async ValueTask> GetStudentByIdAsync(Guid studentId
}
```
-In the code snippet above, we had to examine the inner exception type to validate the localized exception from within. This is the advantage of the unwrapping and wrapping process we discussed in section 2.3.3.0.2 of The Standard. Controller may examine multiple types within the same block as well as follows:
+In the code snippet above, we had to examine the inner exception type to validate the localized exception from within. This is the advantage of the unwrapping and wrapping process discussed in section 2.3.3.0.2 of The Standard. The controller may examine multiple types within the same block as well as follows:
```csharp
...
@@ -251,12 +251,12 @@ With that in mind, let's detail the most common mappings from exceptions to code
| 423 | Locked | LockedException |
| 424 | FailedDependency | InvalidReferenceException |
-There are more `4xx` status codes out there. But As of this very moment they can either be automatically generated by the web framework like in ASP.NET or there are no useful scenarios for them yet. For instance, a `401` or `Unauthorized` error can be automatically generated if the controller endpoint is decorated with authorization requirement.
+There are more `4xx` status codes out there. But they can either be automatically generated by the web framework, like in ASP.NET, or there are no useful scenarios for them yet. For instance, a `401` or `Unauthorized` error can be automatically generated if the controller endpoint is decorated with an authorization requirement.
#### 3.1.1.2.1.2 System Error Codes (5xx)
-System error codes are the third and last possible type of codes that may occur or be returned from an API endpoint. Their main responsibility is to indicate in general that the consumer of the API endpoint is at no fault. Something bad happened in the system, and the engineering team is required to get involved to resolve the issue. That's why we log our exceptions with a severity level at the core business logic layer so we know how urgent the matter may be.
+System error codes are the third and last possible type of code that may occur or be returned from an API endpoint. Their main responsibility is to indicate that the API endpoint consumer is generally at no fault. Something terrible happened in the system, and the engineering team must get involved to resolve the issue. That's why we log our exceptions with a severity level at the core business logic layer so we know how urgent the matter may be.
-The most common http code that can be communicated on a server-side issue is the `500` or `InternalServerError` code. Let's take a look at a code snippet that deals with this situation:
+The most common Http code that can be communicated on a server-side issue is the `500` or `InternalServerError` code. Let's take a look at a code snippet that deals with this situation:
```csharp
[HttpGet("{studentId}")]
@@ -274,23 +274,21 @@ public async ValueTask> GetStudentByIdAsync(Guid studentId
}
```
-In the above snippet we completely ignored the inner exception and mainly focused on the categorical exception for security reasons. Mainly to not allow internal server information to be exposed in an API response other than something as simple as `Dependency error occurred, contact support.` Since the consumer of the API is required to perform no action whatsoever other than creating a ticket for the support team.
+In the above snippet, we ignored the inner exception and mainly focused on the categorical exception for security reasons. Primarily to not allow internal server information to be exposed in an API response other than something as simple as `Dependency error occurred, contact support.` Since the API consumer is required to perform no action whatsoever other than creating a ticket for the support team, Ideally, these issues should be caught out of Acceptance Tests, which we will discuss shortly in this chapter. But there are times where there's a server blip that may cause a memory leakage of some sort or any other internal infrastrucrual issues that won't be caught by end-to-end testing in any way.
-Ideally, these issues should be caught out of Acceptance Tests which we will discuss shortly in this chapter. But there are times where there's a server blip that may cause a memory leakage of some sort or any other internal infrastrucrual issues that won't be caught by end-to-end testing in any way.
-
-In terms of types of exceptions that may be handled, it's a little smaller when it comes server error here's the details:
+The types of exceptions that may be handled are smaller regarding server errors. Here are the details:
| Code | Method | Exception |
|---------------------------------------|-----------------------------------|-------------------------------------------------------------------|
| 500 | InternalServerError | DependencyException or ServiceException |
| 507 | NotFound | InsufficientStorageException (Internal Only) |
-There's also an interesting case where two teams agree on a certain swagger document, and the back-end API development team decides to build corresponding API endpoints with methods that are not yet implemented to communicate to the other team that the work hasn't started yet. In which case using error code `501` is sufficient which is just a code for `NotImplemented`.
+There's also an interesting case where two teams agree on a specific swagger document, and the back-end API development team decides to build corresponding API endpoints with methods yet to be implemented to communicate to the other team that the work has yet to start. In this case, the error code `501` is sufficient, just a code for `NotImplemented`.
-It is also important to mention that the native `500` error code can be communicated in ASP.NET applications through `Problem` method. We are relying on a library `RESTFulSense` to provide more codes than the native implementation can offer, but more importantly provide a problem detail serialization option and deserialization option on the client side.
+It is also important to mention that the native `500` error code can be communicated in ASP.NET applications through the `Problem` method. We are relying on a library, `RESTFulSense`, to provide more codes than the native implementation can offer, but more importantly, to provide a problem detail serialization option and deserialization option on the client side.
#### 3.1.1.2.1.3 All Codes
-Other than the ones mentioned in previous sections, and for documentation purposes, here's the all of the `4xx` and `5xx` codes an API could communicate according to the latest standardized API guidelines:
+Other than the ones mentioned in previous sections, and for documentation purposes, here are all of the `4xx` and `5xx` codes an API could communicate according to the latest standardized API guidelines:
|Status|Code|
|--- |--- |
@@ -337,7 +335,7 @@ Other than the ones mentioned in previous sections, and for documentation purpos
We will explore incorporating some of these codes in future revisions of The Standard as needed.
### 3.1.1.2.2 Single Dependency
-Exposer components can have one and only one dependency. This dependency must be a Service component. it cannot be a Broker or any other native dependency that Brokers may use to pull configurations or any other type of dependencies.
+Exposer components can have one and only one dependency. This dependency must be a Service component. It cannot be a Broker or any other native dependency that Brokers may use to pull configurations or any other type of dependencies.
When implementing a controller, the constructor can be implemented as follows:
@@ -357,10 +355,10 @@ public class StudentsController : RESTFulController
```
### 3.1.1.2.3 Single Contract
-This charactristic comes out of the box with the single dependency rule. If Services can only serve and receive one contract then the same rule will apply to controllers. They can return a contract, a list of objects with the same contract or portion of the contract when passing in Ids or queries.
+This characteristic comes out of the box with the single dependency rule. If Services can only serve and receive one contract, then the same rule will apply to controllers. When passing in IDs or queries, they can return a contract or a list of objects with the same contract or portion of the contract.
## 3.1.1.3 Organization
-Controllers should be located under `Controllers` folder and belong within a `Controllers` namespace. Controller do not need to have their own folders or namespaces as they perform a simple exposure task and that's all.
+Controllers should be located under the `Controllers` folder and belong within a `Controllers` namespace. However, controllers do not need to have their own folders or namespaces as they perform a simple exposure task.
Here's an example of a controller namespace:
@@ -377,7 +375,7 @@ namespace GitFyle.Core.Api.Controllers
```
## 3.1.1.4 Home Controller
-Every system should implement an API endpoint that we call `HomeController`. The controller only responsibility is to return a simple message to indicate that the API is still alive. Here's an example:
+Every system should implement an API endpoint that we call `HomeController`. The controller's only responsibility is to return a simple message to indicate that the API is still alive. Here's an example:
```csharp
using Microsoft.AspNetCore.Mvc;
@@ -395,10 +393,10 @@ namespace OtripleS.Web.Api.Controllers
}
```
-Home controllers are not required to have any security on them. They open a gate for heartbeat tests to ensure the system as an entity is running without checking any external dependencies. This practice is very important to help engineers know when the system is down and quickly act on it.
+Home controllers are not required to have any security. They open a gate for heartbeat tests to ensure the system as an entity is running without checking any external dependencies. This practice is very important to help engineers know when the system is down and quickly act on it.
## 3.1.1.5 Tests
-Controllers can be potentially unit tested to verify the mapping of exceptions to error codes are in place. But that's not a pattern I have been following myself so far. However, what is more important is Acceptance tests. Which verify all the components of the system are fully and successfully integrated with one another.
+Controllers can be potentially unit tested to verify the mapping of exceptions to error codes is in place. But that's not a pattern I have been following myself so far. What is more important is Acceptance tests, which verify that all the system components are fully and successfully integrated.
Here's an example of an acceptance test:
@@ -423,12 +421,11 @@ public async Task ShouldPostStudentAsync()
}
```
-Acceptance tests are required to cover every available endpoint on a controller. They are also responsible for cleaning up any test data after the test is completed. It is also important to mention that resources that are not owned by the microservice like database, must be emulated with applications such as `WireMock` and many others.
+Acceptance tests are required to cover every available endpoint on a controller and are responsible for cleaning up any test data after the test is completed. It is also important to mention that resources not owned by the microservice, like the database, must be emulated with applications such as `WireMock` and many others.
-Acceptance tests are also implemented after the fact unlike unit tests. An endpoint has to be fully integrated and functional before a test is written to ensure the success of implementation is in place.
+Acceptance tests are also implemented after the fact, unlike unit tests. An endpoint has to be fully integrated and functional before a test is written to ensure implementation success is in place.
[*] [Acceptance Tests (Part 1)](https://www.youtube.com/watch?v=WWN-9ahbdIU)
-[*] [Acceptance Tests (Part 2)](https://www.youtube.com/watch?v=ANqj9pldfso)
-
+[*] [Acceptance Tests (Part 2)](https://www.youtube.com/watch?v=ANqj9pldfso)
\ No newline at end of file
From 1677e6af74d976ae1b82e20c75a26e125d6e6b03 Mon Sep 17 00:00:00 2001
From: Greg Hays
Date: Thu, 18 Apr 2024 13:35:31 -0700
Subject: [PATCH 16/18] CODE RUB: User Interfaces 3.2 Grammer Check
---
.../3.2 User Interfaces.md | 62 +++++++++----------
1 file changed, 31 insertions(+), 31 deletions(-)
diff --git a/3. Exposers/3.2 User Interfaces/3.2 User Interfaces.md b/3. Exposers/3.2 User Interfaces/3.2 User Interfaces.md
index 4a2d616..3c869a9 100644
--- a/3. Exposers/3.2 User Interfaces/3.2 User Interfaces.md
+++ b/3. Exposers/3.2 User Interfaces/3.2 User Interfaces.md
@@ -1,34 +1,34 @@
# 3.2 User Interfaces
## 3.2.0 Introduction
-User Interfaces or UI are a type of exposer component mainly targeting humans for interaction with the core business layer, unlike Communication protocols primarily used in distributed systems. UIs are forever evolving regarding technologies and methodologies with which humans can interact with any given system. This goes from web applications to virtual/augmented realities�voice-activated systems and, more recently, brain-waves-activated systems.
+User Interfaces or UI are a type of exposer component mainly targeting humans for interaction with the core business layer, unlike Communication protocols primarily used in distributed systems. UIs are forever evolving in terms of technologies and methodologies with which humans can interact with any given system. This goes from web applications to virtual/augmented realities�voice-activated systems and, more recently, brain-waves-activated systems.
-Developing user interfaces can be much more challenging in terms of experiences. There isn't a global standard today for what an intuitive understanding is. It heavily relies on culture, commonalities, and many other forever-changing variables today. This Standard will outline the principles and rules for building modular, maintainable, and pluggable UI components. But there will be a different Standard for outlining user experiences, human interactions, and the theory of intuitiveness.
+Developing user interfaces can be much more challenging in terms of experiences. Today, there needs to be a global standard for intuitive understanding. It heavily relies on culture, commonalities, and many other forever-changing variables. This Standard will outline the principles and rules for building modular, maintainable, and pluggable UI components. However, there will be a different Standard for outlining user experiences, human interactions, and the theory of intuitiveness.
This Standard also briefly highlights specific guidelines regarding rendering choices, server, client, or hybrid, as is the case with the tri-nature of everything. Let's dive deeper into the principles and rules that govern building UI components.
### 3.2.0.0 Principles & Rules
-Just like every other exposer component type, UIs must be able to map processes, results, and errors to their consumers. Some of these UI components will require a test-driven approach. Some others are more like Brokers, just wrappers around 3rd party or native UI components. Let's talk about these principles here.
+Like every other exposer component type, UIs must be able to map processes, results, and errors to their consumers. Some of these UI components will require a test-driven approach. Some others are more like Brokers, just wrappers around third-party or native UI components. Let's discuss these principles here.
#### 3.2.0.0.0 Progress (Loading)
The most important principle in building UI components is to develop intelligence to keep the user engaged while a particular process is running, such as a simple spinner or a progress bar to keep users informed at all times of what's happening behind the scenes.
-It's a violation of The Standard to indicate progress of any type if nothing really is going on in the background. It falls into the practice of wasting end-users time and lying to them about the actual status of the system. But assuming the system is busy working on a particular request, three levels of communication can happen on an exposer component to communicate progress. Let's discuss those in detail:
+It violates The Standard to indicate progress if nothing is happening in the background. It falls into the practice of wasting end-users' time and lying to them about the actual status of the system. However, assuming the system is busy working on a particular request, three levels of communication can happen on an exposer component to communicate progress. Let's discuss those in detail:
#### 3.2.0.0.0.0 Basic Progress
-The basic progress approach is where you present a status with a label like "Waiting ..." or a spinner with no further indication, which is the bare minimum of progress indication. No UI should just freeze or stop their hanging while requests are being processed in the background, assuming an eventual-consistency pattern is not attainable for the current business need.
+The basic progress approach is where you present status with a label like "Waiting ..." or a spinner with no further indication, which is the bare minimum of progress indication. No UI should freeze or stop their hanging while requests are being processed in the background, assuming an eventual consistency pattern is not attainable for the current business need.
-Some web applications show a forever progress bar at the very top of the page to indicate that progress is happening. From an experience perspective and depending on the visibility level of these progress bars, it may or may not be easy to miss by end users. Some other engineering teams have chosen to play a simple animation to keep users engaged with visual progress without any indication of the details of that progress.
+Some web applications show a forever progress bar at the top of the page to indicate progress. From an experience perspective, depending on the visibility level of these progress bars, they may be challenging for end users to miss. Some other engineering teams have chosen to play a simple animation to keep users engaged with visual progress without any indication of the details of that progress.
#### 3.2.0.0.0.1 Remaining Progress
-A bit above the bare minimum, there is an indication of remaining time or progress to be completed before the request is processed. An indication such as "40% remaining" or something more specific like "5 minutes remaining ..." to help end-users understand or guestimate how long a time or effort is left; or, there are patterns where engineers would indicate how many tasks remain without showing what these tasks are.
+Above the bare minimum, there is an indication of remaining time or progress to be completed before the request is processed. An indication such as "40% remaining" or something more specific like "5 minutes remaining ..." to help end-users understand or guestimate how long a time or effort is left; or, there are patterns where engineers would indicate how many tasks remain without showing what these tasks are.
-Sometimes a remaining progress update is as detailed as UI engineers can get. For instance, if you are downloading a file from the internet. You can't be more precise than "saying x percent of the bits remaining to be downloaded" with no further details. Some game developers also choose to visualize the internet speeds and available disk space to keep the end-user engaged in the system. And these are all acceptable patterns in this Standard.
+Sometimes, a remaining progress update is as detailed as UI engineers can get. For instance, if you are downloading a file from the internet, you can't be more precise than "saying x percent of the bits remaining to be downloaded" with no further details. Some game developers also choose to visualize the internet speeds and available disk space to keep the end-user engaged in the system. These are all acceptable patterns in this Standard.
#### 3.2.0.0.0.2 Detailed Progress
The highest level of reporting progress is the detailed progress type, where the UI component is fully transparent with its consumers by reporting every step of progress. This type of progress is more common in scientific applications. Engineers in debugging mode may enable a feature where all the underlying activity in the system is visualized through the UI.
-This type helps end-users understand what is happening behind the scenes and allows them to communicate better details to support engineers to help them fix an issue if the process fails. But this process isn't always preferred in terms of experience, considering that some details need to be hidden for security reasons.
+This type helps end-users understand what is happening behind the scenes and allows them to communicate better details to support engineers to help them fix an issue if the process fails. However, this process is only sometimes preferred in terms of experience, considering that some details need to be hidden for security reasons.
In summary, selecting the correct type of progress indicator in a UI depends on the business flow, the type of users interacting with the system, and several other variables we will discuss in The Experience Standard.
@@ -43,33 +43,33 @@ Simple results indications, especially with submitted requests rather than retri
##### 3.2.0.0.1.1 Partial Details
An overview of the nature of the request and where it stands in terms of status and timestamps: The other type of results or success indication is to present end-users with partial details. Partial details are usually helpful when it comes to providing the end-user with a "ticket number" to help end-users follow up on their requests later to inquire about the status. This pattern is typical in e-commerce applications where every purchase request may be returned with a tracking number to help customers and customer support personnel assess the requests.
-Detailed results can also be very helpful for the visualization of the success process, especially with requests containing multiple parts. Larger requests, such as an application to join a university or the like, may include attachments, multiple pages of details, and confidential information such as payment details or social security numbers.
+Detailed results can also be beneficial for visualizing the success process, especially with requests containing multiple parts. Larger requests, such as an application to join a university or the like, may include attachments, multiple pages of details, and confidential information such as payment details or social security numbers.
##### 3.2.0.0.1.2 Full Details
-In some cases, one may prefer to report full details about the submitted request, especially with smaller requests where it may help end-users review their requests. Some engineers may display full details as an extra confirmation step before submitting the request. But full details can also include more than just the request details. It could include a status update from the server a long with an assigned point of contact or an officer from maintenance and support teams.
+Sometimes, one may prefer to report full details about the submitted request, especially with smaller requests that may help end-users review their requests. Some engineers may display full details as an extra confirmation step before submitting the request. However, full details can also include more than just the requested details. It could include a status update from the server a long with an assigned point of contact or an officer from maintenance and support teams.
It's a violation of The Standard to redirect end-users at the submittal of their requests with no indication of what happened.
#### 3.2.0.0.2 Error Reports
-Error reports primary responsibility is to inform end-users of what happened, why it happened, and the next course of action. Some error reports don't indicate any course of action, which can be a poor experience depending on the business flow. But the bare minimum in error reporting is the basic indication of the error with essential details. Let's talk about those types here.
+Error reports primary responsibility is to inform end-users of what happened, why it happened, and the next course of action. Some error reports don't indicate any course of action, which can be a poor experience depending on the business flow. However, the bare minimum in error reporting is the basic indication of the error with essential details. Let's talk about those types here.
##### 3.2.0.0.2.0 Informational
-The bare minimum of error reports is the informational type. Indicating an error occurred and why it happened. Something like: "Request failed. Try again" or "Request failed, contract support". There are also informational errors that are time-based. Something like: "Our servers are currently experiencing a high volume of requests. Please try again later". These informational error reports are necessary to keep the end-user engaged with the system.
+The bare minimum of error reports is the informational type, which indicates an error occurred and why it happened, such as "Request failed. Try again" or "Request failed contact support." There are also informational errors that are time-based, such as "Our servers are currently experiencing a high volume of requests. Please try again later." These informational error reports are necessary to keep the end-user engaged with the system.
-Informational error reports are governed by the context and the type of users receiving them. In a scientific application the more details the better. For some other systems, it is important to shift the technical language of the errors to a more less technical language. For instance, we can't communicate: "Student Id cannot be null, empty or widespace". We should select a more readable language such as: "Please provide a valid student Id".
+Informational error reports are governed by the context and the type of users receiving them. In a scientific application, the more details, the better. For some other systems, it is important to shift the technical language of the errors to a less technical language. For instance, we can't communicate: "Student id cannot be null, empty, or whitespace". We should select a more readable language, such as "Please provide a valid student Id".
-##### 3.2.0.0.2.1 Referencial/Implicit Actions
-The second type of error report is the referential type. When an error occurs, it automatically informs the support team and returns a reference of a support ticket to end users so they can follow up. You may see this a lot when video games fail to start, or certain applications are unable to initialize. Referential error reports are the best for certain business flows since they take care of all the actions, send an email to the end user with the reference number, and follow up within a couple of days to report the status.
+##### 3.2.0.0.2.1 Referential/Implicit Actions
+The second type of error report is the referential type. When an error occurs, it automatically informs the support team and returns a reference of a support ticket to end users so they can follow up. You may see this often when video games fail to start or specific applications cannot initialize. Referential error reports are the best for particular business flows since they take care of all the actions, email the end user the reference number, and follow up within a couple of days to report the status.
The fewer actions a system requires users to take after a failure has occurred, the better. Since end-users have already accomplished their tasks in submitting requests, it becomes even more convenient if the original request is queued up, such as with high-volume enterprise systems, so end-users don't have to re-submit the same data.
##### 3.2.0.0.2.2 Actionable
-The second type of error report is the actionable report. Errors provide an additional action for the users to go further in their request. For instance, error reports can provide a button to try again or submit other details request back to the engineering and support teams.
+The second type of error report is the actionable report. Errors provide an additional action for the users to go further in their request. For instance, error reports can give a button to try again or submit other details requests back to the engineering and support teams.
-Some reports will provide a different route to accomplish the same task in more hybrid legacy and modernized applications. These actionable reports are more convenient than informational reports. However, they would still require their end-users to take more actions and keystrokes, leading to a certain level of inconvenience.
+Some reports will provide a different route to accomplish the same task in more hybrid legacy and modernized applications. These actionable reports are more convenient than informational reports. However, they would still require their end-users to take more actions and keystrokes, leading to some inconvenience.
#### 3.2.0.0.3 Single Dependency
-Any exposer component can only integrate with a single dependency at a time. For UI components, contract purity ensures that a UI is not given more data than it needs. This is where a new type of foundational-like service is implemented to enforce this pattern, and all other details, such as audit fields, timestamps, and such, are taken care of away from the UI component sight.
+Any exposer component can only integrate with a single dependency at a time. For UI components, contract purity ensures that a UI is not given more data than it needs. To enforce this pattern, a new type of foundational-like service is implemented, and all other details, such as audit fields, timestamps, and such, are taken care of away from the UI component's sight.
We will talk in detail about view services shortly as we progress talking about UI exposers.
@@ -82,37 +82,37 @@ Just like the data flow in any service. We have brokers -> Services -> Exposers.
-UI exposer components, as shown above, can be Bases, Components, or Containers. Each of these types has a specific responsibility to ensure the maximum maintainability and pluggability of the system according to The Standard. Let's discuss these three types here:
+UI exposer components, as shown above, can be Bases, Components, or Containers. Each of these types has a specific responsibility to ensure the maximum maintainability and pluggability of the system, according to The Standard. Let's discuss these three types here:
##### 3.2.0.0.4.0 Bases
Base or Base Components are just like Brokers in the data flow. They are simple thin wrappers around native or 3rd party components. Their primary responsibility is to abstract away the hard dependency on non-local components to allow the system's configuration to switch to any other external or native UI components with the least effort possible.
-Base components also make it easier to mock out any external or native components behavior and focus the effort on ensuring the local component performs the expected way. We will discuss in the following chapter base components for web applications in Blazor and other technologies.
+Base components also make it easier to mock out any external or native component's behavior and focus the effort on ensuring the local component performs as expected. In the following chapter, we will discuss base components for web applications in Blazor and other technologies.
##### 3.2.0.0.4.1 Components
-UI Components are a hybrid between a Service and a Controller in the data pipeline. In a way, components contain *some* business logic in the handling of interactions with certain base components. But they are also limited by integrating with one and only one view service. Components are test-driven. They require writing tests to ensure they behave as expected. But they also contain almost no iteration, selection, or sequencing data logic within them.
+UI Components are a hybrid between a Service and a Controller in the data pipeline. In a way, components contain *some* business logic in handling interactions with certain base components. However, they are also limited by integrating with one and only one view service. Components are test-driven. They require writing tests to ensure they behave as expected. However, they also contain almost no iteration, selection, or sequencing data logic.
The most important aspect of UI components is the intersection between the UI flow and the data flow. They are responsible for leveraging their data dependency (view services) and their base components to become easily pluggable into container components (like pages with routes in web applications).
##### 3.2.0.0.4.2 Containers
-Container components are orchestrators/aggregators of components. They are the actual route or the page end-users interact with. Containers cannot have any level of UI logic in them. They cannot leverage base components. And they may have any number of UI components as the business flow requires.
+Container components are orchestrators/aggregators of components. They are the actual route or page end-users interact with. Containers cannot have any level of UI logic in them, cannot leverage base components, and may have any number of UI components as the business flow requires.
-As it is the case with every category of components, containers cannot integrate with other containers. The rule applies across the board for every data or UI component.
+As is the case with every category of components, containers cannot integrate with other containers. The rule applies across the board for every data or UI component.
#### 3.2.0.0.5 UI Component Types
-UI components come in all different shapes and sizes. The hosting environment and the type of devices that serve these components play a significant role in determining the technologies and the capabilities a particular UI component may have. Let's talk about the different types of UI components in this section.
+UI components come in all different shapes and sizes. The hosting environment and the type of devices that serve these components play a significant role in determining the technologies and capabilities a particular UI component may have. In this section, we will discuss the different types of UI components.
##### 3.2.0.0.5.0 Web Applications
-The most popular type of UI application is the web application because of its ease of use. They require no installation of any kind. They have no dependency on the operating system running the system, or the type of devices users may be using. They can run on PCs, tablets, mobile phones, TVs, and watches that support web browsing.
+The most popular type of UI application is web application because of its ease of use. Web applications require no installation of any kind, and they are not dependent on the operating system running the system or the type of devices users may be using. They can run on PCs, tablets, mobile phones, TVs, and watches that support web browsing.
-Web frameworks have evolved greatly in the last few years due to their popularity, as mentioned earlier. Some frameworks allow engineers to write web applications in many programming languages. The web assembly evolution has also opened the door for engineers to develop even more scalable frameworks with their preferred technologies and languages.
+Web frameworks have evolved dramatically in the last few years due to their popularity, as mentioned earlier. Some frameworks allow engineers to write web applications in many programming languages. The evolution of web assembly has also opened the door for engineers to develop even more scalable frameworks with their preferred technologies and languages.
-Server-side applications and client-side applications. Web applications are developed in two different types in terms of rendering. We will discuss the advantages and disadvantages of each type in addition to the hybrid model in the next few chapters of The Standard.
+Server-side applications and client-side applications. In terms of rendering, web applications are developed in two different types. In the next few chapters of The Standard, we will discuss the advantages and disadvantages of each type in addition to the hybrid model.
##### 3.2.0.0.5.1 Mobile Applications
-The second most popular platform today to develop UIs is the mobile world. Developing mobile applications comes with challenges as they depend heavily on the operating system, the phone's size in terms of resolution, and the available native controls. Mobile applications are also always client-side apps. They are just like Desktop applications. They must be compiled, provisioned, and published to an app store so consumers can download, install, and leverage them in their daily activities.
+The second most popular platform today for developing UIs is the mobile world. Developing mobile applications comes with challenges as they depend heavily on the operating system, the phone's size in terms of resolution, and the available native controls. Mobile applications are also always client-side apps. They are just like Desktop applications. They must be compiled, provisioned, and published to an app store so consumers can download, install, and leverage them daily.
-The most significant advantage of mobile applications is that they allow offline interactions like mobile games, editing apps, and streaming services with offline capabilities. But building mobile applications with web frameworks is becoming increasingly popular as the web is a universal ecosystem that allows end-users to experience software the same way on their PCs, browsers, and mobile applications. This trend shall eventually enable engineers to develop systems for all ecosystems at the least cost possible.
+The most significant advantage of mobile applications is that they allow offline interactions, such as mobile games, editing apps, and streaming services with offline capabilities. However, building mobile applications with web frameworks is becoming increasingly popular, as the web is a universal ecosystem that allows end-users to experience software the same way on their PCs, browsers, and mobile applications. This trend will eventually enable engineers to develop systems for all ecosystems at the least possible cost.
##### 3.2.0.0.5.2 Other Types
-There are other types of UI components that we may not cover in our Standard. These types are console/terminal applications, desktop applications, video games, virtual/augmented reality software, wearable devices, and voice-activated systems. The world of Human-Machine-Interface HMI is evolving so rapidly in the age of the metaverse that we might need to create special chapters for these different types at some point.
+We may need to cover other types of UI components in our Standard. These types are console/terminal applications, desktop applications, video games, virtual/augmented reality software, wearable devices, and voice-activated systems. The world of Human-Machine-Interface HMI is evolving so rapidly in the age of the metaverse that we might need to create special chapters for these different types at some point.
\ No newline at end of file
From 0f5263444f80f4b6f057617af441f9e0c24bb3bc Mon Sep 17 00:00:00 2001
From: Greg Hays
Date: Thu, 18 Apr 2024 17:00:03 -0700
Subject: [PATCH 17/18] CODE RUB: Web Applications 3.2.1 Grammer Check
---
.../3.2.1 Web Applications.md | 114 +++++++++---------
1 file changed, 57 insertions(+), 57 deletions(-)
diff --git a/3. Exposers/3.2 User Interfaces/3.2.1 Web Applications/3.2.1 Web Applications.md b/3. Exposers/3.2 User Interfaces/3.2.1 Web Applications/3.2.1 Web Applications.md
index c72b8db..0b40d03 100644
--- a/3. Exposers/3.2 User Interfaces/3.2.1 Web Applications/3.2.1 Web Applications.md
+++ b/3. Exposers/3.2 User Interfaces/3.2.1 Web Applications/3.2.1 Web Applications.md
@@ -1,9 +1,9 @@
# 3.2.1 Web Applications
## 3.2.1.0 Introduction
-Web applications are the most common type of exposer components today. They are much easier to use than other known exposer UI components in the software industry. The web software market is also much easier for an engineer to publish and update than mobile applications, making it quite attractive for newer engineers. But more importantly, web applications have a much more diverse set of technologies than mobile applications.
+Web applications are the most common type of exposer components today. They are much easier to use than other known exposer UI components in the software industry. The web software market is also much easier for engineers to publish and update than mobile applications, making it attractive for newer engineers. But more importantly, web applications have a much more diverse set of technologies than mobile applications.
-In this chapter, we will use Blazor technology to demonstrate implementing The Standard principles for web applications. But as I previously mentioned, The Standard is technology-agnostic, meaning it can apply to any web technology without any issues.
+This chapter will use Blazor technology to demonstrate implementing the Standard principles for web applications. However, as mentioned previously, the Standard is technology-agnostic, meaning it can be applied to any web technology without issues.
## 3.2.1.1 On the Map
Web applications are usually set at the other end of any system. They are the terminals that humans use to interact with the system. Let's take a look at where they are located on the map:
@@ -14,20 +14,20 @@ Web applications are usually set at the other end of any system. They are the te
-As shown above, web applications are somewhat similar to core APIs, except that they have a different group of components in terms of visualization, such as Pages, Components, and Bases. There's an intersection between two main flows in every web application. The presentation flow and the data/business flow. Depending on where a web application lives in terms of high-level architecture, its location determines whether its backend (BFF or Backend of Frontend) is a business flow or just data flow. Let's discuss these details in the characteristics section of this chapter.
+As shown above, web applications are similar to core APIs, except that they have a different group of components in terms of visualization, such as Pages, Components, and Bases. There's an intersection between two main flows in every web application. The presentation flow and the data/business flow. Depending on where a web application lives in terms of high-level architecture, its location determines whether its backend (BFF or Backend of Frontend) is a business flow or just data flow. Let's discuss these details in the characteristics section of this chapter.
## 3.2.1.2 Charactristics
-Brokers, Services, View Services, Bases, Components, and Pages. Web applications are usually six essential components. Since we've already discussed the data flow components in the Services portion of The Standard, in this section, we will discuss the UI aspect (Bases, Components, and Pages) with a slight detail about view services.
+Brokers, Services, View Services, Bases, Components, and Pages. Web applications usually have six essential components. Since we've already discussed the data flow components in the Services portion of The Standard, this section will discuss the UI aspect (Bases, Components, and Pages) with a slight detail about view services.
Let's discuss these characteristics here.
### 3.2.1.2.0 Anatomy
-UI components consist of base components, core components and pages. They all separate the responsibility of integration, rendering, and routing users to a particular UI functionality.
+UI components consist of base components, core components, and pages. They all separate the responsibility of integration, rendering, and routing users to a particular UI functionality.
Let's talk about these types in detail.
#### 3.2.1.2.0.0 Base Component
-Base components are just like brokers. They are wrappers around native or external UI components. Their primary responsibility is to abstract away any hard dependency on non-local UI capability. For instance, let's say we want to offer the capability to create text boxes for data insertion/capture. The native `` tag could offer this capability. But exposing or leveraging this tag in our core UI components is dangerous. Because it creates a hard dependency on non-abstract UI components, if we decide to use some 3rd party UI component at any point in time, we would need to change these native `` tags across all the components that use them. That's not an optimum strategy.
+Base components are just like brokers. They are wrappers around native or external UI components. Their primary responsibility is to abstract away any hard dependency on non-local UI capability. Let's say we want to offer the ability to create text boxes for data insertion/capture. The native `` tag could offer this capability. However, exposing or leveraging this tag in our core UI components is dangerous. Because it creates a hard dependency on non-abstract UI components, if we decide to use some 3rd party UI component at any point, we would need to change these native `` tags across all the components that use them. That's not an optimum strategy.
Let's take a look at a visualization for base component functionality:
@@ -37,7 +37,7 @@ Let's take a look at a visualization for base component functionality:
-As seen in the above example, base components will wrap an external or native UI component and then expose APIs to seamlessly and programmatically interact with that component. Occasionally, these APIs will represent parameters, functions, or delegates to interact with the component based on the business flow.
+As seen in the above example, base components will wrap an external or native UI component and then expose APIs to seamlessly and programmatically interact with that Component. Occasionally, these APIs will represent parameters, functions, or delegates to interact with the Component based on the business flow.
##### 3.2.1.2.0.0.0 Implementation
Let's take a look at a simple Base component for solving this problem:
@@ -57,7 +57,7 @@ public partial class TextBoxBase : ComponentBase
}
```
-In the code above, we wrapped the `` tag with our base component `TextBoxBase` and offered an input parameter `Value` to be passed into that component so it can pass it down to the native UI element. Additionally, we provided a public function `SetValue` to allow for programmatically mimicking the user's behavior to test drive the consuming component of this base element.
+In the code above, we wrapped the `` tag with our base component `TextBoxBase` and offered an input parameter `Value` to be passed into that Component so it can pass it down to the native UI element. Additionally, we provided a public function, `SetValue` to allow for programmatically mimicking the user's behavior to test drive the consuming Component of this base element.
##### 3.2.1.2.0.0.1 Utilization
Now, when we try to leverage this base component at the core components level, we can call it as follows:
@@ -69,14 +69,14 @@ Now, when we try to leverage this base component at the core components level, w
The `@ref` aspect will allow the backend code to interact with the base component programmatically behind the scenes to call any existing functionality.
##### 3.2.1.2.0.0.2 Restrictions
-Components can only use base components. Pages may not use them, and other Base components may not use them. But more importantly, it's preferred those base components would only wrap around one and only one non-local component.
+Components can only use base components. Pages may not use them, and other Base components may not use them. But more importantly, it's preferred those base components would only wrap around one and only one non-local Component.
And just like Brokers, Base Components do not have any business logic. They don't handle exceptions, do any calculations, or any form of sequential, iterative, or selective business logic operations. These operations are either data-based, where they belong to view services and downstream APIs, or UI-based, where they belong to Core Components.
Base components also don't handle exceptions, they don't throw their exceptions, and they don't perform any validations.
#### 3.2.1.2.0.1 Core Component
-Core components are just like services in the data flow. They are test-driven, but they are also restricted with one and only one dependency at all times. Core components leverage Base components, to perform a business-specific flow. They are less generic than Base components because they orchestrate and communicate with a very particular service for the data flow.
+Core components are just like services in the data flow. They are test-driven, but they are also restricted to one and only one dependency at all times. Core components leverage Base components to perform a business-specific flow. They are less generic than Base components because they orchestrate and communicate with a very particular service for the data flow.
Here's a visualization of core components architecture:
@@ -86,20 +86,20 @@ Here's a visualization of core components architecture:
-In a way, core components are UI and Data components orchestrators. They will leverage one or many Base components to construct a business-specific flow such as a student registration form, then send the signal to view services to persist that data and return responses or report errors.
+Core components are UI and Data component orchestrators. They will leverage one or many Base components to construct a business-specific flow, such as a student registration form, then send the signal to view services to persist that data and return responses or report errors.
-Core Components are three main parts. Elements, Styles and Actions. Let's discuss these parts here:
+Core Components are three main parts. Elements, Styles, and Actions. Let's discuss these parts here:
#### 3.2.1.2.0.1.0 Elements
-Elements are mainly the markup pieces you find in the `.razor` file in any component. These elements should always be of Base Components. They are the skeleton of any Core Component. These Elements may or may not expose sub-routines such as a Button Click or a reactionary routine such as Button color change on hover and so on.
+Elements are mainly the markup pieces you find in the `.razor` file in any component. These elements should always be Base Components. They are the skeleton of any Core Component. These Elements may or may not expose sub-routines, such as a Button Click, or a reactionary routine, such as a Button color change on hover, and so on.
-Elements can be tested trough three main ways. Existence, Properties and Actions.
+Elements can be tested in three main ways. Existence, Properties, and Actions.
#### 3.2.1.2.0.1.0.0 Existence
-First and foremost, we need to ensure the element is loaded and is present on the screen. This can be done in three different ways. Either by property assignment, searching by id, or searching for all types. Here's some examples:
+First and foremost, we need to ensure the Element is loaded and is present on the screen. This can be done in three different ways. Either by property assignment, searching by id, or searching for all types. Here are some examples:
#### 3.2.1.2.0.1.0.0.0 Property Assignment
-Every Component should have a corresponding property that gets attached to the Element at runtime. For instnace, assume we have a `StudentRegistrationComponent` as follows:
+Every Component should have a corresponding property attached to the Element at runtime. For instance, assume we have a `StudentRegistrationComponent` as follows:
```csharp
public class StudentRegistrationComponent: ComponentBase
@@ -107,7 +107,7 @@ public class StudentRegistrationComponent: ComponentBase
public TextBoxBase NameTextBox {get; set;}
}
```
-In the above code, we defined `NameTextBox` of the same type as the Base Component that will be attached to it. Once that property is define, we will need to write a failing test that verifies that this Element exists as follows:
+In the above code, we defined `NameTextBox` as the same type as the Base Component that will be attached to it. Once that property is defined, we will need to write a failing test that verifies that this Element exists as follows:
```csharp
public void ShouldLoadNameTextBox()
@@ -122,17 +122,17 @@ public void ShouldLoadNameTextBox()
}
```
-The above test will fail. That's simply because there's no markup that corresponds to the `NameTextBox` property on rendering-time. Let's make this test pass by changing the markup in `StudentRegistrationComponent.razor` as follows:
+The above test will fail. That's simply because no markup corresponds to the `NameTextBox` property on rendering-time. Let's make this test pass by changing the markup in `StudentRegistrationComponent.razor` as follows:
```html
```
-Now our test will pass. That's simply because the property gets dynamically instantiatied as render-time once the page loads.
+Our test will now pass. That's simply because the property is dynamically instantiated at render time once the page loads.
#### 3.2.1.2.0.1.0.0.1 Searching by Id
-Sometimes Property Assignment is not an option. There are scenarios where components load dynamically a set of nested components that we may not have access to them in design-time. In this case, searching by Id is our best option to ensure we have the right component in hand.
+Sometimes, Property Assignment is not an option. There are scenarios where components load dynamically a set of nested components that we may not have access to at design time. In this case, searching by ID is our best option to ensure we have the right Component in hand.
-Here's an example. Assume we have a list of components that loads dynamically by being given an list of students. We use the student object `Id` as an identifier for every component. Our code looks as follows:
+Here's an example. Assume we have a list of components that loads dynamically by being given a list of students. We use the student object `Id` as an identifier for every Component. Our code looks as follows:
```csharp
public partial StudentListComponent : ComponentBase
@@ -146,7 +146,7 @@ public partial StudentListComponent : ComponentBase
}
```
-On load - we call a view service to pull asynchronously a list of all students. Now, we need to take that list and dynamically load a nested view for each student. Let's write a failing test for this first:
+On load - we call a view service to pull a list of all students asynchronously. We need to take that list and dynamically load a nested view for each student. Let's write a failing test for this first:
```csharp
public void ShouldLoadStudentsAsync()
@@ -179,7 +179,7 @@ public void ShouldLoadStudentsAsync()
}
```
-In the above tests, we looked for components that match the student `Id` then we verified they exists. let's make that test pass as follows:
+In the above tests, we looked for components that matched the student `ID`, and verified they existed. Let's make that test pass as follows:
```html
@@ -187,10 +187,10 @@ In the above tests, we looked for components that match the student `Id` then we
>
```
-We use `PrettyBlazor` library to markup our iteration behavior with `` tag. Now our tests should pass by finding and verifying each created component once they load on the screen.
+We use the `PrettyBlazor` library to markup our iteration behavior with the `` tag. Now, our tests should pass by finding and verifying each created Component once they load on the screen.
#### 3.2.1.2.0.1.0.0.2 General Search
-There are scenarios where we don't have a key or an `Id` to find the element. We expect a list of "things" to load on the screen without any data or information on them. In which case we are going to have to resolve to General search mechanism where we rely on the count of the rendered components against the count that we expect as follows:
+There are scenarios where we don't have a key or an `Id` to find the Element. We expect a list of "things" to load on the screen without any data or information on them. In this case, we are going to have to resolve the General search mechanism where we rely on the count of the rendered components against the count that we expect as follows:
```csharp
public void ShouldLoadManyElements()
@@ -209,12 +209,12 @@ public void ShouldLoadManyElements()
}
```
-The Standard advices against having unknown-typed components like these loaded on the screen as they give a lot less control for engineers to know preceisely what's going on. But in gaming scenarios this could be the only option.
+The Standard advises against having unknown-typed components like these loaded on the screen as they give engineers much less control over what's going on. But in gaming scenarios, this could be the only option.
#### 3.2.1.2.0.1.0.1 Properties
-The other aspect that we consier when developing Core Components are the properties on these very components. These could properties on the Core Component itself or the Base Component. For instnace, we want to verify that a `LabelBase` component has the property information on it such as `First Name` or `Last Name`.
+The other aspect we consider when developing Core Components is their properties. These could be properties of the Core Component itself or the Base Component. For instance, we want to verify that a `LabelBase` component has property information such as `First Name` or `Last Name`.
-Let's start by setting up for a test.
+Let's start by setting up a test.
```csharp
public class StudentRegistrationComponent: ComponentBase
@@ -250,14 +250,14 @@ The test here will verify the label will always have the property value `First N
```html
```
-By simply doing that, now we verified the Element exists with the right Property or information on it.
+We verified that the Element exists with the right property or information by simply doing that.
-The same thing applies to properties on the Core Component itself, like having view models that load on initialization then get assigned to certain base components. We will show that example shortly.
+The same thing applies to properties on the Core Component itself, like having view models that load on initialization and then get assigned to certain base components. We will show that example shortly.
#### 3.2.1.2.0.1.0.2 Actions
-Testing actions is one of the most important parts of testing any Element. We want to ensure that when a button is clicked a certain action is triggered. These actions can also change a property, create a new element or simply trigger another action. The possibilities there are as many as the very pattern of Tri-Nature itself.
+Testing actions is one of the most important parts of testing any Element. We want to ensure that a certain action is triggered when a button is clicked. These actions can also change a property, create a new element, or trigger another action. There are as many possibilities as there are in the very pattern of Tri-Nature itself.
-Let's assume our `StudentRegistrationComponent` is supposed to trigger a call for a `StudentViewService` on Button click event. Let's start with a simple failing test as follows:
+Let's assume our `StudentRegistrationComponent` is supposed to trigger a call for a `StudentViewService` on the Button click event. Let's start with a simple failing test as follows:
```csharp
[Fact]
@@ -283,7 +283,7 @@ public void ShouldSubmitStudent()
}
```
-In the above test, we propose making a component implementation that will trigger calling `AddStudentViewAsync` from a `StudentViewService` once the button is clicked. This implies that there's a correlation between clicking a button and triggering an action.
+In the above test, we propose implementing a component that will trigger calling `AddStudentViewAsync` from a `StudentViewService` once the button clicks. This implies a correlation between clicking a button and triggering an action.
Let's write an implementation for this behavior. On the component code side, we should have the following function as follows:
@@ -310,17 +310,17 @@ The above code implements a `RegisterStudentAsync` function that will pass `Stud
OnClick=@RegisterStudentAsync />
```
-In the above markup, we attached the `SubmitButton` property to the Element, then we also passed `OnClick` event with the `RegisterStudentAsync` routine - so when the button is clicked the routine will trigger and we should be able to verify it in our unit tests.
+In the above markup, we attached the `SubmitButton` property to the Element and passed the `OnClick` event with the `RegisterStudentAsync` routine. When the button is clicked, the routine will trigger, and we should be able to verify it in our unit tests.
#### 3.2.1.2.0.1.1 Styles
-Core Components also carries more than just elements. They carry certain styles to ensure the user-experience is fitting the type of business they're trying to accomplish. While Elements or Base Components can also carry their own styles, it's important to realize that styles are better fit into Core Components to ensure the modularity of Base Components to fit whatever style is enforced by Core Components.
+Core Components also carry more than just elements. They have certain styles to ensure the user experience fits the type of business they're trying to accomplish. While Elements or Base Components can also carry their own styles, it's important to realize that styles are better suited to Core Components to ensure the modularity of Base Components to fit whatever style is enforced by Core Components.
-Testing styles isn't a very common thing in the UI world. Especially when it comes to test-driving styles in C# as code. The Standard enforces the idea of leveraging the same programming language (when possible) across all different aspects of a project. That also includes infrastructure, pipelines, styles, actions and everything else in between. This principle ensures that in addition to having standardized patterns that the learning curve for engineers working on any project is as minimum as possible.
+Testing styles are rare in the UI world. Especially when it comes to test-driving styles in C# as code. The Standard enforces the idea of leveraging the same programming language (when possible) across all different aspects of a project. That also includes infrastructure, pipelines, styles, actions, and everything else in between. This principle ensures that the learning curve for engineers working on any project is as minimal as possible, in addition to having standardized patterns.
-For testing styles on Core Components, we are going to leverage a library called SharpStyles. The library translates C# code into CSS styles flawlessly.
+We will leverage a library called SharpStyles to test styles on Core Components. The library flawlessly translates C# code into CSS styles.
-Let's consider a scenario where we want our `SubmitButton` on the registration component above to have a blue color for it's background. Let's add a `Style` property on our component as follows:
+Let's consider a scenario where we want our `SubmitButton` on the registration component above to have a blue color for its background. Let's add a `Style` property on our Component as follows:
First of all, we need to create a C# model with the identifiers we would like to have in our CSS style as follows:
@@ -332,7 +332,7 @@ public class StudentRegistrationStyle : SharpStyle
}
```
-This model will get translated into a CSS class called `submit-button` later when we start rendering the component. Let's leverage this new model in our component as follows:
+This model will be translated into a CSS class called `submit-button` when we start rendering the Component. Let's leverage this new model in our Component as follows:
```csharp
public partial class StudentRegistrationComponent : ComponentBase
@@ -343,7 +343,7 @@ public partial class StudentRegistrationComponent : ComponentBase
}
```
-Now that we have a new property for styles, we just need to hook this property to a markup that will transform these styles/models into pure native CSS. We will need to create a `StyleBase` Element/Base Component to take care of the abstraction side for us - so we don't have any hard dependency on SharpStyle library as follows:
+Now that we have a new property for styles, we need to hook this property to a markup that will transform these styles/models into pure native CSS. We will need to create a `StyleBase` Element/Base Component to take care of the abstraction side for us - so we don't have any hard dependency on the SharpStyle library as follows:
The markup side of that will look as follows:
@@ -403,7 +403,7 @@ public void ShouldRenderContainerWithStyles()
}
```
-With a failing test like this, we can now go ahead and start writing an implementation to satisfy the conditions for this test as follows.
+With a failing test like this, we can now start writing an implementation to satisfy the following conditions for this test.
On the component code side, let's generate the expected style object:
@@ -429,7 +429,7 @@ public partial class StudentRegistrationComponent : ComponentBase
}
```
-Then, on the markup side let's attach all the properties to their respective Elements as follows:
+Then, on the markup side, let's attach all the properties to their respective Elements as follows:
```html
```
-With that our tests should pass, and this would be a quick demonstration of a standardized way of testing styles for UI components.
+Our tests should pass, and this would be a quick demonstration of a standardized way of testing styles for UI components.
#### 3.2.1.2.0.1.2 Actions
-Actions in Core Components are very similar to Actions in Base Components or Elements. it is important however to understand that every Action can easily be verified by either changing a property or style, creating other components or simply triggering other actions. It can also be a combination of one or many of the aforementioned. For instance, a submit button could change the properties on existing elements by making them disabled while triggering a call/action to another service. It should all be verifiable as we discussed above.
+Actions in Core Components are very similar to Actions in Base Components or Elements. It is important, however, to understand that every action can easily be verified by either changing a property or style, creating other components, or simply triggering other actions. It can also be a combination of one or many of those above. For instance, a submit button could change the properties of existing elements by making them disabled while triggering a call/action to another service. It should all be verifiable, as we discussed above.
##### 3.2.1.2.0.1.0 Full Implementation & Tests
@@ -474,7 +474,7 @@ public partial class StudentRegistrationComponent : ComponentBase
}
}
```
-The above code shows the different types of properties within any given component. The dependency view service maps raw API models/data into consumable UI models. And the `State` determines whether a component should be `Loading`, `Content` or `Error`. The data view model binds incoming input to one unified model, `StudentView`. And the last three are Base level components used to construct the form of registration.
+The above code shows the different types of properties within any given component�the dependency view service maps raw API models/data into consumable UI models. The `State` determines whether a component should be `Loading`, `Content`, or `Error`. The data view model binds incoming input to one unified model, `StudentView`. The last three are base-level components used to construct the form of registration.
Let's take a look at the markup side of the core component:
@@ -535,7 +535,7 @@ public void ShouldRenderComponent()
}
```
-The test above will verify that all the components are assigned a reference property, and no external dependency calls have been made. It validates that the code in the `OnIntialized` function on the component level is validated and performing as expected.
+The test above will verify that all the components are assigned a reference property and that no external dependency calls have been made. It will also validate that the code in the `OnIntialized` function on the component level is validated and performing as expected.
Now, let's take a look at the submittal code validations:
@@ -593,19 +593,19 @@ public void ShouldSubmitStudentAsync()
}
```
-The test above validates that on submittal, the student model is populated with the data set programmatically through the base component instance; and verifies all these components are rendered on the screen before end-users by validating each base component an assigned instance on runtime or render-time.
+The test above validates that on submittal, the student model is populated with the data set programmatically through the base component instance and verifies all these components are rendered on the screen before end-users by validating each base component an assigned instance on runtime or render-time.
##### 3.2.1.2.0.1.1 Restrictions
-Core components have similar restrictions to Base components because they cannot call each other at that level. There's a level of Orchestration Core Components that can combine multiple components to exchange messages but they don't render anything on their own the same way Orchestration services delegate all the work to their dependencies.
+Core components have similar restrictions to Base components because they cannot call each other at that level. There's a level of Orchestration Core Components that can combine multiple components to exchange messages. Still, they don't render anything independently, the same way Orchestration services delegate all the work to their dependencies.
-One view service corresponds to one core component, which renders one and only one view model. But Core components are also not allowed to call more than one view service. And in that, they always stay true to the view model.
+One view service corresponds to one core component, which renders one and only one view model. However, core components are also not allowed to call more than one view service. And in that, they always stay true to the view model.
-View services may do their own orchestration-level work in an extremely complex flow, but we recommend keeping things at a flat level. These very same view services perform nothing but mapping and adding audit fields in addition to basic structural validations.
+View services may do their orchestration-level work in an extremely complex flow, but we recommend keeping things at a flat level. These same view services perform nothing but mapping and adding audit fields and basic structural validations.
#### 3.2.1.2.0.2 Pages
-In every web application, pages are a fundamental mandatory container component that needs to exist so end-users can navigate to them. Pages mainly hold a route, communicate a parameter from that route and combine core-level components to represent a business value.
+In every web application, pages are a fundamental mandatory container component that needs to exist so end-users can navigate to them. Pages mainly hold a route, communicate a parameter from that route, and combine core-level components to represent a business value.
-A good example of pages is dashboards. Dashboard pages contain multiple components like tiles, notifications, headers, and sidebars with references to other pages. Pages don't hold any business logic in and of themselves, but they delegate all route-related operations to their child components.
+An excellent example of a page is a dashboard. Dashboard pages contain multiple components, such as tiles, notifications, headers, and sidebars, that reference other pages. Pages don't hold any business logic in and of themselves, but they delegate all route-related operations to their child components.
Let's take a look at a simple page implementation:
@@ -617,18 +617,18 @@ Let's take a look at a simple page implementation:
```
-Pages are much simpler than core or base components. They don't require unit testing, and they don't necessarily need a backend code. They purely reference their components without reference (unless required) and help serve that content when navigating via a route.
+Pages are much simpler than core or base components. They don't require unit testing and don't necessarily need backend code. They purely reference their components without reference (unless required) and help serve that content when navigating via a route.
#### 3.2.1.2.0.3 Unobtrusiveness
-For all UI components, it's a violation to include code from multiple technologies/languages on the same page. For instance, a CSS style code, C# code, and HTML markup cannot all exist in the same file. They need to separate into their own files.
+It's a violation to include code from multiple technologies/languages on the same page for all UI components. For instance, CSS style code, C# code, and HTML markup cannot all exist in the same file. They need to be separated into their own files.
-The unobtrusiveness rule helps prevent cognitive pollution for engineers building UI components and makes the system much easier to maintain. That's why every component can nest its files beneath it if the IDE/Environment used for development allows for partial implementations as follows:
+The unobtrusiveness rule helps prevent cognitive pollution for engineers building UI components and makes the system much easier to maintain. That's why every Component can nest its files beneath it if the IDE/Environment used for development allows for partial implementations as follows:
- StudentRegisterationComponent.razor
- StudentRegisterationComponent.razor.cs
- StudentRegisterationComponent.razor.css
-The node file here, `.razor` file, has all the markup needed to kick off the component's initialization. At the same time, both nested files are supporting files for simple UI logic code and styling. This level of organization (especially in Blazor) doesn't require any referencing for these nested/support files but may not be the case for other technologies, so I urge engineers to do their best to fit that model/Standard.
+The node file here, `.razor`, has all the markup needed to kick off the Component's initialization. At the same time, both nested files are supporting files for simple UI logic code and styling. This level of organization (especially in Blazor) doesn't require any referencing for these nested/support files. Still, this may not be the case for other technologies, so I urge engineers to do their best to fit that model/Standard.
#### 3.2.1.2.0.4 Organization
All UI components are listed under a Views folder in the solution. Let's take a look:
@@ -638,4 +638,4 @@ All UI components are listed under a Views folder in the solution. Let's take a
- Components
- Pages
-This tri-nature conforming organization should make it easier to shift reusable components and make it also easier to find these components based on their categories. I will leave it up to the engineers' preference to determine whether to break down these components further by folders/namespaces or leave them all at the same level, given the nesting is in place.
\ No newline at end of file
+This tri-nature conforming organization should make it easier to shift reusable components and make it also easier to find these components based on their categories. Given the nesting is in place, I will leave it up to the engineers' preference to break down these components further by folders/namespaces or leave them all at the same level.
\ No newline at end of file
From 3e2be189372ec27a9d38169452f6a59e8f418509 Mon Sep 17 00:00:00 2001
From: Greg Hays
Date: Thu, 18 Apr 2024 18:17:26 -0700
Subject: [PATCH 18/18] CODE RUB: Processings 2.2 Grammer Check Reprocess
---
.../2.2 Processings/2.2 Processings.md | 68 +++++++++----------
1 file changed, 34 insertions(+), 34 deletions(-)
diff --git a/2. Services/2.2 Processings/2.2 Processings.md b/2. Services/2.2 Processings/2.2 Processings.md
index b1025a7..cd78d1a 100644
--- a/2. Services/2.2 Processings/2.2 Processings.md
+++ b/2. Services/2.2 Processings/2.2 Processings.md
@@ -2,7 +2,7 @@
## 2.2.0 Introduction
-Processing services are the layer where a higher order of business logic is implemented. They may combine (or orchestrate) two primitive-level functions from their corresponding foundation service to introduce a newer functionality. They may also call one primitive function and change the outcome with a little bit of added business logic. And sometimes processing services are there as a pass-through to introduce balance to the overall architecture.
+Processing services are the layer where a higher order of business logic is implemented. They may combine (or orchestrate) two primitive-level functions from their corresponding foundation service to introduce newer functionality. They may also call one primitive function and change the outcome with a little bit of added business logic. Sometimes, processing services are there as a pass-through to introduce balance to the overall architecture.
Processing services are optional, depending on your business need - in a simple CRUD operations API, processing services and all the other categories of services beyond that point will cease to exist as there is no need for a higher order of business logic at that point.
@@ -27,11 +27,11 @@ TryCatch(async () =>
});
```
-Processing services make Foundation services nothing but a layer of validation on top of the existing primitive operations. This means that Processing services functions are beyond primitive, and they only deal with local models as we will discuss in the upcoming sections.
+Processing services make Foundation services nothing but a layer of validation on top of the existing primitive operations. This means that Processing services functions are beyond primitive and only deal with local models, as we will discuss in the upcoming sections.
## 2.2.1 On The Map
-When used, Processing services live between foundation services and the rest of the application. they may not call Entity or Business brokers, but they may call Utility brokers such as logging brokers, time brokers, and any other brokers that offer supporting functionality and are not specific to any particular business logic. Here's a visual of where processing services are located on the map of our architecture:
+Processing services live between foundation services and the rest of the application when used. They may not call Entities or Business brokers. Still, they may call Utility brokers such as logging brokers, time brokers, and any other brokers that offer supporting functionality and are not specific to any particular business logic. Here's a visual of where processing services are located on the map of our architecture:
@@ -39,12 +39,12 @@ When used, Processing services live between foundation services and the rest of
-On the right side of a Processing service lies all the non-local models and functionality, whether it's through the brokers or the models that the foundation service is trying to map into local models.
-On the left side of Processing services are pure local functionality, models, and architecture. Starting from the Processing services themselves, there should be no trace or track of any native or non-local models in the system.
+On the right side of a Processing service lies all the non-local models and functionality, whether through the brokers or the models, and the foundation service is trying to map them into local models.
+On the left side of Processing services are pure local functionality, models, and architecture. Starting from the Processing services, there should be no trace or track of any native or non-local models in the system.
## 2.2.2 Characteristics
-Processing services in general are combiners of multiple primitive-level functions to produce a higher-order business logic. but they have many more characteristics than just that, let's talk about those here.
+Processing services, in general, are combiners of multiple primitive-level functions to produce a higher-order business logic. But they have many more characteristics than just that; let's talk about those here.
### 2.2.2.0 Language
@@ -53,7 +53,7 @@ Usually, processing services combine two or more primitive operations from the f
#### 2.2.2.0.0 Functions Language
-At a glance, the Processing services language changes from primitive operations such as `AddStudent` or `RemoveStudent` to `EnsureStudentExists` or `UpsertStudent`. they usually offer more advanced business logic operations to support a higher-order functionality.
+The Processing services language changes from primitive operations such as `AddStudent` or `RemoveStudent` to `EnsureStudentExists` or `UpsertStudent`. They usually offer more advanced business logic operations to support higher-order functionality.
Here are some examples of the most common combinations a processing service may offer:
| Processing Operation | Primitive Functions |
@@ -63,27 +63,27 @@ Here are some examples of the most common combinations a processing service may
| VerifyStudentExists | RetrieveAllStudents |
| TryRemoveStudentAsync | RetrieveStudentById + RemoveStudentByIdAsync |
-As you can see, the combination of primitive functions processing services might also include adding an additional layer of logic on top of the existing primitive operation. For instance, `VerifyStudentExists` takes advantage of the `RetrieveAllStudents` primitive function, and then adds a boolean logic to verify whether the returned student by an Id from a query exists or not before returning a `boolean`.
+As you can see, the combination of primitive functions processing services might also include adding an additional layer of logic on top of the existing primitive operation. For instance, `VerifyStudentExists` takes advantage of the `RetrieveAllStudents` primitive function and then adds a boolean logic to verify the returned student by an Id from a query actually exists or not before returning a `boolean`.
#### 2.2.2.0.1 Pass-Through
-Processing services may borrow some of the terminology a foundation service may use. for instance, in a pass-through scenario, a processing service with be as simple as `AddStudentAsync`. We will discuss the architecture-balancing scenarios later in this chapter.
-Unlike Foundation services, Processing services are required to have the identifier `Processing` in their names. for instance, we say `StudentProcessingService`.
+Processing services may borrow some of the terminology a foundation service uses. For instance, in a pass-through scenario, a processing service could be as simple as `AddStudentAsync`. We will discuss the architecture-balancing scenarios later in this chapter.
+Unlike Foundation services, Processing services are required to have the identifier `Processing` in their names. For instance, we say `StudentProcessingService`.
#### 2.2.2.0.2 Class-Level Language
More importantly, Processing services must include the name of the entity that is supported by their corresponding Foundation service.
-For instance, if a Processing service is dependent on a `TeacherService`, then the Processing service name must be `TeacherProcessingService`.
+For instance, if a Processing service depends on a `TeacherService`, then the Processing service name must be `TeacherProcessingService`.
### 2.2.2.1 Dependencies
-Processing services can only have two types of dependencies. a corresponding Foundation service, or a Utility broker. That's simply because Processing services are nothing but an extra higher-order level of business logic, orchestrated by combined primitive operations on the Foundation level.
-Processing services can also use Utility brokers such as `TimeBroker` or `LoggingBroker` to support its reporting aspect. but it shall never interact with an Entity or Business broker.
+Processing services can only have two types of dependencies: a corresponding Foundation service or a Utility broker. That's simply because Processing services are nothing but an extra higher-order level of business logic orchestrated by combined primitive operations on the Foundation level.
+Processing services can also use Utility brokers such as `TimeBroker` or `LoggingBroker` to support their reporting aspect, but they shall never interact with an Entity or Business broker.
### 2.2.2.2 One-Foundation
-Processing services can interact with one and only one Foundation service. In fact, without a foundation service there can never be a Processing layer. and just like we mentioned above about the language and naming, Processing services take on the same entity name as their Foundation dependency.
-For instance, a processing service that handles higher-order business logic for students will communicate with nothing but its foundation layer, which would be `StudentService` for instance. That means that processing services will have one and only one service as a dependency in its construction or initiation as follows:
+Processing services can interact with only one Foundation service. In fact, without a foundation service, there can never be a Processing layer. And just like we mentioned above about the language and naming, Processing services take on the exact same entity name as their Foundation dependency.
+For instance, a processing service that handles higher-order business logic for students will communicate with nothing but its foundation layer, which would be `StudentService`. That means that processing services will have one and only one service as a dependency in its construction or initiation as follows:
```csharp
public class StudentProcessingService
@@ -99,8 +99,8 @@ However, processing services may require dependencies on multiple utility broker
### 2.2.2.3 Used-Data-Only Validations
-Unlike the Foundation layer services, Processing services only validate what it needs from its input. For instance, if a Processing service is required to validate a student entity exists, and its input model just happens to be an entire `Student` entity, it will only validate that the entity is not `null` and that the `Id` of that entity is valid. the rest of the entity is out of the concern of the Processing service.
-Processing services delegate full validations to the layer of services that is concerned with that which is the Foundation layer. here's an example:
+Unlike the Foundation layer services, Processing services only validate what they need from their input. For instance, if a Processing service is required to validate that a student entity exists, and its input model just happens to be an entire `Student` entity, it will only validate that the entity is not `null` and that the `Id` of that entity is valid. The rest of the entity is out of the Processing service's concern.
+Processing services delegate full validations to the layer of services that are concerned with that, which is the Foundation layer. Here's an example:
```csharp
public ValueTask UpsertStudentAsync(Student student) =>
@@ -121,21 +121,21 @@ TryCatch(async () =>
});
```
-Processing services are also not very concerned about outgoing validations except for what is going to be used within the same routine. For instance, if a Processing service is retrieving a model, and it's going to use this model to be passed to another primitive-level function on the Foundation layer, the Processing service will be required to validate that the retrieved model is valid depending on which attributes of the model it uses.
-For Pass-through scenarios, however, processing services will delegate the outgoing validation to the foundation layer.
+Processing services are also not very concerned about outgoing validations except for what they will use within the same routine. For instance, if a Processing service is retrieving a model and will use this model to be passed to another primitive-level function on the Foundation layer, the Processing service will be required to validate that the retrieved model is valid, depending on which attributes of the model it uses.
+However, processing services will delegate the outgoing validation to the foundation layer for Pass-through scenarios.
## 2.2.3 Responsibilities
-The processing service's main responsibility is to provide higher-order business logic. This happens along with the regular signature mapping and various use-only validations which we will discuss in detail in this section.
+Processing service's main responsibility is to provide higher-order business logic. This happens along with the regular signature mapping and various use-only validations, which we will discuss in detail in this section.
### 2.2.3.0 Higher-Order Logic
-Higher-order business logic are functions that are above primitive. For instance, `AddStudentAsync` function is a primitive function that does one thing and one thing only. But higher-order logic is when we try to provide a function that changes the outcome of a single primitive function like `VerifyStudentExists` which returns a boolean value instead of the entire object of the `Student`, or a combination of multiple primitive functions such as `EnsureStudentExistsAsync` which is a function that will only add a given `Student` model if and only if the aforementioned object doesn't already exist in storage. Here are some examples:
+Higher-order business logic are functions that are above primitive. For instance, `AddStudentAsync` function is a primitive function that does one thing and one thing only. But higher-order logic is when we try to provide a function that changes the outcome of a single primitive function like `VerifyStudentExists`, which returns a boolean value instead of the entire object of the `Student`, or a combination of multiple primitive functions such as `EnsureStudentExistsAsync` which is a function that will only add a given `Student` model if and only if the object mentioned above doesn't already exist in storage. Here are some examples:
#### 2.2.3.0.0 Shifters
-The shifter pattern in higher-order business logic is when the outcome of a particular primitive function is changed from one value to another. Ideally, a primitive type such as a `bool` or `int` is not a completely different type as that would violate the purity principle.
-For instance, in a shifter pattern, we want to verify if a student exists or not. We don't want the entire object, but just whether it exists in a particular system or not. Now, this seems like a case where we only need to interact with one and only one foundation service and we are shifting the value of the outcome to something else. Which should fit perfectly in the realm of the processing services. Here's an example:
+The shifter pattern in a higher-order business logic is when the outcome of a particular primitive function is changed from one value to another. Ideally, a primitive type such as a `bool` or `int` is not a completely different type as that would violate the purity principle.
+For instance, in a shifter pattern, we want to verify whether a student exists. We want only some objects, but we want to know whether they exist in a particular system. Now, this is a case where we only need to interact with one and only one foundation service, and we are shifting the value of the outcome to something else, which should fit perfectly in the realm of the processing services. Here's an example:
```csharp
public ValueTask VerifyStudentExists(Guid studentId) =>
@@ -150,7 +150,7 @@ TryCatch(async () =>
});
```
-In the snippet above, we provided a higher-order business logic, by returning a boolean value of whether a particular student with a given `Id` exists in the system or not. There are cases where your orchestration layer of services isn't really concerned with all the details of a particular entity but just knowing whether it exists or not as a part of an upper business logic or what we call orchestration.
+In the snippet above, we provided higher-order business logic by returning a boolean value indicating whether a particular student with a given `Id` exists in the system. There are cases where your orchestration layer of services isn't really concerned with all the details of a particular entity but just knows whether it exists or not as part of an upper business logic or what we call orchestration.
Here's another popular example of a processing services shifting pattern:
@@ -167,12 +167,12 @@ TryCatch(() =>
});
```
-In the example above, we provided a function to retrieve the count of all students in a given system. It's up to the designers of the system to determine whether to interpret a `null` value retrieved for all students to be an exception case that was not expected to happen or return a `0`; all depending on how they manage the outcome.
-In our case here we validate the outgoing data as much as the incoming, especially if it's going to be used within the processing function to ensure further failures do not occur for upstream services.
+In the example above, we provided a function to retrieve the count of all students in a given system. The system's designers determine whether to interpret a `null` value retrieved for all students as an exception case that was not expected to happen or return a `0`, depending on how they manage the outcome.
+In our case, we validate the outgoing data as much as the incoming data, especially if it will be used within the processing function to ensure further failures do not occur for upstream services.
#### 2.2.3.0.1 Combinations
-The combination of multiple primitive functions from the foundation layer to achieve a higher-order business logic is one of the main responsibilities of a processing service. As we mentioned before, some of the most popular examples are for ensuring a particular student model exists as follows:
+Combining multiple primitive functions from the foundation layer to achieve a higher-order business logic is one of the main responsibilities of a processing service. As we mentioned before, some of the most popular examples are for ensuring a particular student model exists as follows:
```csharp
public async ValueTask EnsureStudentExistsAsync(Student student) =>
@@ -194,22 +194,22 @@ TryCatch(async () =>
});
```
-In the code snippet above, we combined `RetrieveAll` with `AddAsync` to achieve a higher-order business logic operation. The `EnsureAsync` operation needs to verify something or an entity exists first before trying to persist it. The terminology around these higher-order business logic routines is very important. Its importance lies mainly in controlling the expectations of the outcome and the inner functionality. But it also ensures less cognitive resources from the engineers are required to understand the underlying capabilities of a particular routine.
-The conventional language used in all of these services also ensures that redundant capability will not be created mistakenly. For instance, an engineering team without any form of standard might create `TryAddStudentAsync` while already having an existing functionality such as `EnsureStudentExistsAsync` which does the same thing. The convention here with the limitation of the size of capabilities a particular service may have ensured redundant work shall never occur on any occasion.
-There are so many different instances of combinations that can produce a higher-order business logic, for instance, we may need to implement a functionality that ensures a student is removed. We use `EnsureStudentRemovedByIdAsync` to combine a `RetrieveById` and a `RemoveById` in the same routine. It all depends on what level of capabilities an upstream service may need to implement such a functionality.
+In the code snippet above, we combined `RetrieveAll` with `AddAsync` to achieve a higher-order business logic operation. The `EnsureAsync` operation needs to verify something or an entity exists first before trying to persist it. The terminology around these higher-order business logic routines is very important. Its importance lies mainly in controlling the expectations of the outcome and the inner functionality. However, it also ensures that engineers do not require fewer cognitive resources to understand the underlying capabilities of a particular routine.
+The conventional language used in all of these services also ensures that redundant capability will not be created mistakenly. For instance, an engineering team without any form of standard might create `TryAddStudentAsync` while already having an existing functionality such as `EnsureStudentExistsAsync`, which does the same thing. With the limitation of the size of capabilities a particular service may have, the convention here ensured redundant work should never occur on any occasion.
+There are so many different combinations that can produce higher-order business logic. For instance, we may need to implement functionality that ensures a student is removed. We use `EnsureStudentRemovedByIdAsync` to combine a `RetrieveById` and a `RemoveById` in the same routine. It all depends on what level of capabilities an upstream service may need to implement such a functionality.
### 2.2.3.1 Signature Mapping
-Although processing services operate fully on local models and local contracts, they are still required to map foundation-level services' models to their local models. For instance, if a foundation service is throwing `StudentValidationException` then processing services will map that exception to `StudentProcessingDependencyValidationException`. Let's talk about mapping in this section.
+Although processing services operate fully on local models and local contracts, they are still required to map foundation-level services' models to their own local models. For instance, if a foundation service throws `StudentValidationException` then processing services will map that exception to `StudentProcessingDependencyValidationException`. Let's talk about mapping in this section.
#### 2.2.3.1.0 Non-Exception Local Models
-In general, processing services are required to map any incoming or outgoing objects with a specific model of its own. But that rule doesn't always apply to non-exception models. For instance, if a `StudentProcessingService` is operating based on a `Student` model, and there's no need for a special model for this service, then the processing service may be permitted to use the same model from the foundation layer.
+In general, processing services are required to map any incoming or outgoing objects with a specific model of its own. But that rule only sometimes applies to non-exception models. For instance, if a `StudentProcessingService` is operating based on a `Student` model, and there's no need for a special model for this service, then the processing service may be permitted to use the exact same model from the foundation layer.
#### 2.2.3.1.1 Exception Models
-When it comes to processing services handling exceptions from the foundation layer, it is important to understand that exceptions in our Standard are more expressive in their naming conventions and their role than any other model. Exceptions here define the what, where, and why every single time they are thrown.
-For instance, an exception that's called `StudentProcessingServiceException` indicates the entity of the exception which is the `Student` entity. Then it indicates the location of the exception which is the `StudentProcessingService`. Lastly, it indicates the reason for that exception which is `ServiceException` indicating an internal error to the service that is not a validation or a dependency of nature that happened.
+When processing services handle exceptions from the foundation layer, it is important to understand that exceptions in our Standard are more expressive in their naming conventions and role than any other model. Exceptions here define the what, where, and why every single time they are thrown.
+For instance, an exception called `StudentProcessingServiceException` indicates the entity of the exception, which is the `Student` entity. Then, it indicates the location of the exception, which is the `StudentProcessingService`. Lastly, it indicates the reason for that exception, which is `ServiceException`, indicating an internal error to the service that is not a validation or a dependency of nature that happened.
Just like the foundation layer, processing services will do the following mapping to occurring exceptions from its dependencies:
| Exception | Wrap Inner Exception With | Wrap With | Log Level |