Azure DevOps build/release pipeline best practices and artifacts to reproduce it #949
Comments
P.S. My question is different from #804 ("plan to host these solutions in Azure DevOps?"); my question is about best practices/artifacts, not about where you will host this app. |
Also, the book has a phrase about this, but even the complementary guide doesn't mention "helm" or the deployment of a "microservices application" (multiple different apps under one umbrella). |
Hi @SychevIgor, yes, we are aware of this situation and someone is already working on an Azure DevOps pipeline showing best practices. We'll show it in upcoming versions of eShopOnContainers and the guidance eBooks. Hope this helps. cc/ @nishanil |
Thanks @SychevIgor for the detailed feedback. Yes, @eiximenis is working on the individual DevOps pipeline for microservices. We will fix the book and the documentation once we have them ready. |
Thanks, @nishanil, for confirming this is being worked on. I am currently struggling with this myself. I would like to put forward the main issues I'm facing, which are currently not solved in the DevOps setup (or maybe I'm just not seeing it - really looking forward to this part of the ebook!), and see if you guys (@eiximenis) are planning on covering this in the next edition of the book. CI: upon commit to master, a method to test and build only the affected projects:
Furthermore, |
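For reference, a minimal sketch of what such change detection could look like as an Azure Pipelines step, assuming one folder per microservice under src/Services (the folder layout and depth are assumptions, not taken from this thread):

```yaml
steps:
- script: |
    # Hypothetical sketch: list the services whose folders changed in the last commit.
    # Assumes each microservice lives in its own folder under src/Services and
    # that the checkout fetched enough history for HEAD~1 to exist.
    CHANGED_SERVICES=$(git diff --name-only HEAD~1 HEAD \
      | grep '^src/Services/' \
      | cut -d'/' -f3 \
      | sort -u)
    echo "Affected services: $CHANGED_SERVICES"
  displayName: Detect affected services (sketch)
```

A follow-up step could then build and test only those services, for example by queuing their individual pipelines.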
Hi @maurei, Regarding the monorepo, it's definitely not the recommended approach for a microservices architecture; however, for eShopOnContainers, as a showcase of architectural patterns, it does make sense, because it's easier to explore and work with. You can read more in issue #921. As for the first part of your question, is it really possible to detect the dependent microservices? I might be missing something, but how would you identify dependencies from integration events? |
We have a project in the common layer that contains all shared integration events, and microservices that use these events have a project reference to this project, so the dependencies should be identifiable by parsing the project (.csproj) files.

For integration events related to achieving data consistency across different microservices, we're using a generic integration event.

For more specific (domain-type is the correct term here?) integration events, these are still typically shared by at least two microservices, hence we still feel it makes sense to share them in a common project. About that: I've been reading through the issues and your discussions with @CESARDELATORRE about it, thanks for that, pretty helpful. In our case we're a small team (2-4 developers) and we feel that the extra overhead of maintaining such shared integration events through NuGet packages, and allowing for different versions of these integration events across the microservices, adds unwanted complexity. Forcing ourselves to use the "actual same" integration events everywhere might sometimes be a bit tedious when you need to change affected code in multiple places.
I'm looking forward to hearing your thoughts! |
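A rough sketch of how that project-reference parsing could be automated in a build step; the project name "IntegrationEvents" and the folder layout are placeholders, not the actual names from this setup:

```yaml
steps:
- script: |
    # Hypothetical sketch: list service folders whose .csproj files hold a
    # ProjectReference to the shared integration-events project.
    # "IntegrationEvents" is a placeholder for the real shared project name.
    grep -rl --include='*.csproj' 'ProjectReference.*IntegrationEvents' src/Services \
      | xargs -r -n1 dirname \
      | sort -u
  displayName: List services referencing the shared events project (sketch)
```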
OK, @maurei, it looks like you have all that's needed to identify dependencies between microservices, although it's quite clear it's a highly opinionated solution: it's nice and works for your team, but it's obviously tied to that way of structuring your app. For that very same reason you'll probably have to find a solution that works in your specific setup, as I'm guessing @eiximenis is working on general cases. But as usual, there's no one-size-fits-all solution, so if it works for your team, great. Hope this helps. |
Hi everyone! The 1st version of the separated builds has been released into the dev branch. We switched to YAML builds, so now we have the builds as code in the repo. You can find the build definitions in the build/azure-devops folder.

Anyway, currently all builds are triggered by every single push in our CI pipeline, which is far from the best option. About what @maurei said (using git diff): as we use Azure DevOps for the builds, maybe the use of path filters could be a better (meaning easier) option. If path filters are not enough we could start thinking about other options, including what @maurei said. In eShopOnContainers we have the (self-imposed) requirement to be in a single repo, and this comes with a price when creating CI/CD pipelines.

Also, one thing to notice (this is for @CESARDELATORRE and @nishanil): switching from one monolithic build to N builds has great benefits (and it's more microservice oriented), but as we are using the hosted agent we are paying a high price in the overall build time. Now each build is independent and runs on a fresh machine, so no Docker cache is reused between the builds, and every build has to download all the images, including the .NET Core SDK and runtime ones. I think that switching to a private build agent would allow us to reuse Docker images and improve the overall build time. Thoughts? |
@eiximenis Try triggers with path filters in YAML? https://docs.microsoft.com/en-us/azure/devops/pipelines/build/triggers?view=azure-devops&tabs=yaml#paths |
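For example, a per-service YAML build could limit its CI trigger with path filters along these lines (branch and folder names here are only illustrative):

```yaml
# Sketch of a path-filtered CI trigger for a single service build;
# the branch and folder names are placeholders.
trigger:
  branches:
    include:
    - dev
  paths:
    include:
    - src/Services/Ordering
    - src/BuildingBlocks
```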
Hi! |
Things that could be improved:
|
@eiximenis regarding your note about the absence of Docker layer caching when using fresh machines every time, I think this article provides a feasible approach to solving that issue. |
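One common way to get some of that caching back on hosted agents is to reuse a previously pushed image as a build cache; a sketch, where $(registry), the image name, and the Dockerfile path are placeholders:

```yaml
steps:
- script: |
    # Pull the image from the previous build (ignore the failure on the very first run)
    # and reuse its layers as a build cache on the fresh hosted agent.
    docker pull $(registry)/ordering.api:latest || true
    docker build \
      --cache-from $(registry)/ordering.api:latest \
      -t $(registry)/ordering.api:$(Build.BuildId) \
      -t $(registry)/ordering.api:latest \
      -f src/Services/Ordering/Ordering.API/Dockerfile .
    docker push $(registry)/ordering.api:$(Build.BuildId)
    docker push $(registry)/ordering.api:latest
  displayName: Build reusing layers from the previously pushed image (sketch)
```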
@maurei, that article looks quite interesting, will explore it in detail.

I've been doing some experiments to speed up building (@ ~22 min/build * build frequency = a lot of time). I have only tested this with docker-compose, and have taken the build time down to ~14 min (~36% less).

The general approach is to pre-restore commonly used packages into a packages folder that is part of the build context. That's kind of "priming the packages cache". But then copying the context at the beginning of the build takes longer.

To achieve this, the Dockerfile has to be something like this:

```Dockerfile
FROM microsoft/dotnet:2.2-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/dotnet:2.2-sdk AS publish
WORKDIR /src
COPY . .
WORKDIR /src/src/Services/Ordering/Ordering.API
RUN dotnet restore --packages /src/packages
RUN dotnet publish --no-restore -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Ordering.API.dll"]
```

BTW, I dropped the separate build stage, so instead of:

```Dockerfile
RUN dotnet build --no-restore -c Release -o /app
RUN dotnet publish --no-restore -c Release -o /app
```

there's now just the single dotnet publish step shown above. Could this help somehow? |
I don't like this approach because the Docker build process should be independent of any state of the Docker host. In fact (imho), if the build process happens inside a Docker container, the entire build process should happen in it: restoring dependencies is part of that build process, and therefore should happen inside the build container. Unless you consider that the package restore is not part of the build process (and in this case you should have |
Yeah @eiximenis, I agree on the host state point. What I like, or rather, don't dislike too much, is that it doesn't really matter if the packages folder is empty or outdated; in the worst case all packages will have to be restored, but I think it's kind of innocuous, because it might help but it "shouldn't" cause any harm.

There's another approach I was exploring with @WolfspiritM some time ago in #650. It's restoring packages for all projects before building, along with some tweaks in .dockerignore:

```Dockerfile
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet restore
WORKDIR /src/src/Services/Some.Project
RUN dotnet build --no-restore -c Release -o /app
```

This is pretty fast to build the whole project, but takes twice as long to build a single container. That's why I got to the "primed packages cache" solution, which is something in-between. |
Hi @SychevIgor, have you taken a look at https://github.com/dotnet-architecture/eShopOnContainers/tree/dev/build/azure-devops regarding this issue? Do you think it's good enough or are you missing something? |
@mvelosop it would be nice to add a test run, test code coverage results, and publishing of the results. Here is our example:
|
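A minimal sketch of what such test, coverage, and publish steps can look like in an Azure Pipelines YAML build; the test project glob and the coverlet/Cobertura collector are assumptions, not necessarily what this team uses:

```yaml
steps:
- task: DotNetCoreCLI@2
  displayName: Run unit tests and collect coverage
  inputs:
    command: test
    projects: '**/*UnitTest*.csproj'        # assumed test project naming
    # "XPlat Code Coverage" assumes the coverlet.collector package in the test projects.
    arguments: '--configuration Release --collect:"XPlat Code Coverage"'
    publishTestResults: true                # publishes the TRX results to the run

- task: PublishCodeCoverageResults@1
  displayName: Publish code coverage
  inputs:
    codeCoverageTool: Cobertura
    summaryFileLocation: '$(Agent.TempDirectory)/**/coverage.cobertura.xml'
```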
@mvelosop also, we separated the Docker image build and the Helm chart publish into 2 different build pipelines, because we don't want to increment image versions if the code didn't change, or chart versions if the Helm charts didn't change. It also speeds up the build process. In the release pipelines, we choose both the "new image" and the "helm chart" as triggers. We are actively using variables (for example for docker-compose files), because it's easier to change a variable than a build definition. In our team, we don't run tests unless it's a pull request or the develop branch, to speed up the process; test runs on the build agent are slow. |
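That kind of conditional test run can be expressed with a step condition, roughly like this (the branch name and project glob are assumptions):

```yaml
steps:
- task: DotNetCoreCLI@2
  displayName: Run tests only for PRs and the develop branch
  # Skipped on other branches to keep regular builds fast.
  condition: and(succeeded(), or(eq(variables['Build.Reason'], 'PullRequest'), eq(variables['Build.SourceBranch'], 'refs/heads/develop')))
  inputs:
    command: test
    projects: '**/*Test*.csproj'   # assumed test project naming
```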
@mvelosop we also removed the ".api" suffix from the image names, because later, in the release, we can't use the same naming convention (Helm can't create a deployment when the name contains "." or ":"). |
@mvelosop forgot about Helm charts... Because we are writing code not only for ourselves (outsourcing), it's a bad idea to store Helm charts only as build output in Azure DevOps. |
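One way to address that is to package the charts during the build and push them to an external chart repository in addition to keeping them as build artifacts; a sketch assuming a ChartMuseum-style endpoint, where $(chartPath), $(chartName), and $(chartRepoUrl) are placeholder variables:

```yaml
steps:
- script: |
    # Package the chart with the build number as the chart version (placeholder scheme),
    # then push it to a ChartMuseum-style repository so it can be consumed outside Azure DevOps.
    helm package $(chartPath) \
      --version $(Build.BuildNumber) \
      --destination $(Build.ArtifactStagingDirectory)
    curl --fail --data-binary "@$(Build.ArtifactStagingDirectory)/$(chartName)-$(Build.BuildNumber).tgz" \
      $(chartRepoUrl)/api/charts
  displayName: Package and publish Helm chart (sketch)

- task: PublishBuildArtifacts@1
  displayName: Also keep the packaged chart as a build artifact
  inputs:
    pathToPublish: $(Build.ArtifactStagingDirectory)
    artifactName: helm-charts
```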
Hi @SychevIgor, thanks for such detailed real-world tips! Highly valuable 😊 Pinging @nishanil and @eiximenis on this to check on the best way to incorporate them! Thanks! |
@mvelosop at Build 2019 MS announced unified pipelines, i.e. pipelines based on YAML. https://devblogs.microsoft.com/devops/whats-new-with-azure-pipelines/ Probably you will later be able to introduce CI+CD with them. |
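With those unified (multi-stage) YAML pipelines, CI and CD could eventually live in one file, along these lines; the stage contents, environment name, service name, and chart path are only illustrative:

```yaml
# Sketch of a single multi-stage YAML pipeline combining CI and CD.
stages:
- stage: Build
  jobs:
  - job: BuildImages
    pool:
      vmImage: ubuntu-latest
    steps:
    - script: docker-compose build ordering.api
      displayName: Build the service image

- stage: Deploy
  dependsOn: Build
  jobs:
  - deployment: DeployToAKS
    pool:
      vmImage: ubuntu-latest
    environment: eshop-aks            # placeholder environment
    strategy:
      runOnce:
        deploy:
          steps:
          - script: helm upgrade --install ordering-api $(chartPath)
            displayName: Deploy the Helm chart (placeholder path)
```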
@mvelosop I was wondering how long it currently takes to run the entire CI pipeline? |
First of all - thank you for this project and the books. It's cool to have it all together: instead of sending a team a million links each time, you can send just one.
But what is probably missing from the microservices book and the sample is the DevOps process for microservices - not the trivial case of deploying one app, but one out of 10+.
As "Evangelists" from Microsoft DevOps constantly talking - "Microservices can't be used without devops effectively". I'm agree with it, but this (devops) topic not covered in a book yet. as well as in a project.
Why is it an issue/gap? For example, in all the googlable samples (both in the Azure DevOps documentation and the DevOps sample generator v2) we can find only trivial examples of how to deploy 1 app (even if we are deploying the app to AKS via Helm). But in eShop you get 10+ different apps/microservices, and it's unclear how to trigger a deployment of the modified microservices only. There are also some other questions related to DevOps that can affect the overall adoption of microservices.
If you add to the book details about best practices, how those best practices can be implemented using Azure DevOps pipelines, and maybe even artifacts in the repo to reproduce the mentioned pipelines in our own projects, it will significantly simplify everything for us (customers / .NET devs), because we will have some basic references without weeks of googling/research and so on.
Maybe you could even add this project to the Azure DevOps demo generator. https://docs.microsoft.com/en-us/azure/devops/demo-gen/?view=azure-devops
What's your opinion about it?
@CESARDELATORRE @unaizorrilla @mvelosop