Fix CircleCI build and Golang linting #186

Merged · 7 commits · Dec 3, 2021
5 changes: 3 additions & 2 deletions .circleci/config.yml
@@ -7,7 +7,7 @@ jobs:
  build:
    working_directory: /go/pkg/mod/github.com/admiral
    docker:
-      - image: circleci/golang:1.11
+      - image: circleci/golang:1.16
Collaborator: Should we use 1.17?

    steps:
      - checkout
      - run:
@@ -54,6 +54,7 @@ jobs:
      MINIKUBE_WANTREPORTERRORPROMPT: false
      MINIKUBE_HOME: /home/circleci
      CHANGE_MINIKUBE_NONE_USER: true
+    resource_class: large
Collaborator: Curious to know why we need a large build container (assuming it is a build container).

Collaborator (author): This is for the integration tests. The container size was increased to match the resource limits I raised minikube to; see CircleCI's resource_class reference: https://circleci.com/docs/2.0/configuration-reference/#resourceclass

    steps:
      - attach_workspace:
          at: .
@@ -104,7 +105,7 @@ jobs:
          ./run.sh "1.16.8" "1.7.6" "../out"
  publish-github-release:
    docker:
-      - image: circleci/golang:1.11
+      - image: circleci/golang:1.16
    working_directory: /go/pkg/mod/github.com/admiral
    steps:
      - attach_workspace:
3 changes: 1 addition & 2 deletions .github/workflows/golang-ci-lint.yml
@@ -16,14 +16,13 @@ jobs:
        uses: golangci/golangci-lint-action@v2
        with:
-          # Required: the version of golangci-lint is required and must be specified without patch version: we always use the latest patch version.
-          version: v1.29
+          version: latest

          # Optional: working directory, useful for monorepos
          # working-directory: somedir

          # Optional: golangci-lint command line arguments.
          args: >-
            --out-format=github-actions
            --skip-dirs=admiral/pkg/client/clientset/versioned
            --tests=false
            --timeout=5m
24 changes: 18 additions & 6 deletions admiral/pkg/apis/admiral/routes/handlers.go
@@ -52,14 +52,16 @@ func (opts *RouteOpts) GetClusters(w http.ResponseWriter, r *http.Request) {
	} else {
		if len(clusterList) == 0 {
			message := "No cluster is monitored by admiral"
-			log.Printf(message)
+			log.Println(message)
			w.WriteHeader(200)
			out, _ = json.Marshal(message)
-			w.Write(out)
		} else {
			w.Header().Set("Content-Type", "application/json")
			w.WriteHeader(200)
-			w.Write(out)
		}
+		_, err := w.Write(out)
+		if err != nil {
+			log.Println("Failed to write message: ", err)
+		}
	}
}
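The handlers below repeat this same write-and-check pattern. As a hypothetical refactor (not part of this PR; `writeBody` and the demo route are invented for illustration), the check could be factored into a small helper:

```go
package main

import (
	"log"
	"net/http"
)

// writeBody writes b to w and logs a failed write instead of silently
// dropping the error — the same check this PR adds at each w.Write call.
func writeBody(w http.ResponseWriter, b []byte) {
	if _, err := w.Write(b); err != nil {
		log.Println("failed to write resp body: ", err)
	}
}

func main() {
	http.HandleFunc("/clusters", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusOK) // equivalent to the literal 200 used above
		writeBody(w, []byte(`["cluster-1"]`))
	})
	log.Fatalln(http.ListenAndServe(":8080", nil))
}
```

The `log.Printf(message)` to `log.Println(message)` change also matters for linting: passing a runtime string as a Printf format is a common lint warning, since any `%` in the message would be misread as a formatting directive.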
@@ -87,7 +89,11 @@ func (opts *RouteOpts) GetServiceEntriesByCluster(w http.ResponseWriter, r *http
		if len(serviceEntriesByCluster) == 0 {
			log.Printf("API call get service entry by cluster failed for clustername %v with Error: %v", clusterName, "No service entries configured for cluster - "+clusterName)
			w.WriteHeader(200)
-			w.Write([]byte(fmt.Sprintf("No service entries configured for cluster - %s", clusterName)))
+			_, err := w.Write([]byte(fmt.Sprintf("No service entries configured for cluster - %s", clusterName)))
+			if err != nil {
+				log.Println("Error writing body: ", err)
+			}
+
		} else {
			response = serviceEntriesByCluster
			out, err := json.Marshal(response)
@@ -97,7 +103,10 @@ func (opts *RouteOpts) GetServiceEntriesByCluster(w http.ResponseWriter, r *http
			} else {
				w.Header().Set("Content-Type", "application/json")
				w.WriteHeader(200)
-				w.Write(out)
+				_, err := w.Write(out)
+				if err != nil {
+					log.Println("failed to write resp body: ", err)
+				}
			}
		}
	}
@@ -134,7 +143,10 @@ func (opts *RouteOpts) GetServiceEntriesByIdentity(w http.ResponseWriter, r *htt
		} else {
			w.Header().Set("Content-Type", "application/json")
			w.WriteHeader(200)
-			w.Write(out)
+			_, err := w.Write(out)
+			if err != nil {
+				log.Println("failed to write resp body", err)
+			}
		}
	} else {
		log.Printf("Identity not provided as part of the request")
21 changes: 13 additions & 8 deletions admiral/pkg/apis/admiral/server/server.go
@@ -53,8 +53,6 @@ func (s *Service) Start(ctx context.Context, port int, routes Routes, filter []F
log.Printf("Starting admiral api server on port=%d", port)
log.Fatalln(s.server.ListenAndServe())

return

}

func (s *Service) newRouter(routes Routes, filter []Filter) *mux.Router {
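For context on the deleted `return`: `log.Fatalln` writes the log entry and then calls `os.Exit(1)`, so it never returns, and any statement after it is dead code of the kind golangci-lint reports. A standalone sketch (not from this repo) of the same shape:

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	srv := &http.Server{Addr: ":8080"}
	// ListenAndServe only returns on failure; log.Fatalln then logs the
	// error and calls os.Exit(1), so control never passes this line.
	log.Fatalln(srv.ListenAndServe())
	// return // unreachable — the kind of statement the linter flags
}
```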
@@ -86,12 +84,19 @@ func (s *Service) newRouter(routes Routes, filter []Filter) *mux.Router {
}

func waitForStop(s *Service) {
-	for {
-		select {
-		case <-s.ctx.Done():
-			log.Println("context done stopping server")
-			s.stop()
-			return
-		}
-	}
+	//for {
+	//	select {
+	//	case <-s.ctx.Done():
+	//		log.Println("context done stopping server")
+	//		s.stop()
+	//		return
+	//	}
+	//}
+	for range s.ctx.Done() {
+		log.Println("context done stopping server")
+		err := s.stop()
+		if err != nil {
+			log.Println("error stopping server: ", err)
+		}
+	}
}
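One Go subtlety in the replacement loop (an aside, not raised in the review thread): `for range ch` runs its body once per value received and exits when the channel is closed, and a context's `Done()` channel is only ever closed — nothing is sent on it — so a plain blocking receive is the conventional way to wait for cancellation. A minimal sketch under the same `Service`/`stop()` shape (the `Service` stand-in here is simplified for illustration):

```go
package main

import (
	"context"
	"log"
	"time"
)

// Service is a reduced stand-in for the server.Service in this PR,
// keeping only the two members waitForStop touches.
type Service struct {
	ctx  context.Context
	stop func() error
}

// waitForStop blocks until the context is cancelled: Done() returns a
// channel that is closed at cancellation, so the receive unblocks once
// and the shutdown path runs.
func waitForStop(s *Service) {
	<-s.ctx.Done()
	log.Println("context done stopping server")
	if err := s.stop(); err != nil {
		log.Println("error stopping server: ", err)
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	s := &Service{ctx: ctx, stop: func() error { return nil }}
	go func() { time.Sleep(100 * time.Millisecond); cancel() }()
	waitForStop(s) // returns shortly after cancel() fires
}
```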
2 changes: 1 addition & 1 deletion tests/create_cluster.sh
@@ -5,7 +5,7 @@
k8s_version=$1

if [[ $IS_LOCAL == "false" ]]; then
-  sudo -E minikube start --vm-driver=none --cpus 4 --memory 4096 --kubernetes-version=$k8s_version &> $HOME/minikube.log 2>&1 < /dev/null
+  sudo -E minikube start --vm-driver=none --cpus 6 --memory 6144 --kubernetes-version=$k8s_version &> $HOME/minikube.log 2>&1 < /dev/null
Collaborator: Is the increment in resources due to some new dependency?

Collaborator (author): The tests have higher demands now that gRPC clients were added. I noticed the tests were flaky and appeared to be failing due to timeouts. Experience running minikube has shown that it becomes unresponsive when it is resource-starved, so I increased its allocation to account for this. I also had to increase the integration-test VM resources to match this change.

else
minikube start --memory=4096 --cpus=4 --kubernetes-version=$k8s_version --vm-driver "virtualbox"
#label node for locality load balancing