Talent Hiring and Performance Evaluation Libraries #13

Open
moul opened this issue Jun 20, 2023 · 2 comments

moul commented Jun 20, 2023

Goal

The goal of this project is to develop a series of libraries that enable efficient management of talent hiring and performance evaluation based on code. By leveraging code, we aim to achieve greater transparency, collaboration, and alignment in the evaluation process. Additionally, we intend to facilitate integration with on-chain project management systems, such as the Evaluation DAO (gnolang/gno#407) and Gnodes (gnolang/gno#382), to promote public visibility, accountability, and alignment with OKRs. The libraries will also be designed to seamlessly integrate with an upcoming technical test platform, where individuals can anonymously complete Gno-related puzzles and enter the hiring pipeline upon successful completion (gnoverse/acs#1, private).

Inspired by the principles of unit tests and regression tests in code, this project aims to create a continuously improving talent hiring and performance evaluation system. By leveraging code and community collaboration, the libraries enhance transparency, collaboration, and alignment in talent acquisition and performance management. The project embraces a long-term approach, ensuring the system evolves over time to meet changing needs and drive continuous improvement.

Objectives

  1. Collaboration: Enable collaborative evaluation and improvement of the hiring pipeline, performance evaluation criteria, and Gno-related puzzles. This includes allowing individuals to submit pull requests (PRs) to update their scorecards, OKRs, and puzzle content, as well as review PRs from others. Moreover, fostering collaboration will involve enhancing the set of questions asked during interviews and creating engaging puzzles through collective input.

  2. Composability: Design the libraries with composability in mind, ensuring that different engineering roles (e.g., devrel, code developer, tinkerer) can leverage the system. While each job may have specific requirements and onboarding processes, there should be a core set of shared requirements, OKRs, and onboarding materials for all engineering roles; a rough sketch follows this list.

  3. Transparency: Develop a framework that promotes transparency in talent hiring and performance evaluation. The project aims to open-source the code, making it publicly accessible on GitHub. This will enable interested individuals to review and contribute to the process. During the off-chain phase, the repository can remain private, and certain parts, such as the hiring questions, can be kept private even when transitioning to an on-chain environment.
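
As a rough illustration of the composability objective above, here is a minimal sketch in pure Go of how role profiles could share a core set of requirements and OKRs while layering role-specific additions. All type names, fields, and example requirements are assumptions for discussion, not a settled API:

package hiring

// Requirement is a single evaluable requirement or OKR entry.
type Requirement struct {
	Name        string
	Description string
}

// Profile describes what is expected of a given engineering role.
type Profile struct {
	Role         string
	Requirements []Requirement
}

// CoreRequirements is the shared baseline applied to every engineering role.
var CoreRequirements = []Requirement{
	{Name: "gno-basics", Description: "Can read and write simple Gno contracts"},
	{Name: "collaboration", Description: "Submits and reviews PRs against scorecards and puzzles"},
}

// NewProfile composes the shared core with role-specific requirements.
func NewProfile(role string, extra ...Requirement) Profile {
	reqs := append([]Requirement{}, CoreRequirements...)
	reqs = append(reqs, extra...)
	return Profile{Role: role, Requirements: reqs}
}

A devrel profile, for example, could then be built as NewProfile("devrel", Requirement{Name: "community", Description: "Runs workshops and office hours"}), while a code-developer profile reuses the same core.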

Implementation

The initial phase will involve developing the libraries in pure Go. This will facilitate off-chain, private implementation for testing and refinement. The libraries will focus on human-centric aspects, ensuring alignment with the talent hiring and performance evaluation requirements, including OKRs and the integration with the technical test platform. Later stages may include integration with other systems, emphasizing the collaboration and transparency aspects discussed above.
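
To make the human-centric data concrete during the pure-Go phase, here is a minimal sketch of how OKRs might be modeled; the type names and the simple averaging scheme are assumptions, not decisions:

package okr

// KeyResult is a measurable outcome attached to an objective.
type KeyResult struct {
	Description string
	Target      float64
	Current     float64
}

// Objective groups key results under a single goal for a review period.
type Objective struct {
	Title      string
	Period     string // e.g. "2023-Q3"
	KeyResults []KeyResult
}

// Progress returns the average completion ratio across key results,
// clamping each ratio to 1 and returning 0 when there are no key results.
func (o Objective) Progress() float64 {
	if len(o.KeyResults) == 0 {
		return 0
	}
	total := 0.0
	for _, kr := range o.KeyResults {
		if kr.Target == 0 {
			continue
		}
		ratio := kr.Current / kr.Target
		if ratio > 1 {
			ratio = 1
		}
		total += ratio
	}
	return total / float64(len(o.KeyResults))
}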

Deliverables

  1. Open-source libraries in Go for talent hiring and performance evaluation.
  2. Detailed documentation, guidelines, and sample code.
  3. Integration recommendations for on-chain project management and the technical test platform.
  4. Continuous improvement through community contributions.

Conclusion

The talent hiring and performance evaluation libraries aim to enhance alignment, transparency, collaboration, and composability. By integrating with on-chain project management and the technical test platform, the project aims to help organizations beyond AiB make data-driven decisions in talent acquisition and performance management.

moul added the 💡 idea label Jun 20, 2023

moul commented Jun 20, 2023

FYI, here's how ChatGPT addresses this:

package main

import (
	"fmt"
)

// Define the Scorecard struct to represent the evaluation criteria
type Scorecard struct {
	Goals           []string
	Attributes      []string
	QuestionOptions map[string][]string
}

// Define the Candidate struct to represent a candidate
type Candidate struct {
	Name         string
	Experience   int
	Skills       []string
	Contribution string
}

// Define the Evaluation struct to store individual evaluation results
type Evaluation struct {
	EvaluatorName string
	Score         float64
}

// Define the VotingResult struct to store the voting outcome
type VotingResult struct {
	Score     float64
	VoteCount int
}

// EvaluateCandidate aggregates individual evaluations into a voting result.
// The candidate and scorecard parameters are accepted for future use but are
// not yet consulted; only the evaluations determine the outcome.
func EvaluateCandidate(candidate Candidate, scorecard Scorecard, evaluations []Evaluation) VotingResult {
	totalScore := 0.0
	voteCount := 0

	// Sum each evaluator's score and count the votes
	for _, evaluation := range evaluations {
		totalScore += evaluation.Score
		voteCount++
	}

	// Guard against an empty evaluation list to avoid dividing by zero
	if voteCount == 0 {
		return VotingResult{}
	}

	// The final score is the average of the individual scores
	finalScore := totalScore / float64(voteCount)

	return VotingResult{
		Score:     finalScore,
		VoteCount: voteCount,
	}
}

func main() {
	// Create a sample scorecard
	scorecard := Scorecard{
		Goals:      []string{"Goal 1", "Goal 2", "Goal 3"},
		Attributes: []string{"Attribute 1", "Attribute 2", "Attribute 3"},
		QuestionOptions: map[string][]string{
			"Question 1": {"Option 1", "Option 2", "Option 3"},
			"Question 2": {"Option A", "Option B", "Option C"},
			"Question 3": {"Option X", "Option Y", "Option Z"},
		},
	}

	// Create a sample candidate
	candidate := Candidate{
		Name:         "John Doe",
		Experience:   8,
		Skills:       []string{"Skill 1", "Skill 2", "Skill 3"},
		Contribution: "Goal 2",
	}

	// Create sample evaluations
	evaluations := []Evaluation{
		{EvaluatorName: "Evaluator 1", Score: 4.2},
		{EvaluatorName: "Evaluator 2", Score: 3.8},
		{EvaluatorName: "Evaluator 3", Score: 4.5},
	}

	// Evaluate the candidate
	votingResult := EvaluateCandidate(candidate, scorecard, evaluations)

	// Print the voting result
	fmt.Printf("Candidate: %s\nFinal Score: %.2f\nVotes: %d\n", candidate.Name, votingResult.Score, votingResult.VoteCount)
}

moul commented Aug 1, 2023

moul moved this to 🚫 Not Needed for Launch in 🚀 The Launch [DEPRECATED] Sep 5, 2023