10/27/22: Hoda Heidari #1
Comments
Dear Professor Heidari, thank you for sharing your work on hybrid human-ML decision-making systems. Two questions emerged when reading your paper. (1) Following Proposition 1, does human-ML complementarity imply that the optimal joint decision strictly and always outperforms both individual policies? Put differently, are there boundary conditions in the decision-making process where the synthesis of multiple agents (a) performs only as well as each of the decision-makers alone, or (b) leads to a worse decision? If such boundary conditions exist, is there a practical heuristic to test whether the joint decision will outperform the individual ones? (2) On a related note, how does the optimization-based framework handle divergent conclusions within a single agent, i.e., a dilemma arising during the internal processing of inputs and a subsequent multiplicity of inconsistent outputs? Maybe I am not fully grasping the complementarity analysis in Section 4. From my understanding, diverging conclusions across agents (i.e., disagreement in output) get resolved by weighting each output according to its agent. For the case where an agent is torn between multiple alternatives, and when conclusions are conditional, how are decisions ultimately made? As a simplistic example, imagine a binary decision between output …
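To make question (1) concrete, here is a toy construction of the boundary case I have in mind (my own sketch under squared-error loss, not the paper's setup): two unbiased predictors with independent errors are strictly improved by blending, but once their errors are perfectly correlated, no convex weighting does better than either agent alone.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=100_000)  # ground truth

def best_weight_mse(h, m, y):
    """Grid-search the convex combination w*h + (1-w)*m for minimal MSE."""
    ws = np.linspace(0, 1, 101)
    mses = [np.mean((w * h + (1 - w) * m - y) ** 2) for w in ws]
    i = int(np.argmin(mses))
    return ws[i], mses[i]

# (a) independent errors: the blend strictly beats both agents
h = y + rng.normal(scale=1.0, size=y.size)
m = y + rng.normal(scale=1.0, size=y.size)
print(best_weight_mse(h, m, y))  # w* near 0.5, MSE near 0.5 vs ~1.0 for each alone

# (b) boundary case: perfectly correlated errors, no strict gain
e = rng.normal(scale=1.0, size=y.size)
print(best_weight_mse(y + e, y + e, y))  # every w ties the individual policies
```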
Dear Professor Heidari, thank you for sharing your research on the human-ML complementarity framework. I found it especially interesting that the contribution of human versus ML is largely affected by the consistency of human decisions and the machine's access to features. However, I am curious whether a consistently larger weight on the machine over the human would eventually exclude humans from certain decision-making tasks. For example, with the fast advancement of image-classification techniques, machines may one day outperform humans (who may make more inconsistent decisions) and thus gradually obtain larger weights, implying a decreased contribution of human decisions. Do you foresee this as a possible future direction of human-ML complementarity in some areas? Also, with limited knowledge of the hybrid human-ML systems literature, I wonder how the human policies were specified and determined, given that internal processing models may differ a lot across human decision-makers. How did prior models account for the variance in the inferences and heuristics that humans would use?
Dear Professor Heidari
Professor Heidari, I found the discussion in the paper of some of the differences between human cognition and ML systems particularly interesting. Clearly, each has its advantages and disadvantages. While the framework focuses mainly on a joint policy obtained from a weighted average of the human and ML decisions, what are your thoughts on a joint policy formed from a composition of the two agents? In other words, a joint policy obtained by augmenting the ML model's feature space with the human decision-maker's estimates. My thinking is that such a policy would allow us to exploit the advantages of the human decision-maker (e.g., expertise, heuristics, qualitative data) along with the mathematical precision of the ML decision-maker through optimization (e.g., consistency, universality). A toy sketch of what I mean appears below.
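Purely to illustrate the composition I have in mind (my own toy setup, not the paper's framework): give the ML model the human's estimate as one more input feature and let the learner decide how much to rely on it, which helps here because the human perceives a signal absent from the model's features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000
X = rng.normal(size=(n, 4))            # features the ML model observes
latent = rng.normal(size=n)            # signal only the human perceives
y = ((X @ np.array([1.0, 0.5, 0.0, 0.0]) + latent) > 0).astype(int)

# Human estimate: a noisy read of the latent signal
human_est = (latent + rng.normal(scale=0.5, size=n) > 0).astype(float)

X_aug = np.column_stack([X, human_est])
tr, te = slice(0, n // 2), slice(n // 2, n)

ml_only = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
composed = LogisticRegression(max_iter=1000).fit(X_aug[tr], y[tr])

print("ML features only:   ", ml_only.score(X[te], y[te]))
print("ML + human estimate:", composed.score(X_aug[te], y[te]))
```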
Thank you for sharing your time and research with us, Prof. Heidari. In the paper, you considered instances where an independent third party determines a joint decision by combining the human and ML predictions. You noted some domains where this is feasible because both the human and ML decisions are credible. I was wondering whether there are domains in which this framework ought never to be applied? I'm sure in many domains the application would be judged reasonable on a case-by-case basis, but I wasn't sure whether in some domains this sort of combination would be completely infeasible? Thanks!
Professor Heidari, Thank you for sharing your work! I have a question regarding the inconsistency between humans and ML systems. My takeaway from the paper is that the proposed framework uses this inconsistency to enhance the performance of human-AI ensembles. However, many AI systems are aimed at behaving like humans. I am wondering whether it is possible to use the same framework to guide the design of such systems if we change the objective function to consistency between humans and models? If not, what kinds of challenges would you expect?
Dear Dr. Heidari, Thank you for coming to present this paper. Though ML models have been showing increased accuracy in decision making in more and more complex situations, I can imagine that there is a general distrust from the public in their abilities, especially when the stakes for decisions are high (e.g., medical diagnoses or bail decisions). Without a doubt, human-ML complementarity should help to alleviate these worries, but probably not take them away completely. In your experience, have you seen public distrust act as a hurdle to the development of this field?
Hi Dr. Heidari, Thank you for sharing your research with us! In your research, you said you narrowed down …
Thank you Dr. Heidari for sharing your work with us. In the paper, you mentioned that the application domains for this model can range from crowdsourced image classification to clinical radiology. While the framework can certainly be applied to these contexts, would it make a difference that one is based on mass collaboration, while the other is based on small-data, expert deliberation?
Hi Professor Heidari, You focus only on situations where the relevant decision is a prediction, and I should add that you and your co-authors are very clear about this and the limitations it implies. So what would this framework look like for non-prediction tasks, e.g., recommendation, or for even-less-straightforward types of decisions? Since this is a known limitation, are you or any of your colleagues working on that, and if so, can you point us to some related work? Thanks!
Hi Prof. Heidari, One difference between humans and ML in predictive tasks that you noted: humans have rich experience amassed over the years across many different domains, while ML systems are often trained on a large number of observations for a specific task. Are there any developments in training ML systems across multiple domains (emulating the human experience)? If not, is there any value in doing so?
Dear Dr. Heidari, thank you for sharing this paper about the hybrid human-ML complementarity model. I found the work enlightening in its attempt to build a higher-level, unifying framework that could make current (and future) models more comparable and increase our understanding of the area overall (since existing models already follow the proposed framework, suggesting that a certain logic had emerged behind human-ML model design, albeit unspoken). I am curious about the implementation of this framework in an applicable model. While the taxonomy and aggregation mechanisms make sense and are ideal, it is not easy to obtain data for them. In particular, evaluating the strengths and weaknesses of human and ML decision-making so as to exploit the strengths of both is extremely smart but also hard to measure. I wonder whether there are standardized methods for measuring these strengths and weaknesses, deciding the cut-off points, and evaluating the effectiveness/accuracy of these measurements?
Prof. Heidari, thanks for sharing your work! Will ML decision-making complement human decision-making in situations where the decision is associated with high risk and may cause severe outcomes?
Hi Professor Heidari, thanks so much for sharing this amazing paper. My question is about the application and actual use cases of hybrid human-ML decision-making systems. In which industries and for which kinds of tasks do you think hybrid human-ML decision-making systems will be commonly used in the future? How can we promote their usage in real life?
Prof. Heidari, I thought this was a very interesting paper. Last year we saw a talk from Prof. Jack Soll about the wisdom of expert crowds. I was wondering if you had thought about multi-human and multi-ML decision-making systems? For example, multi-human expert systems tend to outperform many novices and singular experts. Is there a way to merge the wisdom of the crowds with ML, and could there be some advantage to combining different ML systems with different assumptions together with multiple humans? It could get complicated fast (a naive sketch of what I mean follows below), but I am curious.
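As a naive illustration of the kind of pooling I mean (my own toy example, not anything from the paper): several humans and several models could be pooled with a vote weighted by each agent's historical accuracy.

```python
import numpy as np

def weighted_vote(labels, weights):
    """Return 1 if the weighted mass on label 1 reaches half the total weight."""
    return int(np.dot(labels, weights) >= 0.5 * np.sum(weights))

human_labels = np.array([1, 1, 0])               # three human experts
model_labels = np.array([1, 0])                  # two ML systems
labels = np.concatenate([human_labels, model_labels])
weights = np.array([0.8, 0.7, 0.6, 0.9, 0.85])   # e.g., validation accuracies

print(weighted_vote(labels, weights))            # -> 1
```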
Hi Dr Heidari, thanks in advance for sharing! The taxonomy provides a very conducive framework for evaluating the sources of possible human-AI complementarity. I am wondering if you could provide other concrete examples of complementarity analysis that weigh multiple sources of complementarity, and how, in these cases, the within-instance vs. across-instance complementarity may be explained. Specifically, I am curious about how we can rigorously analyse cases where complementarity is driven by the different internal processing of humans and AI, as that seems a lot more difficult to quantify than the consistency example provided in the article.
Thank you Professor Heidari for sharing your ideas with us.
Hi Professor Heidari,
Hi Professor Heidari, thank you for sharing your work with us! The section on human vs. ML strengths and weaknesses in predictive decision-making is interesting, since it lays out the trade-offs between humans and machine-learning techniques. Since in this paper you deliberately narrowed a broader goal to “combining predictive decisions in static environments”, I wonder what predictive decisions will look like in non-static environments? Thank you!
Hello Dr. Heidari! Thank you so much for sharing your research with us. In your paper you mention a number of advantages of human decision-making over that of machine learning and vice versa. In your professional opinion, what is the most important advantage that human decision-makers bring to the table? On the flip side, what is the single most important advantage that machine-learning algorithms have over human decision-making?
Hi Prof., thanks for sharing your work. I was wondering if your formulation could take into account human (ideological) biases as well. If an individual human's policy process is biased, but the observed features are ‘relevant,’ we can get a joint policy that is less optimal than the weighted average of the two would suggest. Your paper does talk about different input processing and perceptions for humans and machines, but that only seems to point toward behavioral biases. I figured this might be addressed under consistency, but that concerns random disturbances rather than a more permanent shift in one's behavior; a toy version of the distinction follows below.
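Here is my own toy construction of the worry (not from the paper): a human with a systematic offset rather than random inconsistency. Under squared-error loss, naive weights that ignore the bias can make the joint policy worse than the ML alone, even though the human is far more consistent, while an explicit bias correction recovers the gain.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(size=100_000)
human = y + 1.0 + rng.normal(scale=0.2, size=y.size)  # biased but consistent
ml = y + rng.normal(scale=0.5, size=y.size)           # unbiased but noisier

for w in [0.0, 0.3, 0.5]:
    joint = w * human + (1 - w) * ml
    print(f"w={w}: MSE={np.mean((joint - y) ** 2):.3f}")
# w=0.0 -> ~0.250 (ML alone); w=0.3 -> ~0.216; w=0.5 -> ~0.323 (worse than ML)

debiased = 0.5 * (human - 1.0) + 0.5 * ml             # subtract the known offset
print(f"debiased 50/50: MSE={np.mean((debiased - y) ** 2):.3f}")  # ~0.073
```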
Hi Prof. Heidari, thanks for sharing such interesting work! I am wondering how to understand the power of human decisions here. Machine learning itself is a combination of human and computer, in the sense that it is based on human behavior. So for the combination of human and ML decision-making you describe in the paper, is it necessary to distinguish the additional human who complements the ML from the human judgment already embedded in the ML algorithms? And I am curious whether there is any overlap?
Hi Professor! Thank you so much for sharing such an instructive and interesting paper with us. I'm wondering what the practical fields are for applying this human-AI combined analysis system? I'm extremely interested in the uses of this model in finance fields such as quant investing.
Dear Dr. Heidari, thank you for sharing your work with us! Your research sounds really interesting, and I wonder how you would like to bring those contributions into practice. As we all know, machine learning now plays a significant role in most fields. It seems inevitable that one day people will need to find the break-even point to balance and maintain the relationship between human decisions and ML predictive decisions. Would you mind interpreting or expanding a bit more on how human and ML predictive decisions should be aggregated optimally in your research? Also, what would be the next goal in your research exploration?
Hi Prof. Heidari, thank you very much for presenting your work to us! Could you please elaborate on how the internal processing procedure is conducted?
Hi Professor Heidari, thank you for sharing your amazing work with us! It is definitely a very interesting and insightful paper. I'm just curious about the performance of hybrid human-ML decision-making in non-static environments, which need this type of technology the most. What are your thoughts? Thank you!
Hi Prof. Heidari, thank you so much for the talk. Do you think the result would generalize to inference-type problems where humans might run into computational limits and ML will not have sufficient inductive biases?
Hi Prof. Heidari, This is super interesting work and we are glad to have you at our workshop! I am wondering what the possible applications of this unifying framework for combining human and ML are - could you give some examples to show us how it works?
Hi Prof. Heidari. Thank you for bringing us such interesting work. The way you define the aggregation mechanisms for complementarity is very insightful. I was just wondering how we can combine a powerful algorithm with laypeople (i.e., noisy decision-makers)? And how does that noise affect the aggregation mechanisms and the performance of the final prediction?
Hi Professor Heidari, Thanks for sharing your paper with us. Regarding the pros and cons of the hybrid of human decision-making and machine learning, I was wondering whether there is any real-world application that we could envision in our daily life or in industry. How could society benefit from this hybrid setting? Thank you!
Hi Prof Heidari,
Professor Heidari,
Dear Prof Heidari,
Hi Dr. Heidari, as this is a pure framework/theory paper, I wonder if there is a framework for empirically testing the performance of these theoretical paradigms. In addition, is there a way to standardize the way of 'updating' the framework so that the framework can be generalized to more paradigms? Thank you!
Hello, Professor Heidari, thanks for sharing this fulfilling paper! It is very interesting and inspirational to integrate the comparative advantages of human behavior into AI/ML. What I am curious about is: since human behaviors can be inherently implicit and obscure, how do we represent (or label) the data in an intact way, or could we bypass this by opting for some self-learning technique? Thanks again!
Hi Professor Heidari, thanks for sharing this exciting research with us! It is both insightful and inspiring! I am wondering whether, given the combination of human decisions and machine learning, this mechanism will be able to make decisions on non-data-driven problems.
Hello, Professor Heidari. Thanks for sharing this exciting research with us! In reviewing your paper, I am wondering, for the "within-instance complementarity" model in which humans and machines jointly decide on an outcome, how we determine the weight associated with the decision by the human and by the machine. Will they contribute equally, or if not, what factors do we account for when computing this weight? (A back-of-envelope guess of mine follows below.) Thanks so much. Looking forward to your talk tomorrow!
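For what it's worth, my own back-of-envelope guess (not the paper's derivation): if both predictors were unbiased with independent errors under squared loss, the MSE-optimal weight on the human would be its inverse error variance relative to the ML's, estimable from validation data.

```python
import numpy as np

def optimal_human_weight(y, h, m):
    """w_H = var_M / (var_H + var_M): weight each agent by inverse error variance.

    Assumes unbiased predictions with independent errors and squared loss;
    y, h, m are validation-set targets, human predictions, ML predictions.
    """
    var_h = np.var(h - y)
    var_m = np.var(m - y)
    return var_m / (var_h + var_m)
```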
Hi Professor Heidari, Thanks for sharing your research with us. You emphasized in your paper that "combining the complementary strengths of humans and ML leads to higher quality decisions than those produced by each of them individually". I was wondering how to determine responsibility if the decision is biased, and how people's reactions would change after they find the ML making mistakes. Thank you!
Hi Prof Heidari, My question is about the scope of the paper. As the settings studied do not include settings "where a human decision-maker makes the final call or to cases where predictions do not translate to decision in a straightforward manner," what do you think is the way forward in broadening the scope in future works?
Hi Professor Heidari! Thank you for sharing your research with us! The article mentions testing hypotheses about the optimal aggregation schemes in practical settings. Would you care to explain further what kinds of specific hypotheses it could generate and potential methods to test them?
Hi Professor Heidari, Thanks for sharing this exciting research with us! Your paper is really intriguing. My question is: in the process of machine-learning decision-making, do human biases play any role or have any impact? Looking forward to watching your presentation tomorrow.
Hello Professor Heidari, Thank you for sharing your research, and I look forward to your presentation tomorrow. Your research topic is really intriguing. My question relates to algorithmic biases and human biases. From previous presentations in the workshop, we've learned about multiple efforts to address bias in implementing artificial intelligence. In your modeling that integrates human and machine predictive decision-making, do you expect fewer biases compared to machine-only or human-only decisions?
Hi Professor Heidari, Thank you for sharing this excellent research. It just occurred to me that putting a lot of weight on machine-learning results for decision-making could easily run into statistical discrimination. So could justice also be a reason for the divergence between humans and AI?
Hi Professor Heidari! Thanks for sharing this exciting research with us! Hybrid human-ML teams are increasingly in charge of consequential decisions in various domains. I would like to hear more about the mechanism by which you combine human and ML judgments. Looking forward to seeing you tomorrow.
Hi Professor Heidari, Thanks for sharing your research with us. I have a question as follows:
Hello Professor Heidari,
Hi Professor Heidari, Thanks for sharing your work on human-ML complementary decision-making. It's really interesting that you brought up the idea that hybrid human-ML decision-making can outperform decisions made individually. If that's the case, I'm wondering how you would validate the result of a decision made by this hybrid system if the outcome is beyond the scope of either the human or the ML individually. Given the example of using a patient's medical record and a human's perception of that record to predict the prescribed treatment, how could we validate this treatment if it is beyond the scope of our knowledge, and who is going to take responsibility for implementing a wrong treatment? Thanks.
Dear Professor Heidari, Thanks for sharing this interesting research with us. It's inspiring to think about combining the complementary strengths of humans and ML and establishing a solid framework based on this idea. My question is: what can be an efficient and reasonable measurement for determining the "quality" of hybrid work? Also, will this combination lead to any moral hazard?
Hi Professor Heidari, Thanks for sharing your work. The mathematical part of the paper is convincing, and I take the idea to be not just that human decisions serve as a supplement, but that humans and ML work together on the prediction problem. One thing, though: human decisions might be biased, and there can be huge heterogeneity across human decision-makers (people make decisions at their own discretion). Could you let us know how to deal with robustness issues related to the complementarity framework? (To me, this paper is focused on showing that the framework is theoretically plausible, and I am really interested in how we can solve the robustness issue.) I also noticed a paper about Bayesian modeling of human-AI complementarity in the references (Steyvers et al.'s PNAS paper). Could you tell us more about how Bayesian methods can be used in this context? Again, thank you so much for sharing this with us.
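For concreteness, here is one naive Bayesian-style combination I could imagine (my own illustration of the general idea, not necessarily Steyvers et al.'s model): treat the human's and the model's calibrated probabilities as conditionally independent evidence and add them in log-odds space.

```python
import numpy as np

def combine_log_odds(p_human, p_ml, prior=0.5):
    """Product-of-experts / naive-Bayes style pooling of two calibrated
    probabilities, assuming conditionally independent evidence."""
    logit = lambda p: np.log(p / (1 - p))
    z = logit(p_human) + logit(p_ml) - logit(prior)
    return 1.0 / (1.0 + np.exp(-z))

print(combine_log_odds(0.7, 0.8))  # ~0.90: agreement sharpens the posterior
print(combine_log_odds(0.7, 0.3))  # ~0.50: disagreement cancels out
```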
Thank you Professor Heidari, for introducing us to such a novel paper and walking us through the creative design decision process. A couple of questions arose while reading your work -
|
Hi Prof. Heidari, Thank you for sharing your research with us. The recommendations and evidence put forth in your framework are promising for improving ML applications. As a common concern in conversations about both ML and human decision-making processes, I'm curious about your perspective on the underlying problem of biased data that may distort both human and algorithmic calculations. Are there any promising detection methods that can help differentiate unwanted social bias in the data being used? Thank you, and I'm looking forward to your presentation.
Hello Prof. Heidari, Based on my limited exposure to Human-ML interaction literature, I was wondering how the research models individual human decision makers' internal processing in a generalisable manner, as it is bound to be affected by distinct inferences, heuristics, and biases. This question further extends to the consideration of sequential decision-making processes in modelling, and how this may change the way research considers models. I look forward to your insights on these points. Thank you.
Hi Professor Heidari,
Hi Professor Heidari, I gained a lot from reading your paper and was impressed by the rigor of the setup. Given that you have shown complementarity always exists, I wonder: is the optimal joint policy always unique? Does uniqueness matter? For the future application of this framework, do you expect one unique optimal joint policy to be reached to assist decision-making, or do you expect developments in machine learning to shrink w_H* and eventually achieve w_H* = 0? Thank you!
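The intuition behind my w_H* question, in toy form (my own assumption of unbiased, independent errors under squared loss, not the paper's result): the optimal weight is then unique and slides toward 0 as the ML error variance vanishes.

```python
# w_H* = var_M / (var_H + var_M): unique minimizer of a strictly convex MSE
var_h = 1.0
for var_m in [1.0, 0.1, 0.01, 0.0]:
    print(f"var_M={var_m}: w_H* = {var_m / (var_h + var_m):.3f}")
# 0.500, 0.091, 0.010, 0.000: a perfect ML drives the human weight to zero
```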
Hi Professor Heidari, Combining human expertise and ML predictive power is definitely one of the hottest topics in HCI research nowadays. Your research reminds me of empirical research in radiology done by Professor Nikhil Agarwal and his coauthors from MIT. One of his intriguing findings is that AI prediction can be most problematic when the AI is uncertain about the decision, e.g., predicting the probability of A as 0.45 and of B as 0.55, and that AI + human can be most beneficial in exactly this kind of scenario, in the sense of increasing accuracy. Therefore, I wonder whether your framework can capture and explain this kind of unique advantage of the hybrid mode when the AI is uncertain of its prediction. Best,
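(To spell out the pattern I mean with a minimal sketch of my own, not a claim about how Agarwal et al. or this paper implement it: the hybrid defers to the human exactly when the model's probability falls in an uncertainty band around 0.5.)

```python
def hybrid_decision(p_ml: float, human_label: int, band: float = 0.1) -> int:
    """Use the ML label unless its probability is within `band` of 0.5."""
    if abs(p_ml - 0.5) < band:
        return human_label        # e.g., p_ml = 0.45 or 0.55 -> defer to human
    return int(p_ml >= 0.5)

print(hybrid_decision(0.55, human_label=0))  # 0: model too uncertain, human wins
print(hybrid_decision(0.95, human_label=0))  # 1: confident model overrides
```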
Hi Professor Heidari, Thank you for sharing your great work with us. Within-subject inconsistency in decision-making is a time- and context-dependent phenomenon and can be quite difficult to model for various reasons, including missing data and unobserved/unobservable confounding factors. My question is: how would you propose to tackle this challenge of measuring human covariates of a complex (oftentimes unmeasurable) nature, and their interactions, under your formalization, so that our comparisons of ML and human decisions are not themselves biased by measurement limitations? And how fundamental do you believe such a challenge is to the problem space proposed in the paper? Warm regards,
Dear Professor, Thanks for sharing your findings with us. Reviewing hybrid human-ML decision-making systems, I feel there is a fundamental difference between what humans and computers are good at. Humans are conscious and good at making plans and decisions in complex scenarios, but they are not good at processing large amounts of data, while computers are good at efficient data processing but can't make basic judgments as easily as humans can. This significant difference also means that the benefits people get from working deeply with computers are much higher than the benefits they get from cooperative transactions with other people, so computers are good aids for humans, not competitors. In addition, I have some related questions. Will a field like deep learning see the convergence of statistics and machine learning? In other words, will statisticians apply computer-intensive deep-learning model paradigms? Will statistical tools be used by machine-learning researchers to advance the field?
Hi Professor, Thanks for sharing this interesting idea with us. I would like to learn more about the empirical applications of this system. What do you think of its use in the manufacturing and financial industries? Thank you.
Greetings, Professor Heidari |
Comment below with a well-developed question or comment about the reading for this week's workshop.
If you would really like to ask your question in person, please place two exclamation points before your question to signal that you really want to ask it.
Please post your question by Tuesday 11:59 PM. We will also ask you all to upvote questions that you think were particularly good. There may be prizes for top question askers.