
10/27/22: Hoda Heidari #1

Open
GabeNicholson opened this issue Oct 22, 2022 · 82 comments

Comments

@GabeNicholson
Contributor

Comment below with a well-developed question or comment about the reading for this week's workshop.

If you would really like to ask your question in person, please place two exclamation points before your question to signal that you really want to ask it.

Please post your question by Tuesday 11:59 PM. We will also ask you all to upvote questions that you think were particularly good. There may be prizes for top question askers.

@bhavyapan bhavyapan changed the title 10/27: Hoda Heidari 10/27/22: Hoda Heidari Oct 22, 2022
@sdbaier

sdbaier commented Oct 24, 2022

Dear Professor Heidari, thank you for sharing your work on hybrid human-ML decision-making systems. Two questions emerged when reading your paper.

(1) Following Proposition 1, does human-ML complementarity imply that the optimal joint decision strictly and always outperforms both individual policies?

Put differently, are there boundary conditions in the decision-making process where the synthesis of multiple agents (1) performs only as well as each decision-maker alone, or (2) leads to a worse decision? If such boundary conditions exist, is there a practical decision heuristic to test whether the joint decision will outperform the individual ones?

(2) On a related note, how does the optimization-based framework handle divergent conclusions within a single agent, i.e., a dilemma in the internal processing of inputs and a subsequent multiplicity of inconsistent outputs?

Maybe I am not fully grasping the complementarity analysis in Section 4. From my understanding, diverging conclusions across agents (i.e., disagreement in output) are resolved by weighting the different outputs according to their agent. For the case where an agent is torn between multiple alternatives, or where conclusions are conditional, how are decisions ultimately made? As a simplistic example, imagine a binary decision between outputs 0 and 1. If H decides 0 and M decides 1, the final output of the model depends on the relative weighting of H and M. If H is torn between the two outcomes (i.e., 0 if condition C is met, 1 if C is not met), and/or M is unable to produce a single output, what would the assessment of outputs and the final decision look like?

@fiofiofiona

Dear Professor Heidari, thank you for sharing your research on the human-ML complementarity framework. I found it especially interesting that the contributions of the human versus the ML are largely affected by the consistency of human decisions and the machine's access to features.

However, I am curious whether a consistently larger weight on the machine over the human would eventually exclude humans from certain decision-making tasks. For example, with the rapid improvement of image classification techniques, machines may one day outperform humans (who may make more inconsistent decisions) and thus gradually obtain larger weights, implying a decreased contribution of human decisions. Do you foresee this as a possible future direction of human-ML complementarity in some areas?

Also, with limited knowledge of the hybrid human-ML systems literature, I wonder how the human policies were specified and determined, given that internal processing models may differ a lot across human decision-makers. How did prior models account for the variance in the inferences and heuristics that humans would use?

@taizeyu

taizeyu commented Oct 25, 2022

Dear Professor Heidari,
I would like to ask whether the high-quality decision-making promised by human-ML complementarity can be applied in the real world, or whether it can only be achieved at a theoretical level. If it can be applied, where?

@adamvvu

adamvvu commented Oct 25, 2022

Professor Heidari,

I found the discussion in the paper of some of the differences between human cognition and ML systems particularly interesting. Clearly, each has its advantages and disadvantages.

While the framework focuses mainly on a joint policy obtained from a weighted average of the human and ML decisions, what are your thoughts on a joint policy formed from a composition of the two agents? i.e.

$$ \pi(\mathbf{x}) = \pi_M(s_M(\mathbf{x}, \pi_H(s_H(\mathbf{x})))) $$

In other words, a joint policy obtained by augmenting the ML model's feature space with the human decision maker's estimates. My thinking is that such a policy would allow us to exploit the advantages of the human decision-maker (e.g. expertise, heuristics, qualitative data) along with the mathematical precision of the ML decision-maker through optimization (e.g. consistency, universality).
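To make this concrete, here is a minimal sketch of such a stacked policy (my own illustration, not from the paper; the toy data and names are hypothetical), using scikit-learn-style models:

```python
# Hypothetical sketch of a "stacked" joint policy: the ML model's feature
# space is augmented with the human decision-maker's estimates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: X is what the machine observes; the human's probability
# estimates may encode side information (expertise, qualitative data).
n = 1000
X = rng.normal(size=(n, 5))
side_info = rng.normal(size=n)                 # seen by the human only
y = (X[:, 0] + side_info > 0).astype(int)
human_preds = 1 / (1 + np.exp(-(X[:, 0] + side_info)))  # stand-in for pi_H(s_H(x))

# pi_M is trained on [x, pi_H(s_H(x))] rather than on x alone.
X_aug = np.column_stack([X, human_preds])
joint_policy = LogisticRegression().fit(X_aug, y)

# At decision time, the machine consumes both the raw features and the
# human's judgment for the same instance.
print(joint_policy.predict_proba(X_aug[:3]))
```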

@borlasekn

Thank you for sharing your time and research with us, Prof. Heidari. In the paper, you considered instances where an independent third party would determine a joint decision by combining the human and ML predictions. You noted some domains where this is feasible because both the human and ML decisions are credible. I was wondering if there are domains in which this framework ought never to be applied? I'm sure in many domains the application would be judged reasonable on a case-by-case basis, but I wasn't sure whether this sort of combination would be completely infeasible in some domains. Thanks!

@Hongkai040

Professor Heidari,

Thank you for sharing your work!

I have a question regarding the inconsistency between humans and ML systems. My takeaway from the paper is that the proposed framework uses this inconsistency to enhance the performance of human-AI ensembles. However, many AI systems aim to behave like humans. I am wondering whether it is possible to use the same framework to guide the design of such systems if we change the objective function to consistency between humans and models? If not, what kinds of challenges would you expect?

@secorey

secorey commented Oct 26, 2022

Dear Dr. Heidari,

Thank you for coming to present this paper. Though ML models have been showing increased accuracy in decision making in more and more complex situations, I can imagine that there is a general distrust from the public in their abilities, especially when the stakes for decisions are high (e.g., medical diagnoses or bail decisions). Without a doubt, human-ML complementarity should help to alleviate these worries, but probably not take them away completely. In your experience, have you seen public distrust act as a hurdle to the development of this field?

@Ry-Wu

Ry-Wu commented Oct 26, 2022

Hi Dr. Heidari,

Thank you for sharing your research with us! In your research, you said you narrowed down the scope of inquiry to static environments. I'm wondering if this framework can be applied to more dynamic environments? If not, what else should be taken into consideration?

@hsinkengling

Thank you Dr. Heidari for sharing your work with us.

In the paper, you mentioned that the application domains for this model can range from crowdsourced image classification to clinical radiology. While the framework can certainly be applied to these contexts, would it make a difference that one is based on mass collaboration, while the other is based on small-data, expert deliberation?

@erweinstein

Hi Professor Heidari,

You focus only on situations where the relevant decision is a prediction, and I should add that you and your co-authors are very clear about this and the limitations it implies. So what would this framework look like for non-prediction tasks, e.g., recommendation, or for even-less-straightforward types of decisions? Since this is a known limitation, are you or any of your colleagues working on that, and if so, can you point us to some related work? Thanks!

@zihua-uc

Hi Prof. Heidari,

One difference between humans and ML in predictive tasks that you noted: humans have rich experience amassed over the years across many different domains, while ML systems are often trained on a large number of observations for a specific task. Are there any developments in training ML systems across multiple domains (emulating human experience)? If not, is there any value in doing so?

@Yuxin-Ji

Dear Dr. Heidari, thank you for sharing this paper about the hybrid human-ML complementarity model. I found the work enlightening in its attempt to build a higher-level, unifying framework that could make current (and future) models more comparable and increase our understanding of the area overall (existing models already follow the proposed framework, which suggests that a certain logic behind human-ML model design has already emerged, but has remained unspoken).

I am curious about the implementation of this framework in an applicable model. While the taxonomy and aggregation mechanisms make sense and are appealing, it is not easy to obtain data for them. In particular, evaluating the strengths and weaknesses of human and ML decision-making so as to exploit the strengths of both is very clever but also hard to operationalize. I wonder: are there standardized methods for measuring these strengths and weaknesses, deciding the cut-off points, and evaluating the effectiveness/accuracy of these measurements? (A naive sketch of what I mean by "measuring" is below.)
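To illustrate one naive way of quantifying such a weakness (entirely my own sketch, not from the paper): if the same person judges the same instances several times, the within-instance spread of their repeated judgments estimates their inconsistency, and the offset of their average judgment from ground truth estimates their bias.

```python
# Naive illustration (not from the paper): estimating a human
# decision-maker's inconsistency and bias from repeated judgments.
import numpy as np

rng = np.random.default_rng(1)
truth = rng.uniform(0, 1, size=20)                   # ground truth per instance
# Each instance is judged 5 times; judgment = truth + bias + noise.
judgments = truth[:, None] + 0.1 + rng.normal(0, 0.15, size=(20, 5))

inconsistency = judgments.std(axis=1, ddof=1).mean() # within-instance spread
bias = (judgments.mean(axis=1) - truth).mean()       # systematic offset
print(f"inconsistency ~ {inconsistency:.3f}, bias ~ {bias:.3f}")
```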

@linhui1020

Prof. Heidari, thanks for sharing your work! Will ML decision-making complement human decision-making in situations where the decision is associated with high risk and may cause severe outcomes?

@yujing-syj

Hi Professor Heidari, thanks so much for sharing this amazing paper. My question is about the application and actual use cases of hybrid human-ML decision-making systems. In which industries and for which kinds of tasks do you think hybrid human-ML decision-making systems will be commonly used in the future? How can we promote their usage in real life?

@bermanm
Contributor

bermanm commented Oct 26, 2022

Prof. Heidari, I thought this was a very interesting paper. Last year we saw a talk from Prof. Jack Soll about the wisdom of expert crowds. I was wondering if you had thought about multi-human and multi-ML decision-making systems? For example, multi-human expert systems tend to outperform many novices and singular experts. Is there a way to merge the wisdom of the crowds with ML, and could there be some advantage to combining different ML systems with different assumptions together with multiple humans? It could get complicated fast, but I am curious.

@iefis

iefis commented Oct 26, 2022

Hi Dr Heidari, thanks in advance for sharing! The taxonomy provides a very useful framework for evaluating the sources of possible human-AI complementarity. I am wondering if you could provide other concrete examples of complementarity analyses that weigh multiple sources of complementarity, and how, in these cases, the within-instance vs. across-instance complementarity may be explained. Specifically, I am curious about how we can rigorously analyse cases where complementarity is driven by the different internal processing of humans and AI, as that seems much more difficult to quantify than the consistency example provided in the article.

@yhchou0904

Thank you Professor Heidari for sharing your ideas with us.
The goal of collaboration between humans and machines must be to improve overall social welfare. By defining the pros and cons of human and machine decision-making, we can form an expectation of how complementary the decision-making process is. I am wondering if there are some intuitive guidelines for constructing a task or situation that maximizes the advantage of a hybrid human-ML system.

@jinyz1220

Hi Professor Heidari,
I am so grateful to you for presenting such enlightening work and I am looking forward to meeting you in person this Thursday! For the paper, I'm specifically interested in the section where you discuss the optimal aggregation mechanism, which is able to accommodate various sources of complementarity. The best-fitting models for decision-makers account for inconsistency in human decisions and target-label bias for machines. My concern is that, although theoretically including these two "errors" in the models generates better predictions, in practice it is very complicated to detect and quantify them, especially the inconsistency in human decisions. The prerequisite for measuring inconsistency in human decisions is a sufficient number of prior decisions a human has made for a specific case. In reality, however, there might not be enough prior data to detect and determine the inconsistency. How would you address this potential limitation in the practicality of the models?
Thank you!

@zhiyun0707

Hi Professor Heidari, thank you for sharing your work with us! The section on human vs. ML strengths and weaknesses in predictive decision-making is interesting, since it lays out the trade-offs between humans and machine-learning techniques. Since the paper deliberately narrows the broader goal to "combining predictive decisions in static environments," I wonder what predictive decisions would look like in non-static environments? Thank you!

@AlexBWilliamson

Hello Dr. Heidari! Thank you so much for sharing your research with us. In your paper you mention a number of advantages of human decision-making over that of machine learning, and vice versa. In your professional opinion, what is the most important advantage that human decision-makers bring to the table? On the flip side, what is the single most important advantage that machine-learning algorithms have over human decision-making?

@awaidyasin

Hi Prof., thanks for sharing your work. I was wondering if your formulation could take into account human (ideological) biases as well. If an individual human policy process is biased, but the observed features are ‘relevant,’ we can get a joint policy that is less optimal than the weighted average of the two. Your paper does talk about different input processing and perceptions for Humans and Machines, but that only seems to point toward behavioral biases. I figured that this might be addressed under consistency (but that concerns random disturbances rather than a more permanent shift in one's behavior).

@xin2006

xin2006 commented Oct 26, 2022

Hi Prof. Heidari, thanks for sharing such interesting work! I am wondering how to understand the power of the human decision here. Machine learning itself is a combination of human and computer, in the sense that it is based on human behavior. So, for the combination of human and ML in decision-making you mention in the paper, is it necessary to distinguish the additional human who complements the ML from the human judgment already embedded in the ML algorithms? And I am curious whether there is any overlap.

@cgyhumble0612

Hi Professor! Thank you so much for sharing such an instructive and interesting paper with us. I'm wondering which practical fields this human-AI combined analysis system could be applied in? I'm extremely interested in using this model in finance, for example in quant investment.

@sushanz

sushanz commented Oct 26, 2022

Dear Dr. Heidari, thank you for sharing your work with us! Your research sounds really interesting and I wonder how you would deliver these contributions in practice. As we all know, machine learning now plays a significant role in most fields. It seems inevitable that one day people will need to find the break-even point that balances human decisions and ML predictive decisions. Would you mind expanding a bit more on how human and ML predictive decisions should be aggregated optimally in your research? Also, what would be the next goal in your research exploration?

@ChongyuFang

Hi Prof. Heidari, thank you very much for presenting your work to us! Could you please elaborate on how the internal processing procedure is conducted?

@hazelchc

Hi Professor Heidari, thank you for sharing your amazing work with us! It is definitely a very interesting and insightful paper. I'm just curious about the performance of hybrid human-ML decision-making in non-static environments, which need this type of technology the most. What are your thoughts? Thank you!

@bowen-w-zheng

Hi Prof. Heidari, thank you so much for the talk. Do you think the result would generalize to inference-type problems where humans might run into computational limits and ML does not have sufficient inductive biases?

@yjhuang99

Hi Prof. Heidari, this is super interesting work and we are glad to have you at our workshop! I am wondering what the possible applications of this unifying framework for combining human and ML judgments are - could you give some examples to show us how it works?

@BaotongZh

Hi Prof. Heidari. Thank you for bringing us such interesting work. Your definition of the aggregation mechanisms for complementarity is very insightful. I was just wondering: how can we combine a powerful algorithm with laypeople (noise)? And how does that noise affect the aggregation mechanisms and the performance of the final prediction?

@zoeyjiao1104

Hi Professor Heidari,

Thanks for sharing your paper with us. I was wondering, in terms of the pros and cons of the hybrid of human decision-making and machine learning, whether there is any real-world application we could envision in our daily life or in industry. How could society benefit from this hybrid setting? Thank you!

@LuZhang0128

Hi Prof Heidari,
Thank you for sharing this interesting piece of work with us! You noted that humans can take advantage of complementary information that the models do not have access to, and thus we can make better decisions. I wonder if there are any more intuitive examples or reasoning behind this suggestion? Thank you!

@PAHADRIANUS

Professor Heidari,
Thank you for presenting your latest findings on making ML results compatible with human decisions. It is particularly striking for me to learn that human decisions, due to their inconsistency, are best complemented with ML even when the machine has limited or imperfect information. The difficulty in such scenarios, though, as you mention at the end, is convincing the human, who most certainly will make the final call, to incorporate the machine's outputs. Would it be possible, in your opinion, to create a medium in between, of either experienced human agents or specialized machine components, to translate the ML results into more commonly acceptable suggestions?

@y8script

Dear Prof Heidari,
Thanks for sharing this exciting research with us. The formalization and conception of this high-level decision process for optimally combining human and AI predictions are valuable steps toward a holistic view of the issue that may ultimately lead to real-world application. I wonder whether this framework may apply to a slightly more complex situation in which there are multiple heterogeneous human and/or ML predictors? People with different professional knowledge or personal beliefs already vary hugely in their decisions, and different ML models may act very differently depending on how they are trained. Is it possible to extend this framework to the optimization of multiple-human / multiple-ML settings or other more complex situations?

@hshi420

hshi420 commented Oct 27, 2022

Hi Dr. Heidari, as this is a pure framework/theory paper, I wonder if there is a framework for empirically testing the performance of these theoretical paradigms. In addition, is there a way to standardize the way of 'updating' the framework so that the framework can be generalized to more paradigms? Thank you!

@mintaow

mintaow commented Oct 27, 2022

Hello, Professor Heidari, thanks for sharing this fulfilling paper! It is very interesting and inspirational to integrate the comparative advantages of human action into AI/ML. What I am curious about is this: since human behaviors can be inherently implicit and obscure, how do we represent (or label) the data in a faithful way, or could we bypass this by opting for some self-learning technique? Thanks again!

@ChrisZhang6888

Hi Professor Heidari, thanks for sharing this exciting research with us! It is both insightful and inspiring! I am wondering whether, given the combination of human decisions and machine learning, this mechanism will be able to make decisions on non-data-driven problems.

@essicaJ

essicaJ commented Oct 27, 2022

Hello, Professor Heidari. Thanks for sharing this exciting research with us! In reviewing your paper, I am wondering: for the "within-instance complementarity" model in which humans and machines jointly decide on an outcome, how do we determine the weights associated with the human's and the machine's decisions? Will they contribute equally, and if not, what factors do we account for when computing these weights? Thanks so much. Looking forward to your talk tomorrow!
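For concreteness, here is one standard way such weights could be chosen, as a sketch only (my own illustration, not necessarily the paper's mechanism): fit them on held-out data by least squares over the two agents' predictions.

```python
# Sketch: learn combination weights w_H, w_M for a joint policy
# pi = w_H * pi_H + w_M * pi_M by least squares on validation data.
import numpy as np

def fit_weights(h_preds, m_preds, y):
    """Least-squares weights over [pi_H, pi_M], normalized to sum to 1."""
    A = np.column_stack([h_preds, m_preds])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    w = np.clip(w, 0, None)
    return w / w.sum()

# Toy validation data: the machine is more consistent, so it earns more weight.
rng = np.random.default_rng(2)
y = rng.uniform(size=500)
h = y + rng.normal(0, 0.30, size=500)       # noisier human predictions
m = y + rng.normal(0, 0.10, size=500)       # more consistent ML predictions
w_H, w_M = fit_weights(h, m, y)
print(f"w_H = {w_H:.2f}, w_M = {w_M:.2f}")  # expect w_M > w_H
```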

@bningdling

Hi Professor Heidari,

Thanks for sharing your research with us. You emphasize in your paper that "combining the complementary strengths of humans and ML leads to higher quality decisions than those produced by each of them individually". I was wondering how to determine responsibility if the decision is biased, and how people's reactions would change once they find the ML making mistakes. Thank you!

@ddlee19

ddlee19 commented Oct 27, 2022

Hi Prof Heidari,

My question is about the scope of the paper. As the settings studied do not include settings "where a human decision-maker makes the final call or to cases where predictions do not translate to decision in a straightforward manner," what do you think is the way forward in broadening the scope in future works?

@xiaowei-v

Hi Professor Heidari! Thank you for sharing your research with us! The article mentions testing hypotheses about the optimal aggregation schemes in practical settings. Would you care to explain further what kinds of specific hypotheses it could generate and the potential methods to test them?

@XTang685

Hi Professor Heidari,

Thanks for sharing this exciting research with us! Your paper is really intriguing. My question is: in the process of machine-learning decision-making, do human biases play any role or have any impact? Looking forward to your presentation tomorrow.

@Coco-Jiachen-Yu

Hello Professor Heidari,

Thank you for sharing your research; I look forward to your presentation tomorrow. Your research topic is really intriguing. My question relates to algorithmic biases and human biases. From previous presentations in the workshop, we've learned about multiple efforts to address bias in the implementation of artificial intelligence. In your modeling that integrates human and machine predictive decision-making, do you expect fewer biases compared to machine-only or human-only decisions?

@YijingZhang-98

Hi Professor Heidari,

Thank you for sharing this excellent research. It occurred to me that putting a lot of weight on machine-learning results in decision-making could easily run into statistical discrimination. So could justice also be a reason for the divergence between humans and AI?

@nijingwen

Hi Professor Heidari! Thanks for sharing this exciting research with us! Hybrid human-ML teams are increasingly in charge of consequential decisions in various domains. I would like to hear more about the mechanism by which you combine human and ML judgments. Looking forward to seeing you tomorrow.

@YLHan97

YLHan97 commented Oct 27, 2022

Hi Professor Heidari,

Thanks for sharing your research with us. I have a question as follows:
In the article "Toward a Unifying Framework for Combining Complementary Strengths of Humans and ML toward Better Predictive Decision-Making," you mention that hybrid human-ML teams are increasingly responsible for consequential decisions in various areas, and that empirical and theoretical work has contributed to our understanding of these systems. Since I'm really interested in machine learning applied in the real world but not so familiar with the hybrid human-ML industry, could you please provide more real-world examples from your area?

@Peihan12

Hello Professor Heidari,
Thank you for sharing your innovative work! Apart from the paradigm you point out in the paper, have you considered how different industries and professional contexts would affect the human-ML interactive decision-making process?

@yutaili

yutaili commented Oct 27, 2022

Hi Professor Heidari,

Thanks for sharing your work on human-ML complementary decision-making. It's really interesting that you brought up the idea that hybrid human-ML decision-making can outperform decisions made individually. If that's the case, I'm wondering how you would validate the result of a decision made by this hybrid system if the outcome is beyond the scope of either the human or the ML individually? Given the example of using a patient's medical record and the perception of that record to predict a prescribed treatment, how could we validate the treatment if it is beyond the scope of our knowledge, and who is going to take responsibility for implementing a wrong treatment? Thanks.

@shenyc16

Dear Professor Heidari,

Thanks for sharing this interesting research with us. It's inspiring to think about combining the complementary strengths of humans and ML and establishing a solid framework on this idea. My question is: what would be an efficient and reasonable measure for determining the "quality" of the hybrid's work? Also, could this combination lead to any moral hazard?

@YutaoHeOVO

Hi Professor Heidari,

Thanks for sharing your work. The mathematical part of the paper is convincing, and I take the idea to be that human decisions do not just serve as a supplement but work together with the ML toward the prediction problem. One concern, though, is that human decisions might be biased and there can be huge heterogeneity in human decision-making (people make decisions at their own discretion). Could you let us know how to deal with robustness issues related to the complementarity framework? (To me, this paper focuses on plausibility, showing that the framework is theoretically plausible, and I am really interested in how we can solve the robustness issue.) I also noticed a paper on Bayesian modeling of human-AI complementarity in the references (Steyvers et al.'s PNAS paper). Could you tell us more about how Bayesian methods can be used in this context? Again, thank you so much for sharing this with us.

@sudhamshow

sudhamshow commented Oct 27, 2022

Thank you Professor Heidari, for introducing us to such a novel paper and walking us through the creative design decision process. A couple of questions arose while reading your work -

  1. I was wondering why you did not consider sequential decision-making processes while building your framework. An abundance of present-day human-in-the-loop and active-learning tasks are sequential. Do you think extending your framework to these would be helpful?
  2. I was wondering if the differences in the ways humans and machines perceive a given input would have an impact on the design of your framework. A machine is trained to perceive an input instance $x \in \chi^n$ and is clearly prepared to look for particular covariates in the inputs. However, for a task that is not methodological or with inconsistent training, a human might only realise $x \subseteq \chi^n$. Wouldn't this be a problem while further perceiving the instance s(x) or when choosing a path of action ( $\pi(x)$ )?
  3. You rightly mention in your paper that noise has to be captured in the choice of action ( $\epsilon_i(s_i(x))$ ). Since human behaviour is non-deterministic and depends on several external factors, I was wondering if $s_i$ could be considered a random variable ( $s_i \sim F(E(s_i), Var(s_i))$, where F is some distribution function ).
  4. I was also wondering whether $\pi_h$ and $\pi_m$ could be considered independent in the computation of the joint policy. Suppose both of them had a high correlation; wouldn't the computation of the policy be biased? (A small sketch of what I mean follows below.)
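On point 4, the classic Bates-Granger forecast-combination result illustrates why the correlation matters (a standard result sketched with made-up numbers, not taken from the paper): for unbiased predictors, the optimal linear weights are $w \propto \Sigma^{-1}\mathbf{1}$, where $\Sigma$ is the error covariance, so highly correlated errors shift the weights sharply.

```python
# Bates-Granger combination: optimal linear weights for unbiased predictors
# are w ∝ Σ^{-1} 1, with Σ the error covariance of (pi_H, pi_M).
import numpy as np

def combination_weights(error_cov):
    ones = np.ones(error_cov.shape[0])
    w = np.linalg.solve(error_cov, ones)
    return w / w.sum()

var_h, var_m = 0.09, 0.04                    # human noisier than machine
for rho in (0.0, 0.8):                       # independent vs. highly correlated
    cov = rho * np.sqrt(var_h * var_m)
    sigma = np.array([[var_h, cov], [cov, var_m]])
    print(rho, combination_weights(sigma))   # high rho -> extreme weights
```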

@helyap

helyap commented Oct 27, 2022

Hi Prof. Heidari,

Thank you for sharing your research with us. The recommendations and evidence put forth in your framework are promising for improving ML applications. As a common concern in conversations about both ML and human decision-making processes, I'm curious about your perspective on the underlying problem of biased data that may distort both human and algorithmic calculations. Are there any promising detection methods that can help identify unwanted social bias in the data being used?

Thank you and I'm looking forward to your presentation.

@bhavyapan
Contributor

Hello Prof. Heidari,

Based on my limited exposure to Human-ML interaction literature, I was wondering how the research models individual human decision makers' internal processing in a generalisable manner, as it is bound to be affected by distinct inferences, heuristics, and biases. This question further extends to the consideration of sequential decision-making processes in modelling, and how this may change the way research considers models. I look forward to your insights on these points.

Thank you.

@Emily-fyeh

Hi Professor Heidari,
Thank you for sharing your work with us!
I am wondering how you would interpret the proportion of intrinsic biases in human-ML interaction. How do you determine the weight between human judgment and machine workflow design? Also, like many of my peers, I would like to know if there are more specific examples of this interactive decision-making process in different social contexts/scenarios.
Thanks!

@yunshu3112

Hi Professor Heidari,

I gained a lot from reading your paper and was impressed by the rigor of the setup. As you have shown that complementarity always exists, I wonder: is the optimal joint policy always unique? Does uniqueness matter? For future applications of this framework, do you expect one unique optimal joint policy to be reached to assist decision-making, or do you expect developments in machine learning to shrink $w_H^*$ and eventually reach $w_H^* = 0$?

Thank you!

@JerryCG

JerryCG commented Oct 27, 2022

Hi Professor Heidari,

Combining human expertise and ML prediction power is definitely one of the hottest topics nowadays in HCI research. Your research reminds me of another empirical study in radiology done by Professor Nikhil Agarwal and his coauthors from MIT. One of his intriguing findings is that AI prediction can be most problematic when it is uncertain about the decision, e.g., predicting the probability of A as 0.45 and B as 0.55. And AI + human can be most beneficial in exactly this kind of scenario, in the sense of increasing accuracy. Therefore, I wonder if your framework can capture and explain this kind of unique advantage of the hybrid mode when the AI is uncertain of its prediction.
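A toy version of the routing rule I have in mind (purely illustrative; not the actual design from that study):

```python
# Toy uncertainty-based deferral: trust the ML unless its predicted
# probability is close to 0.5, in which case defer to the human.
def hybrid_decision(ml_prob: float, human_label: int, margin: float = 0.1) -> int:
    if abs(ml_prob - 0.5) < margin:
        return human_label                   # AI uncertain -> human decides
    return int(ml_prob >= 0.5)               # AI confident -> AI decides

print(hybrid_decision(0.93, human_label=0))  # confident ML wins: 1
print(hybrid_decision(0.55, human_label=0))  # uncertain ML defers: 0
```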

Best,
Jerry (Guo) Cheng

@UjjwalSehrawat

Hi Professor Heidari,

Thank you for sharing your great work with us. The within-subject inconsistency in decision-making is a time- and context-dependent phenomenon and can be quite difficult to model for various reasons, including missing data and unobserved/unobservable confounding factors. My question is: how would you propose to tackle this challenge of measuring human covariates of a complex (oftentimes unmeasurable) nature, and their interactions, under your formalization, so that our comparisons of ML and human decisions are not themselves biased by measurement limitations? And how fundamental do you believe such a challenge is to the problem space proposed in the paper?

Warm regards,
Ujjwal Sehrawat

@xinyi030

xinyi030 commented Nov 9, 2022

Dear Professor,

Thanks for sharing your findings with us.

By reviewing the hybrid human-ML decision-making systems, I feel there is a fundamental difference between what humans and computers are good at. Humans are conscious and good at making plans and decisions in complex scenarios, but they are not good at processing large amounts of data, while computers are good at efficient data processing but can't make basic judgments as easily as humans can. This significant difference also means that the benefits people get from working deeply with computers are much higher than the benefits they get from cooperative transactions with other people; computers are thus good aids for humans, not competitors.

In addition, I have some related questions. Will a field like deep learning see the convergence of statistics and machine learning? In other words, will statisticians apply computer-intensive deep learning model paradigms? Will statistical tools be used by machine learning researchers to advance the field?

@cgyhumble0612

Hi Professor,

Thanks for sharing this interesting idea with us. I would like to learn more about the empirical applications of this system. What do you think of its use in the manufacturing and financial industries? Thank you.

@QichangZheng

Greetings, Professor Heidari
I appreciate you sharing your creative work. Have you thought about how different industries and professional contexts might alter the interactive decision-making process between humans and machines, in addition to the paradigm that you mention in the paper?
