
10/14: Sendhil Mullainathan #5

Open
shevajia opened this issue Oct 12, 2021 · 110 comments

Comments

@shevajia
Contributor

Comment below with questions or thoughts about the reading for this week's workshop.

Please post your comments by Wednesday at 11:59 PM, and upvote at least five of your peers' comments on Thursday before the workshop. Only 'thumbs-up' reactions count toward 'top comments,' but you can add other emojis on top of the thumbs-up.

@nijingwen

Thank you for coming and offering us this lecture. When we do research, we all carry some bias, even if we do not realize it.
I am looking forward to hearing from you on Friday.

Thanks again!

@pranathiiyer

Thanks so much for sharing this paper with us! It was great to see how tradeoffs between bias and variance can be modelled.
It goes without saying how ubiquitous prediction with ML algorithms has become; however, these algorithms come with several problems of ethics, fairness, and bias, all of which are being widely discussed at the moment. Several algorithms can be biased with respect to race and gender, despite these being protected attributes. You speak of algorithms in the healthcare context, where the sensitivity of data has always been a matter of concern. Did you have to deal with any protected attributes, or with aspects of data anonymity? Moreover, does accounting for bias in algorithms make them more complex, and hence less transparent, or can it be done in a simplified manner too? I would like to know what you think!

@nswxin

nswxin commented Oct 13, 2021

Thank you so much for coming! It's true that behavioral error (defined as the tendency to over-emphasize dispositional, or personality-based, explanations for behaviors observed in others while under-emphasizing situational explanations) is common in daily life and may cause social disruption if it is not handled meticulously. The procedure developed by your team therefore sounds very meaningful! However, while human beings tend to misjudge because of bias and emotion, I believe there are issues specific to computers and calculation that differ from human failings but may cause problems of their own. I'm looking forward to learning more!

@yjhuang99

Hi Professor Mullainathan, thanks for coming and giving us this presentation! Your paper provides a new perspective, and the leading example makes the point clear: researchers have long focused on posing and solving causal inference problems, but these may not be the central problem in every policy application. In many settings, predictions made with machine learning methods can be very important and have large policy impacts. I am curious how you actually model these kinds of prediction problems, how large the data need to be for credibility, and how far the conclusions can be generalized.

@JadeBenson

Thank you so much for this interesting research! This paper poses an exciting way to re-imagine policy problems and our role as computational social scientists in answering them. I was wondering how you would envision applying these results to the health care system. Would doctors or Medicare claims require patients to be recommended by this algorithm? Would we need enough information from each potential patient to construct the 3,000+ variables for them to be properly categorized? I would love to hear more about how we could incorporate these types of machine learning approaches into our current systems.

@YLHan97

YLHan97 commented Oct 13, 2021

Hi Professor Mullainathan, thank you so much for sharing such an interesting topic! Nowadays, machine learning is becoming more and more useful in many industries. Everyone also cares about social welfare, so I think using an example from health policy to identify social welfare gains is a good approach.

@JoeHelbing

JoeHelbing commented Oct 13, 2021

Where machine learning is applied to questions such as Medicare procedures or parole releases, or to other decisions that affect an individual's life in very consequential ways, how do we properly integrate these systems into the decision process?

With a human weighing evidence, there is an individual with a thought process they could ostensibly explain, and who thus bears both responsibility for the decision and the social role of someone to appeal to as "the other side" in an argument of sorts. Even though the ML tools are much more accurate as predictors, how do we avoid their becoming both a crutch for people making difficult decisions and an unassailable explanation for why those decisions were made?

@zhiqianc

Thank you, Prof. Mullainathan. The paper is quite informative and instructive, and it gave me a basic grounding in machine learning. My understanding is that the OLS model, which aims at zero bias, is not well suited to prediction because it minimizes only the bias term. Since bias and variance trade off against each other (variance must rise if bias is pushed to its minimum), and prediction demands accuracy and low variance to narrow the range of possible outcomes, machine learning models bring both the variance term and the bias term into the minimization. But I still fail to understand the function and workings of the regularizer R(f), as well as how it is constructed.
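
To make the tradeoff concrete, here is a minimal sketch of how a regularizer enters the estimation; scikit-learn is an assumption here, since the paper itself is tool-agnostic. The estimator minimizes in-sample loss plus a penalty, min_f Σ_i L(f(x_i), y_i) + λ·R(f); for the lasso below, R(f) is the ℓ1 norm of the coefficients, which deliberately biases them toward zero in exchange for lower variance.

```python
# Sketch of the penalized objective:
#     min_f  sum_i L(f(x_i), y_i) + lambda * R(f)
# For the lasso, R(f) is the l1 norm of the coefficients: shrinking them
# toward zero adds a little bias but can cut variance a lot out of sample.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

rng = np.random.default_rng(0)
n, p = 100, 50                        # few observations, many regressors
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 1.0                        # only 5 of the 50 variables matter
y = X @ beta + rng.normal(size=n)

X_train, X_test = X[:70], X[70:]
y_train, y_test = y[:70], y[70:]

ols = LinearRegression().fit(X_train, y_train)    # unbiased in sample, high variance
lasso = Lasso(alpha=0.1).fit(X_train, y_train)    # biased, but lower variance

print("OLS   out-of-sample R^2:", round(ols.score(X_test, y_test), 3))
print("lasso out-of-sample R^2:", round(lasso.score(X_test, y_test), 3))
```

With many regressors and few observations, the penalized fit typically predicts better out of sample even though its coefficients are biased, which is exactly the tradeoff described above.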

@awaidyasin

Thank you, Prof. Mullainathan, for your work! It is always refreshing to come across new methods that improve upon our traditional understanding of evaluating policy. My question is more general in nature.

The way you have explained prediction policy problems still presents causal inference and prediction as two independent traditions that don't seem to communicate with each other. By that, I am referring to the urge to go after one over the other, i.e., you are either predicting or inferring, with separate models. I just wanted to know whether there are ways in which prediction and causal inference can be combined organically, perhaps under a unified model, rather than presented as competitors or as mutually exclusive.

@yiq029

yiq029 commented Oct 13, 2021

Hi Professor Mullainathan, thank you so much for sharing your work! Looking forward to your presentation!

@borlasekn

Thank you, Professor Mullainathan, for sharing your expertise with us. I had a question regarding an assertion you made in the summary for your talk. You note that facial features account for 30% of the explainable variation in whether a judge sends a given person to jail. However, you follow this by saying, "This finding is not explained by race, skin color, demographics or other known factors". It is now well understood that race does not have a biological basis and is instead a social construct, often associated with skin color, demographics, and so on. With this in mind, is it possible that judges still associate certain facial features with race, skin color, or background? It seems as though this would link the explanation back to a factor such as race.

@kuitaiw

kuitaiw commented Oct 13, 2021

Hi Professor Mullainathan. Thank you so much for sharing your research with us. Machine learning is more and more important for economics research. Looking forward to your presentation!

@GabeNicholson

Hi Professor, I think your research is very interesting, and it ties into other contexts where facial appearance has subconscious effects. For example, there are studies showing that people can accurately guess the winner of a political race based only on a one-second look at the headshots of the two candidates running (https://www.princeton.edu/news/2007/10/22/determine-election-outcomes-study-says-snap-judgments-are-sufficient).

Moreover, it's also been shown that babies can recognize attractive faces from as early as a few months after being born, indicating that there is some innate module in our minds to recognize certain facial features as "attractive" (https://www.newscientist.com/article/dn6355-babies-prefer-to-gaze-upon-beautiful-faces).

Do you think it is possible to extend your algorithm to come up with estimates for these two different scenarios?

@ChongyuFang

Hi Professor Mullainathan! Thanks so much for sharing your research on predictive inference with us in this week's CSS workshop. We would be very glad to have an understanding of predictive inference and how it may be applied to future policy work and nowcasting!

@sushanz

sushanz commented Oct 13, 2021

Dear Prof. Mullainathan,

Thank you for taking the time to come to our workshop. We are also studying natural experiments this week and look forward to hearing more from you about causal inference tools such as lab, field, and natural experiments for studying social problems. Many of us are going to do research of our own, and I believe your talk will generate and ignite more ideas!

@afchao

afchao commented Oct 13, 2021

Thank you for presenting to our group!

I'm not familiar with economic policy, so the answer may be painfully obvious, but why has prior research in the field been so focused on causal inference? As the umbrella-wielders could attest, prediction was possible long before advanced computational methods became available to improve it. Is it just a matter of scale, i.e., modern methods make it possible to predict meaningfully on the basis of huge datasets?

On a somewhat but not really related note, aren't prediction and causal inference two sides of the same coin? When you train a model to make predictions, isn't it doing so on the basis of causally relevant factors? The Kleinberg paper frames the relationship between prediction and inference as dichotomous, but I feel that strong prediction models can reveal hidden, causally relevant factors which can in turn be used for inference problems.

@LuZhang0128

Dear Professor Mullainathan, thank you for sharing your paper with us! I now have a better understanding of ML models and what they allow researchers to do beyond traditional methods. Given the benefits described in the paper, I believe these models will get more and more attention, as the paper advocates. I also believe that strong ML models can provide insight into causal inference. I hope to learn more details about your paper during the presentation.

@YaoYao121

YaoYao121 commented Oct 13, 2021

Hi, Professor Mullainathan. I appreciate you sharing your rich research experience with us. As machine learning becomes more and more popular in everyday life, I am very interested in your research on combining up-to-date computational techniques with micro-behavioral economic models to study the causal effects of classical variables of interest in economics. Looking forward to learning more details in your workshop!

@yujing-syj

Hi, Professor Mullainathan. Thanks so much for sharing this interesting topic with us. The combination of machine learning and social science problems is something we will all encounter in the future. Great to hear about some applications from you first!

@xin2006

xin2006 commented Oct 13, 2021

Hi Prof Mullainathan! Thanks for sharing such interesting research with us! The paper gives me more insight into bias, an important topic we have been discussing, which exists not only in the way researchers carry out surveys but also in real life, for example as racial and gender bias. Machine learning is a powerful and popular technical tool, so I am really looking forward to learning from you in the workshop how it can be applied to social science!

@taizeyu

taizeyu commented Oct 13, 2021

Hi Professor Mullainathan, I am very grateful that you are able to give us a presentation. I am looking forward to you sharing your experience in policy research.

@zoeyjiao1104

Hi Professor Mullainathan, thank you so much for presenting the paper. I am very interested in how artificial intelligence could interpret human behaviors, such as facial recognition and facial features. Looking forward to listening to the talk!

@k-partha

k-partha commented Oct 13, 2021

Hi Professor Mullainathan, thanks for presenting your work. I was wondering if your work on facial features can be extended more generally to labor markets - to what extent do facial features influence hiring and networking choices?

@ZHE-ZHANG-0213

Hello, Professor Mullainathan. Thank you so much for sharing your research with us. Machine learning is becoming increasingly important in economic research. Looking forward to your speech!

@ValAlvernUChic

Hi Prof Mullainathan! Thank you for coming to the lecture! Machine learning and its predictive capabilities are no doubt becoming increasingly important for policy issues. I had two questions related to this overwhelmingly technosocial approach:

  1. How heavily should we consider the results from predictive modeling in making policy decisions? While the data often (assuming a robust model) presents reliable predictions, it also seems to abstract away from humanistic considerations. In the context of the paper, the decision on how to allocate joint replacement surgery seems to imply a quick reduction to matters of utility and savings. Should that be enough? Does it matter?

  2. How far can these models account for socially complex issues like recidivism? For example, recidivism models are used to help judges decide on arrestees' sentences. However, these models have been criticized for relying on historical crime data that already disproportionately represent minorities, leading to higher recidivism scores for members of these communities. When COMPAS was interrogated, researchers found that it only performed slightly better than a human jury. With issues like underfunded schools or lack of access to healthcare being likely contributors to recidivism, is it possible to account for this in the models? If so, would we be penalizing underserved communities? If not, would we be then excluding these important factors? Should the focus then be shifted?

@Hongkai040

Hi Prof Mullainathan! Thank you for coming to the lecture! It's really awesome to study human behavior via machine learning, and those findings are really interesting (though some of them are counter-intuitive :) ). While reading your papers and research interests, a question lingered in my mind:
As humans, we all have our own biases. Is it possible for techniques like machine learning algorithms to discover the biases of the people who use them to do research, especially in the social sciences?

@sabinahartnett

Hi Professor Mullainathan -
Thank you for sharing your prior work with us! I'm excited to hear how you think these frameworks can be applied to other disciplines, and whether this is a universal framework for bias removal. Additionally, in a more theoretical vein: to what degree do you believe we can truly test the robustness of bias removal? Are there any truly 'neutral' algorithms?

@fiofiofiona

Hi Prof Mullainathan, thank you for coming to share your research! It's interesting to see how machine learning is being applied in various disciplines to provide predictions and help people make better decisions. One question I have is: given machine-learning-generated predictions, how can we identify the source of bias in the prior data and decisions? And can a model be biased because of the biased data we used to build and train it?

@hshi420

hshi420 commented Oct 13, 2021

Hi, Professor Mullainathan. I'm looking forward to learning about applying statistical learning methods to behavioral science. Traditional machine learning methods usually don't have this problem, but how do we solve the interpretability problem when using deep learning models in social science?

@bningdling

Thank you for coming to our workshop, Professor Mullainathan! I read your book *Scarcity: Why Having Too Little Means So Much* in June and was deeply intrigued by your interpretation of how scarcity influences our decision-making process, especially the concept of bandwidth.
I look forward to hearing your insights on machine learning during the workshop tomorrow. One thing I've been thinking about: a blind spot in hypothesis testing is the choice of the null hypothesis, which I'd call a meta-level bias. Can a trained algorithm solve a problem like this, given that it exists even before the program executes?

@ginxzheng

Thank you for coming to our workshop, Professor Mullainathan! If possible, I hope to hear more about regularization, in particular how to choose the regularization penalty empirically. Thanks!
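
For reference, the standard empirical recipe is k-fold cross-validation over a grid of penalties: refit the model at each candidate penalty on the training folds and keep the value with the lowest held-out prediction error. The sketch below assumes scikit-learn's LassoCV and is an illustration, not necessarily the procedure used in the paper.

```python
# Choosing the regularization penalty empirically via cross-validation.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=200)

# LassoCV fits the lasso at each penalty on each training fold and
# evaluates mean squared error on the corresponding held-out fold.
model = LassoCV(alphas=np.logspace(-3, 1, 50), cv=5).fit(X, y)
print("penalty chosen by 5-fold CV:", model.alpha_)
```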

@NikkiTing

Thank you for sharing your work, Professor Mullainathan! Your paper really brings a new perspective to machine learning applications in public policy. I do think this will help policymakers come up with better-guided and -targeted policies. I'm looking forward to your presentation tomorrow!

@qishenfu1

Hi Prof. Mullainathan, thank you very much for sharing your work! I am wondering whether you combine causal inference tools with advanced computational techniques, or whether you use econometric models alone. Thank you!

@TwoCentimetre

Thanks for sharing. Although I know nothing about this field, I can still tell that the topic is truly interesting.

@luckycindyyx

Thank you so much for sharing such interesting research! It is so nice to see how machine learning could be applied to behavioral science. Looking forward to your lecture!

@egemenpamukcu

Hi Professor Mullainathan, I am looking forward to hearing more about the intersection of statistical learning and human behavior. Thank you!

@ShiyangLai

Dear Dr. Mullainathan,
In your article, you wrote that "Even this small set of examples are biased by what we imagine to be predictable. Some things that seem unpredictable may actually be more predictable than we think using the right empirical tools." Could you give one or two examples to explain this statement? Looking forward to attending your lecture.

@kthomas14

Thank you for sharing your work, Professor Mullainathan! I look forward to hearing more about the application of machine learning and causal inference in policy, and I was wondering what other areas you and your team would apply these methods to.

@ttsujikawa

Thank you very much for presenting this wonderful research!
I was wondering to what extent the government should expand data accessibility for educational and research institutions so that they can engage more deeply in the development of public policy.

@LFShan

LFShan commented Oct 14, 2021

Thank you for the presentation. I would like to learn more about behavioral science and the use of computational science in behavioral studies.

@YijingZhang-98

YijingZhang-98 commented Oct 14, 2021

Dear Prof. Mullainathan, thanks for your excellent work. I look forward to learning more in tomorrow's workshop.

@koichionogi

Thank you for presenting your research. I would like to hear your thoughts on the explainability of these umbrella-type machine learning predictions. Some critics claim that one of the biggest obstacles to policymakers applying such predictions is the difficulty of understanding the mechanisms, that is, how the predictions were made. I would like to know how you think the relationship between society and these predictions will change in the future. Will there be more explainable machine learning predictions (with the same degree of accuracy), or will the literacy of decision makers change? Thank you so much again for your lecture!

@wanxii

wanxii commented Oct 14, 2021

Thank you for your presentation and I'm looking forward to your inspiring talk.

@97seshu

97seshu commented Oct 14, 2021

Thank you for presenting! I wonder how the performance of the algorithm would change if we included biological motion as a variable.

@y8script

Dear Prof. Mullainathan, thank you for your interesting work! I'm looking forward to your lecture. I want to ask your opinion on the boundary between prediction and causation problems. Do you think causation problems are a subset of prediction problems, since they also contain a default prediction step? Or do you think they are different because they serve different purposes?

@FrederickZhengHe

Thanks very much for your paper; I look forward to listening to your presentation tomorrow!

@XTang685

Hi Prof. Mullainathan, thank you for presenting your great work! After reading your paper, I understand much more about machine learning and causal inference. Looking forward to your lecture!

@YileC928

Great interdisciplinary work! Thanks for sharing your paper with us, Prof. Mullainathan. I am looking forward to learning more about your recent work in causal inference.

@xzmerry

xzmerry commented Oct 14, 2021

Dear Prof. Mullainathan, it is really interesting to read your paper about human bias in identifying facial features. How does such human bias systematically affect people's market behavior?

I notice that you will discuss some of your recent ongoing research rather than this week's workshop paper. I am looking forward to hearing more about your research in our workshop!

@robertorg

Dear Dr. Mullainathan, thank you for coming to our workshop. My question concerns the role of causality. Social scientists are usually called on to forecast the effects of policy interventions in settings where strategic adjustment happens. An accurate forecast of an intervention's effect usually requires thinking counterfactually about situations for which there may in fact be no data to reveal how strategic adjustment will occur. In such settings, isn't a combination with theoretical accounts of mechanisms (causality) required to be informative about the magnitude of the effects associated with those mechanisms?

@chrismaurice0

Looking forward to hearing you speak tomorrow!

@zixu12

zixu12 commented Oct 14, 2021

Thank you so much for coming to the workshop! I look forward to seeing you soon!

@LynetteDang

LynetteDang commented Oct 14, 2021

Thank you, Professor Mullainathan. I notice that you briefly mentioned the AI-driven recidivism algorithm as part of your discussion of prediction policy problems. With your knowledge and expertise, I am sure you have looked into the COMPAS algorithm, ProPublica's review of its accuracy, and its potential biases against minorities. Much of this touches on ethical questions in the realm of data science. I would love to hear more about it from you tomorrow!

@FranciscoRMendes

Look forward to hearing your talk!

@sdbaier

sdbaier commented Oct 14, 2021

On your observation that social scientists and policy-makers alike often focus on rain dance–like problems and neglect umbrella-like policy problems: Why is this the case? Methodological limitations, canonical thinking, or potentially other drivers?

@chentian418

Hi Professor Mullainathan, it's very exciting to see you at the workshop! Indeed, there are many situations in which forward-looking inference is much more useful than purely causal inference. My question is: for economics and behavioral science questions, to what extent do you think algorithms like deep learning and machine learning can support this kind of predictive inference? Does that mean training these sophisticated models would be enough to extrapolate human behavior and make predictions? How should we interpret these data-driven results?

Thank you, and I look forward to your presentation!

@boyafu

boyafu commented Oct 14, 2021

Thank you for sharing this fascinating research! I look forward to the presentation.

@j2401

j2401 commented Oct 14, 2021

Hi professor Mullainathan,

We have been talking about the bias-variance trade-off quite a lot during our first year. Looking forward to your presentation tomorrow bringing more insight into what lies behind it!

@yierrr

yierrr commented Oct 14, 2021

Thank you so much for the presentation Professor Mullainathan! Looking forward to it!

@DehongUChi

Hi Professor Mullainathan, thank you so much for sharing with us! Looking forward to your presentation!
