10/14: Sendhil Mullainathan #5
Comments
Thanks for coming and offering us this lecture. When we do research, we may carry some bias even if we do not realize it. Thanks again!
Thanks so much for sharing this paper with us! It was great to see how tradeoffs between bias and variance can be modelled.
Thank you so much for coming! It's true that behavioral error (defined as the tendency for people to over-emphasize dispositional, or personality-based, explanations for behaviors observed in others while under-emphasizing situational explanations) is common in daily life and may cause social disruption if it's not handled carefully. Thus, the procedure developed by your team sounds very meaningful! However, while human beings tend to misjudge due to bias and emotion, I believe there are issues specific to computers and calculations, different from human behavior, that may cause potential problems as well. I'm looking forward to learning more!
Hi Professor Mullainathan, thanks for coming and giving us this presentation! Your paper provides a new perspective, and the leading example makes the point easy to grasp: researchers have long focused on making causal inferences and solving causal inference problems, but these may not be the central problem in every policy application. In many cases, predictions made with machine learning methods can be very important and have large policy impacts. I am curious how you actually model these kinds of prediction problems, how large the data needs to be for credibility, and how the conclusions can be generalized.
Thank you so much for this interesting research! This paper poses an exciting way to re-imagine policy problems and our role as computational social scientists in answering them. I was wondering how you would envision applying these results to the health care system. Would doctors or Medicare claims require patients to be recommended by this algorithm? Would we need enough information from each potential patient to construct the 3,000+ variables for them to be properly categorized? I would love to hear more about how we could incorporate these types of machine learning approaches into our current systems.
Hi Professor Mullainathan, thank you so much for sharing such an interesting topic! Machine learning is becoming more and more useful in many industries, and everyone cares about social welfare, so I think using the example from health policy to identify social welfare gains is a good approach.
Where machine learning is applied to questions like Medicare procedures or parole releases, or other decisions that affect an individual's life in very consequential ways, how do we properly integrate these systems into the decision process? With a human weighing evidence, there's an individual with a thought process they could ostensibly explain, who thus has both responsibility for the decision and the social role of someone to appeal to as "the other side" in an argument of sorts. Even though the ML tools are much more accurate as predictors, how do we keep them from becoming both a crutch for people making difficult decisions and an unassailable explanation for why those decisions were made?
Thank you Prof. Mullainathan. The paper is quite informative and instructive, and it gave me a basic grounding in machine learning. My understanding is that an OLS model, which aims for zero bias, is not well suited to prediction because it minimizes only the bias term. Since bias and variance trade off against each other (variance must increase if bias is pushed to its lowest), prediction demands accuracy and low variance to narrow the range of possible outcomes. Machine learning models therefore include both the variance and bias terms in the minimization. But I still don't understand the role of the regularizer R(f) or how it is constructed.
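A minimal sketch of how a regularizer enters the minimization, using ridge regression as a hypothetical stand-in for the paper's general R(f) (illustrative code, not the authors' implementation): the penalty weight `lam` tunes the bias-variance trade-off, with `lam = 0` recovering plain OLS.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Minimize ||y - X b||^2 + lam * ||b||^2 via the closed-form solution."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0]) + rng.normal(scale=0.5, size=100)

b_ols = ridge_fit(X, y, lam=0.0)     # lam = 0: plain OLS, lowest in-sample bias
b_ridge = ridge_fit(X, y, lam=10.0)  # lam > 0: coefficients shrink toward zero
# Shrinkage trades a little bias for lower variance across samples:
print(np.linalg.norm(b_ridge) < np.linalg.norm(b_ols))  # prints True
```

The general pattern is the same for other regularizers: the loss term keeps in-sample fit (bias) in check while R(f) caps the model complexity that drives variance.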
Thank you Prof. Mullainathan for your work! It is always refreshing to come across new methods that improve upon our traditional understanding of evaluating policy. My question is more general in nature. The way you have explained prediction policy problems still presents causal inference and prediction as two independent traditions that don't seem to communicate with each other. By that, I am referring to the urge to go after one over the other, i.e., you are either predicting or inferring from separate models. I wanted to ask whether there are ways in which prediction and causal inference can be combined organically, perhaps under a unified model, rather than presented as competitors or mutually exclusive.
Hi Professor Mullainathan, thank you so much for sharing your work! Looking forward to your presentation!
Thank you Professor Mullainathan for sharing your expertise with us. I had a question regarding an assertion in the summary for your talk. You note that facial features account for 30% of the explainable variation in whether a judge sends a given person to jail. However, you follow this by saying "This finding is not explained by race, skin color, demographics or other known factors". It is now well understood that race does not have a biological basis and is instead a social construct, often associated with skin color, demographics, etc. With this in mind, is it possible that judges still associate certain facial features with race, skin color, or background? It seems as though this would link the explanation back to a factor such as race.
Hi Professor Mullainathan. Thank you so much for sharing your research with us. Machine learning is becoming more and more important for economics research. Looking forward to your presentation!
Hi Professor, I think your research is very interesting, and it ties into other contexts where facial appearance has subconscious effects. For example, there are studies showing that people can accurately guess the winner of a political race based only on a one-second look at the headshots of the two candidates running (https://www.princeton.edu/news/2007/10/22/determine-election-outcomes-study-says-snap-judgments-are-sufficient). Moreover, it has also been shown that babies can recognize attractive faces from as early as a few months after birth, indicating that there is some innate module in our minds that registers certain facial features as "attractive" (https://www.newscientist.com/article/dn6355-babies-prefer-to-gaze-upon-beautiful-faces). Do you think it is possible to extend your algorithm to come up with estimates for these two scenarios?
Hi Professor Mullainathan! Thanks so much for sharing your research on predictive inference with us in this week's CSS workshop. We would be very glad to gain an understanding of predictive inference and how it may be applied to future policy work and nowcasting!
Dear Prof. Mullainathan, thank you for taking the time to come to our workshop. We are also studying natural experiments this week and look forward to hearing more from you on causal inference tools such as lab, field, and natural experiments for studying social problems. Many of us are going to do research of our own, and I believe your speech will generate and ignite more ideas!
Thank you for presenting to our group! I'm not familiar with economic policy, so the answer may be painfully obvious, but why has prior research in the field been so focused on causal inference? As the umbrella-wielders could attest, predictions were possible long before advanced computational methods became available to improve them. Is it just a matter of scale, i.e., modern methods make it possible to predict meaningfully on the basis of huge datasets? On a somewhat related note, aren't prediction and causal inference two sides of the same coin? When you train a model to make predictions, isn't it doing so on the basis of causally relevant factors? The Kleinberg paper frames the relationship between prediction and inference as dichotomous, but I feel that strong prediction models can reveal hidden, causally relevant factors that can in turn be used for inference problems.
Dear Professor Mullainathan, thank you for sharing your paper with us! I gained a better understanding of ML models and what they allow researchers to do beyond traditional methods. Given the benefits mentioned in the paper, I believe these models will get more and more attention, as the paper advocates. I also believe that strong ML models can provide insight for causal inference. I hope to learn more details about your paper during the presentation.
Hi, Professor Mullainathan. I appreciate that you could share your research experience with us. As machine learning becomes more and more common in everyday life, I am very interested in your research topic of combining updated computational techniques with micro-behavioral economic models to study causal effects of classical variables of interest in economics. Looking forward to learning more details in your workshop!
Hi, Professor Mullainathan. Thanks so much for sharing this interesting topic with us. The combination of machine learning and social science problems is what we will work on in the future. Great to hear about some applications from you first!
Hi Prof Mullainathan! Thanks for sharing such interesting research with us! The paper gives me more insight into bias, an important topic we have been discussing, which exists not only in the way researchers carry out surveys but also in real life, such as race and gender bias. Machine learning is a powerful and popular technical tool, so I am really looking forward to learning from you in the workshop how it can be applied to social science!
Hi Professor Mullainathan, I really appreciate that you are able to give a presentation for us. I am looking forward to hearing you share your experience in policy research.
Hi Professor Mullainathan, thank you so much for presenting the paper. I am very interested in how artificial intelligence could interpret human behaviors, such as facial recognition and facial features. Looking forward to listening to the talk!
Hi Professor Mullainathan, thanks for presenting your work. I was wondering if your work on facial features can be extended more generally to labor markets - to what extent do facial features influence hiring and networking choices?
Hello, Professor Mullainathan. Thank you so much for sharing your research with us. Machine learning is becoming increasingly important in economic research. Looking forward to your speech!
Hi Prof Mullainathan! Thank you for coming to the lecture! Machine learning and its predictive capabilities are no doubt becoming increasingly important for policy issues. I had two questions related to this overwhelmingly technosocial approach:
Hi Prof Mullainathan! Thank you for coming to the lecture! It's really awesome to study human behavior via machine learning. The findings are really interesting (though some of them are counter-intuitive :) ). While reading your papers and research interests, a question came to my mind:
Hi Professor Mullainathan -
Hi Prof Mullainathan, thank you for coming to share your research! It's interesting to see how machine learning is being applied in various disciplines to provide predictions and help people make better decisions. One question I have is: given machine learning-generated predictions, how could we identify the source of bias in the prior data and decisions? And can a model be biased because of the biased data used to build and train it?
Hi, Professor Mullainathan. I'm looking forward to learning about applying statistical learning methods to behavioral science. Although traditional machine learning methods usually don't have this problem, how do we solve the interpretability problem when using deep learning models in social science?
Thank you for coming to our workshop, Professor Mullainathan! I read your book _Scarcity: Why Having Too Little Means So Much_ in June and was deeply intrigued by your interpretation of how scarcity influences our decision-making process, especially the concept of bandwidth.
Thank you for coming to our workshop, Professor Mullainathan! If possible, I hope to hear more about regularization, specifically how the regularization penalty is chosen empirically. Thanks!
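On the question of choosing the penalty empirically, a hedged sketch of the standard recipe is k-fold cross-validation over a grid of penalty values, keeping the value with the lowest held-out prediction error (generic ridge regression for illustration; the function names are my own, not from the paper):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: argmin ||y - X b||^2 + lam * ||b||^2."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def cv_mse(X, y, lam, k=5):
    """Mean held-out squared error of ridge(lam) under k-fold cross-validation."""
    folds = np.array_split(np.arange(len(y)), k)
    errors = []
    for hold in folds:
        train = np.setdiff1d(np.arange(len(y)), hold)
        b = ridge_fit(X[train], y[train], lam)
        errors.append(np.mean((y[hold] - X[hold] @ b) ** 2))
    return float(np.mean(errors))

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = X[:, 0] + rng.normal(size=200)

# Pick the penalty with the lowest cross-validated error on a coarse grid.
grid = [0.01, 0.1, 1.0, 10.0, 100.0]
best_lam = min(grid, key=lambda lam: cv_mse(X, y, lam))
print(best_lam)
```

The point of the recipe is that out-of-sample performance, rather than any theoretical criterion, selects the penalty, which is what "empirical tuning" usually means in this literature.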
Thank you for sharing your work, Professor Mullainathan! Your paper really brings a new perspective to machine learning applications in public policy. I do think this will help policymakers come up with better-guided and -targeted policies. I'm looking forward to your presentation tomorrow!
Hi Prof. Mullainathan, thank you very much for sharing your work! I am wondering whether you combine causal inference tools with advanced computational techniques, or just use econometric models. Thank you!
Thanks for sharing. Although I know nothing about this field, I can still feel that the topic is truly interesting.
Thank you so much for sharing such interesting research! It is so nice to see how machine learning can be applied to behavioral science. Looking forward to your lecture!
Hi Professor Mullainathan, I am looking forward to hearing more about the intersection of statistical learning and human behavior. Thank you!
Dear Dr. Mullainathan,
Thank you for sharing your work, Professor Mullainathan! I look forward to hearing more insights about the application of machine learning and causal inference in policy, and I was wondering what other applications you and your team plan to apply these methods to.
Thank you very much for presenting this wonderful research!
Thank you for the presentation. I would like to learn more about behavioral science and the use of computational science in behavioral studies.
Dear Prof. Mullainathan, thanks for your excellent work. I look forward to learning more in tomorrow's workshop.
Thank you for presenting your research. I would like to hear your thoughts on the explainability of these machine learning umbrella-type predictions. Some critics claim that one of the biggest obstacles for policy makers in applying such predictions is the difficulty of understanding the mechanisms (how the predictions were made). I would like to know how you think the relationship between society and these predictions will change in the future. Will there be more explainable machine learning predictions (with the same degree of accuracy), or will the literacy of decision makers change? Thank you so much again for your lecture!
Thank you for your presentation, and I'm looking forward to your inspiring talk.
Thank you for presenting! I wonder how the performance of the algorithm would change if we included biological motion as a variable.
Dear Prof. Mullainathan, thank you for your interesting work! I'm looking forward to your lecture. I want to ask your opinion on the boundaries between prediction and causation problems. Do you think causation problems are a subset of prediction problems, since they also contain an implicit prediction step? Or do you think they are different because they have different purposes?
Thanks very much for your paper; I look forward to listening to your presentation tomorrow!
Hi Prof. Mullainathan, thank you for presenting your great work! After reading your paper, I know a lot more about machine learning and causal inference. Looking forward to your lecture!
Great interdisciplinary work! Thanks for sharing your paper with us, Prof. Mullainathan. I am looking forward to learning more about your recent work in causal inference.
Dear Prof. Mullainathan, it is really interesting to read your paper about human bias in identifying facial features. How does such human bias affect people's market behavior systematically? I notice that you will discuss some of your recent ongoing research rather than this week's workshop paper. I am looking forward to hearing more about your research in our workshop!
Dear Dr. Mullainathan, thank you for coming to our workshop. My question regards the role of causality. Social scientists are usually called on to forecast the effects of policy interventions in settings where strategic adjustment happens. An accurate forecast of the effect of an intervention usually requires thinking counterfactually about situations for which there may in fact be no data to reveal how strategic adjustment will occur. In such settings, isn't a combination with theoretical accounts of mechanisms (causality) required to be informative about the magnitude of the effects associated with those mechanisms?
Looking forward to hearing you speak tomorrow!
Thank you so much for coming to the workshop! I look forward to seeing you soon!
Thank you Professor Mullainathan. I notice that you briefly mentioned the AI-driven recidivism algorithm as part of your discussion of prediction policy problems. With your knowledge and expertise, I am sure you have looked into the COMPAS algorithm and ProPublica's review of its accuracy and its potential biases against minorities. Much of this touches on ethical questions in the realm of data science. I would love to hear more about it from you tomorrow!
Looking forward to hearing your talk!
On your observation that social scientists and policy-makers alike often focus on rain dance–like problems and neglect umbrella-like policy problems: why is this the case? Methodological limitations, canonical thinking, or other drivers?
Hi Professor Mullainathan, it's very exciting to see you at the workshop! Indeed, there are many situations where forward-looking inference is much more useful than purely causal inference. My question is: for economics and behavioral science questions, to what extent do you think algorithms like deep learning would help explain forecast inference? Does that mean training these delicate models would be enough to extrapolate human behavior and make predictions? How should we interpret these data-driven results? Thank you, and I look forward to your presentation!
Thank you for sharing this fascinating research! I look forward to the presentation.
Hi Professor Mullainathan, we have been talking about bias-variance trade-offs quite a lot during our first year. Looking forward to your presentation tomorrow bringing more insight into them!
Thank you so much for the presentation, Professor Mullainathan! Looking forward to it!
Hi Professor Mullainathan, thank you so much for sharing with us! Looking forward to your presentation!
Comment below with questions or thoughts about the reading for this week's workshop.
Please make your comments by Wednesday 11:59 PM, and upvote at least five of your peers' comments on Thursday prior to the workshop. You need to use 'thumbs-up' for your reactions to count towards 'top comments,' but you can use other emojis on top of the thumbs up.