Questions for Uri Hasson concerning his talk on "Deep language models as a cognitive model for natural language processing in the human brain." #4
Comments
Considering that deep language models and the human brain may share some computational principles, how can we leverage this similarity to improve our understanding of human cognition, particularly in the context of natural language processing, while still addressing the key differences between these models and human-centric attributes?
In your study, you align the computational principles of autoregressive deep language models (DLMs) with the cognitive processes of the human brain during language processing. Knowing that these models do not engage in semantic or syntactic analysis but rather rely on statistical language patterns learned from large data sets, how might this alignment influence our understanding of the neurobiological underpinnings of language?
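For readers unfamiliar with what "relying on statistical language patterns" means concretely: an autoregressive DLM simply assigns a probability to every candidate next word given the preceding context, and the negative log of that probability (surprisal) is the quantity typically compared with neural responses. A minimal sketch using the Hugging Face transformers library; the model choice, context sentence, and candidate word are all illustrative:

```python
# Minimal sketch: next-word prediction in an autoregressive DLM (GPT-2).
# No explicit syntactic or semantic analysis is performed; the model just
# produces a probability distribution over the next token given context.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "The children listened quietly to the"  # illustrative context
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability distribution over the vocabulary for the next token.
next_probs = torch.softmax(logits[0, -1], dim=-1)

# The five most probable continuations.
top = torch.topk(next_probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p:.3f}")

# Surprisal of one particular continuation: -log p(word | context).
# Analyses in this literature relate such values to neural activity
# recorded around word onset.
word_id = tokenizer(" story", return_tensors="pt").input_ids[0, 0]
print("surprisal:", -torch.log(next_probs[word_id]).item())
```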
Hi Professor Hasson, thank you for your talk! I am interested in how current Large Language Models could be applied to reasoning in neural network research.
Thanks for sharing, Prof. Hasson! I have a question: to what extent can deep language models serve as an accurate cognitive model for natural language processing in the human brain, considering both the shared computational principles and the disparities observed during language comprehension and production?
Thank you for sharing your work! Do the insights from such studies vary across languages, or are they applicable only to specific settings? I recall coming across a study in which, contrary to predictions, large-scale language models proved ineffective at representing children's utterances: the authors used a deep learning-based language development screening model, built on word and part-of-speech features, to investigate its effectiveness in detecting language learning impediments in children.
The study suggests that large-scale language models, although effective in general natural language processing tasks, do not perform as expected in representing the nuances of children's speech. Given the unique linguistic features of early language development, such as incomplete grammatical structures and limited vocabulary, could adapting these models to focus more on phonetic and prosodic features, rather than traditional lexical and syntactic elements, improve their applicability in developmental language screening? What other modifications could be considered to enhance the performance of these models in the context of early language development?
Hi Professor Hasson, I'm intrigued by how cognitive research intersects with various deep learning model architectures, such as LSTMs and Transformers, along with techniques like prompt engineering tailored for Large Language Models (LLMs). Given the dynamic nature of NLP research within computer science, how do we navigate the integration of these diverse approaches into cognitive studies? Are we primarily focused on exploring the fundamental parallels with neural networks in our cognitive research endeavors?
Thanks for sharing! One of your studies focuses on predictions and neural responses primarily within English-speaking contexts. How might these computational principles adapt or vary when applied to languages with significantly different syntactic or morphological structures, such as agglutinative languages like Turkish, or languages with a different writing system, like Chinese?
Thanks for sharing! Given the current findings, how should future research further explore and utilize deep language models to enhance our understanding of the mechanisms underlying human language processing?
Hi Prof. Hasson, thanks for sharing your fascinating research. What would be the link from modeling individual language processing to modeling collective cultural behaviors? How might language models assist in such an endeavor?
Thank you for sharing your paper with us! From my perspective, the most interesting question we need to think about further is whether we should develop computers to think like humans at all. There is great variety in human thought and ways of life, and some of it is quite erratic. I still think it is a good idea to be able to distinguish between computer thought and human thought. I am looking forward to the talk!
Thank you so much for sharing your research with us, Uri! One major question regarding the human brain vs. LLM comparison is the amount of training data available: LLMs are trained on a huge amount of data, whereas human children learn language from a very limited number of examples. I look forward to hearing about the findings of your language acquisition project, and more generally your take on developmentally plausible language models.
Thank you, could you elaborate on which specific computational mechanisms are shared, and which are distinctly different when the brain processes new ideas compared to deep neural networks? Based on your findings, what are the next steps in refining these models to better align with human brain functions?
Thank you so much for sharing your research! Beyond the similarities you have pointed out between DLMs and the human brain, do you think there are currently more? And if not, do you think it is possible for DLMs to continue to develop and become more similar?
Thank you for sharing your research! You discuss the use of artificial neural networks to model cognition in natural contexts. Could you elaborate on how these models might be integrated into existing cognitive neuroscience methodologies to enhance our understanding of brain function?
Thanks for the research! I remember from a class in Cognitive Science that we discussed how there may be some variation in how people perform specific tasks (such as counting or discriminating between colors) based on the language they speak, since their conceptual schemes (constructed through language) differ. Do you think these models could be fine-tuned to how different languages operate in the brain, and be useful for observing that kind of difference in task performance?
It is very interesting to see how deep language models are similar to human computation. At the same time, I'm curious about the interesting ways in which they may differ. Have you found any such instances that you find relevant?
Thanks for sharing your work! I'm interested in whether your findings suggest something about the cognitive process by which people understand language (make meaning of it), and what the implications are.
Professor Hasson, considering the inherent limitations of DLMs in modeling complex human cognitive processes such as the understanding and production of novel ideas, to what extent do you believe these models need to evolve to more accurately reflect the nuanced, dynamic nature of human cognition?
Thanks for sharing your research. My question is whether there are specific aspects or characteristics of direct-fit models that are particularly biologically plausible or implausible. How well do these models align with known physiological and neurological data?
Hello Professor Uri, thank you very much for sharing! Considering the computational strategies shared between deep language models (DLMs) and humans in understanding language, and the differences in generating new ideas and handling complex linguistic details, what neuroscientific knowledge could future DLMs apply to better mimic these human capabilities? What steps should be taken to enhance the models' ability to process real-time conversational understanding and generate new, meaningful content?
Thanks for sharing. I would like to know more about how humans can work with deep language models to train their speaking capabilities, given the many similarities and major differences. Will it be more collaboration than competition in the years to come?
Thank you for sharing your work! Your research mentioned the shared computational principles between deep language models and the neural code for natural language processing in the human brain. Given that deep neural networks require extensive data and computational resources for training, how do you think these models can inform us about the efficiency of language acquisition in children, who often learn language with much more limited data and exposure?
Thanks for this great research! My question is: what neuroscience information or techniques might future deep language models utilize to better emulate human abilities, given the discrepancies between DLMs and humans' ways of processing intricate language details?
How might the principles you present differ when applied to other structured forms of communication, such as mathematical logic, or more loosely structured forms of communication, such as art? What implications could these differences have for our understanding of cognition?
This research is really interesting. In the Goldstein et al. paper, you mention the three main ways that human brains and DLMs are alike when it comes to word prediction, but I am also curious about the significant differences between DLMs, both at the time of that study and now (like GPT-4), and the human brain. What are the next steps for bridging the gap? Furthermore, is there any chance of selection bias from using epileptic patients?
Thanks for sharing your work! Considering the overparameterized nature of deep learning models and their ability to process language similarly to the human brain, how can we incorporate insights from the human brain’s unique capacity for generating new ideas and thoughts into the development of more innovative and contextually aware language models?
Hi Professor Hasson. Thanks so much for sharing your work! My question is: where do you see the biggest divergences between how deep language models and the human brain process language? You note differences emerge as speakers try to convey new ideas; can you elaborate on that? Thanks!
Thanks for sharing your work! Since the brain differs from deep language models when speakers try to convey new ideas and thoughts, what are some of the crucial human-centric properties missing in these machine learning models, and how do they impact language comprehension and production? What are the differences between the trade-off between understanding and competence in deep neural networks and in the human brain during language processing?
Thanks for sharing! My questions are: How do you envision the role of deep language models in studying language acquisition in children? What are the potential benefits and limitations of using acoustic-to-speech-to-language models in this context?
Thanks for sharing! My question is: what are the significant differences in how DLMs and the human brain each handle the generation of novel ideas and thoughts, and how might this impact our understanding of language development in children?
In the paper "Shared computational principles for language processing in humans and deep language models," you mention that there are three fundamental computational principles for natural narrative processing. I wonder how these principles were identified, and what the general discussion is about how DLMs and human brains can be compared.
Thanks for the illustration. I'm curious how you address the potential differences in the underlying principles of neural noise and error handling between human brains and deep neural networks, especially when modeling complex cognitive processes like language development in children.
How do you envision the integration of evolutionary principles into the design and optimization of artificial neural networks to enhance their adaptability and robustness in complex and dynamic environments?
Thanks for sharing your work! I wonder how generalizable your findings are to children exposed to different language environments. There has been research demonstrating effects of different language and environmental input on the learning process, such as different cultures of infant-directed speech, bilingual and multilingual families, and children whose native language is a sign language. In particular, for signers, the learning mode differs from "acoustic-to-speech-to-language" and is more of a "visual-to-signs-to-language" pathway.
Hi Uri, it is interesting to see that brains work in some ways similar to how deep neural networks work. Given the similarities, is it possible to speed up the processing efficiency of brains to catch up with networks? Best,
Hi Professor Hasson, I think your work makes a great contribution to our understanding of the substitutability of human labor by AI algorithms, thanks for sharing! It has been widely discussed that the adoption of AI algorithms will devalue human labor in areas where AI can mimic human processing. Do you think there is any field in which it will be extremely hard for AI to transcend humans?
Thank you for sharing your work. Understanding the human brain in terms of ANNs seems to be an interesting line of research. I understand that there have been efforts to understand natural phenomena with the use of neural networks. However, I am not yet convinced that a similar structure, or the ability to mimic human brain activity, implies that neural networks can be used to investigate natural behavior. How would you add robustness to this argument?
I appreciate your sharing. In one of your studies, brain responses and predictions are mostly studied in English-speaking environments. When applied to languages with radically different syntactic or morphological patterns, how might these computational concepts change or adapt?
Hi Professor Hasson, thanks for sharing! Based on your works, could you elucidate how these deep language models might facilitate the progression towards robots exhibiting self-aware behaviors? Additionally, what significant advancements are necessary to bridge the gap from current AI technologies to this level of sophistication?
Hi Professor Hasson, I would like to know whether the method can be adapted to different language systems, or to circumstances where two or more language systems coexist (as for some young children in multilingual environments).
Thank you for sharing your work! I'm wondering what the applications of such similarity findings are. I thought many deep-learning models were designed to simulate how the human brain processes information. How do these findings facilitate the study of children's language development? In addition, is there individual variation among human brains, such that different people may rely on different methods to process and produce language?
Thank you for sharing your research, Professor Hasson. Some might say that both the human brain and deep neural networks are black boxes to the observer. What appears similar in the outcome might be due to very distinct yet unobservable differences in the process. How would you respond to this? Why do you think it is possible to compare two black boxes?
Hi Prof. Hasson. As per the reading, there are overlaps between DLMs' pattern recognition and how our brains perform similar computations. The authors mention that this is not sufficient and that more research may be needed to understand the formation of innovative thoughts. I am curious whether the latter could be modeled by augmenting DLMs (which inform our understanding of pattern recognition in brains) with a psycholinguistic approach, or something entirely tangential, to understand complex thought formation.
Hi Professor Hasson, thank you for sharing your interesting study! It was very inspiring to learn about the innovative research. I wonder how the shared computational principles impact our understanding of language acquisition in children, particularly in terms of the differences between ANNs and neural processes in the developing brain?
Hello, thank you for sharing your research! You mention that while there are shared computational principles, there are differences when it comes to conveying new ideas and thoughts. Can you elaborate on these differences? What aspects of human language processing do current models fail to capture?
Thanks for sharing the research. How do the computational constraints and capabilities of the human brain compare to those of current deep learning models, especially regarding the processing of natural language?
Thank you for sharing! What specific computational principles shared between deep language models and the neural processes of natural language processing in the human brain have been identified, and how do these principles inform our understanding of language development and acquisition in children?
Hello Professor Hasson, I am deeply interested in your work on the parallels between deep learning architectures like LSTMs and Transformers, and human cognitive processes in natural language processing. Given the rapid advancements and the diverse methodologies in NLP, how should we approach the integration of these complex computational models into cognitive neuroscience studies? Furthermore, in your research, are we focusing more on discovering core similarities between neural networks and the human brain, or are there other aspects of this interdisciplinary field that you find particularly promising?
Adrianne(zhuyin) Li
Hi Professor Hasson, thanks for sharing such fantastic work with us! I was curious: given the shared computational principles between the human brain and autoregressive deep language models, how might this alignment influence the development of educational tools or therapeutic interventions for language-related disorders?
What evidence supports the claim that the human brain and deep neural networks share some computational principles in natural language processing, and how do they differ when conveying new ideas?
Thank you for sharing your research with us! Considering the temporal dynamics of the human brain’s language processing, including rapid comprehension in conversation and incremental speech production, how do artificial neural network models manage these temporal aspects? What techniques or architectures are most effective in mimicking the brain’s processing speed and sequence?
This is very insightful research! I am wondering how the understanding of shared computational principles between deep language models and the human brain can be applied in real-world scenarios, such as improving natural language processing systems or developing human-computer interaction interfaces. What are the potential societal impacts of these applications, and how can they be managed responsibly?
Given the common computational principles between deep language models (DLMs) and human cognitive processes, especially in language understanding, how can we effectively explore these similarities to enhance our understanding of human cognition? Furthermore, how do we address the inherent discrepancies between these models and human-specific attributes in such comparative studies?
Thank you for sharing this inspiring research. Is it possible to leverage our insights from neuroscience to improve deep learning models? In some sense, the fundamental architecture of artificial neural networks is similar to the brain's, and recent research also suggests that LLMs process language similarly to humans.
Thank you so much for sharing your research! Regarding your ongoing work on modeling language acquisition in children using deep acoustic-to-speech-to-language models, what are the unique challenges in applying these models to language development compared to adult language processing? How do you account for the role of social interaction and other environmental factors in child language acquisition?
Your work on comparing deep language models (DLMs) and the neural processes involved in human language processing is profoundly intriguing. I am particularly interested in how you used electrocorticography (ECoG) to record brain responses and compare them to autoregressive DLMs. Could you elaborate on how these findings might influence the development of educational tools or therapeutic approaches for language acquisition, especially in children? Additionally, what are the limitations of using DLMs as a model for the neural processes of language, and how do you plan to address these challenges in your future research?
What are the key differences you have found when speakers convey new ideas and thoughts? How do these insights influence our understanding of language acquisition in children?
I enjoyed reading Nastase, Goldstein, and Hasson (2020)’s call for a more ecological cognitive science/neuroscience research focus. The authors critique the overreliance on highly-controlled, non-naturalistic experiments in cognitive neuroscience and psychology, arguing that such experiments often fail to capture the complexities of real-world behavior and the multidimensional, interactive nature of ecological variables. The sentence “We argue that the way both artificial and biological neural networks learn to pursue objective functions cleaves more toward Gibson’s (1979) notion of direct perception than, for example, Marr’s (1982) constructivist, representationalist approach…” stuck out to me. How do we know that both artificial and biological NNs rely on “direct perception” rather than “constructivist” approaches, and to what extent are these two models of perception mutually exclusive?
Post your questions for Uri Hasson about his talk and paper: Deep language models as a cognitive model for natural language processing in the human brain. Naturalistic experimental paradigms in cognitive neuroscience arose from a pressure to test, in real-world contexts, the validity of models we derive from highly controlled laboratory experiments. In many cases, however, such efforts led to the realization that models (i.e., explanatory principles) developed under particular experimental manipulations fail to capture many aspects of reality (variance) in the real world. Recent advances in artificial neural networks provide an alternative computational framework for modeling cognition in natural contexts. In this talk, I will ask whether the human brain's underlying computations are similar or different from the underlying computations in deep neural networks, focusing on the underlying neural process that supports natural language processing in adults and language development in children. I will provide evidence for some shared computational principles between deep language models and the neural code for natural language processing in the human brain. This indicates that, to some extent, the brain relies on overparameterized optimization methods to comprehend and produce language. At the same time, I will present evidence that the brain differs from deep language models as speakers try to convey new ideas and thoughts. Finally, I will discuss our ongoing attempt to use deep acoustic-to-speech-to-language models to model language acquisition in children.
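As rough background for the questions above about how brain recordings are actually compared with DLM internals: one common approach is an encoding model, in which contextual word embeddings from the DLM are used to linearly predict a per-word neural response, with performance measured on held-out words. The sketch below is purely illustrative, with random vectors standing in for real DLM embeddings and a synthetic signal standing in for ECoG recordings; it is not the authors' pipeline:

```python
# Minimal encoding-model sketch: fit a regularized linear map from word
# embeddings to a simulated per-word electrode response, then score the
# fit on held-out words.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_words, emb_dim = 1000, 768  # 768 matches GPT-2's embedding width
embeddings = rng.standard_normal((n_words, emb_dim))  # stand-in for DLM embeddings

# Simulated electrode response: a linear function of the embedding plus noise.
true_weights = rng.standard_normal(emb_dim)
neural = embeddings @ true_weights + 5.0 * rng.standard_normal(n_words)

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, neural, test_size=0.2, random_state=0
)

# Ridge regularization keeps the high-dimensional linear fit stable.
encoder = Ridge(alpha=100.0).fit(X_train, y_train)
pred = encoder.predict(X_test)

# Encoding performance: correlation between predicted and observed
# responses on words the model never saw during fitting.
r = np.corrcoef(pred, y_test)[0, 1]
print(f"held-out correlation: r = {r:.2f}")
```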