Emily M. Bender is a University of Washington professor who specializes in natural language processing (NLP) and computational linguistics. Bender is also the director of the university’s Computational Linguistics Master of Science (CLMS) program and has repeatedly voiced concerns about the potential societal risks of large language models.
Education and early career
Bender received a Ph.D. in Linguistics from Stanford University in 2000 and subsequently spent around 10 months as a lecturer at the University of California, Berkeley.
In 2001, she briefly worked as a Grammar Engineer at YY Technologies before returning to Stanford in September 2002 as an Acting Assistant Professor.
University of Washington
In 2003, Bender took up an assistant professor position in linguistics at the University of Washington. She is now a Professor of Linguistics, an Adjunct Professor in Computer Science & Engineering, and director of the CLMS program.
Bender is also involved in various university institutions:
- Tech Policy Lab – an interdisciplinary collaboration to promote and enhance tech policy via research and education.
- Value Sensitive Design Lab – an initiative centered on value-sensitive design, an approach pioneered in the 1990s that establishes theory and methods for incorporating human values throughout the design process.
- RAISE (Responsibility in AI Systems & Experiences) – a center whose mission is to research AI systems and their interactions with human values. It also aims to build systems for underserved contexts across critical areas such as education, finance, policy, and health.
Stochastic parrots
In response to the rapid progression of NLP over the preceding three years, Bender co-authored the paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, published in March 2021.
The paper asked important questions about the risks associated with large language models (LLMs) and how those risks could be mitigated. To that end, Bender proposed that the financial and environmental costs be considered first and foremost.
She also argued that resources should be invested in improving the quality of LLM training data. In other words, datasets should be carefully curated and documented rather than models simply consuming all of the information on the internet.
Google’s LaMDA chatbot
When Google engineer Blake Lemoine publicly stated that the company’s LaMDA chatbot was sentient, Bender stressed that the obvious misconception “shows the risks of designing systems in ways that convince humans they see real, independent intelligence in a program. If we believe that text-generating machines are sentient, what actions might we take based on the text they generate?”
The crux of the article she wrote for The Guardian is that people instinctively believe the words produced by a chatbot were created by a human mind. She also argued that the question-and-answer, concierge-type service now incorporated into Google Search increases the likelihood that a user will take information scraped from the internet as fact.
To prevent the spread of misinformation and design systems that “don’t abuse our empathy or trust”, Bender noted that transparency was key. What was the model trained to do? What information was it trained on? Who chose the data, and for what purpose?
Key takeaways:
- Emily M. Bender is a University of Washington professor who specializes in natural language processing (NLP) and computational linguistics.
- At the University of Washington, Bender is involved in several organizations that deal with AI as well as tech design, policy, and impact. These include RAISE, Value Sensitive Design Lab, and Tech Policy Lab.
- In a 2021 academic paper, Bender asked several important questions about the risks associated with LLMs and how those risks could be mitigated. She has also written on the role of AI chatbots and their contribution to the spread of misinformation.