
Throwing a Cat Among the Pigeons? Augmenting Human Computation With Large Language Models

Ujwal Gadiraju, Towards Data Science

I have always been fascinated by etymology. More often than not, there is an intriguing story behind how words and phrases have acquired the meanings we are so familiar with, morphing through the ages and mingling with changing times. The Mechanical Turk was a chess-playing humanoid machine built by the Hungarian author and inventor Wolfgang von Kempelen in the 18th century. The story goes that the Mechanical Turk toured Europe and humbled noteworthy names like Napoleon Bonaparte and Benjamin Franklin in fabled battles of chess. Only later was the secret sauce unveiled: a real human chess whiz, hidden from the naked eye inside the cabinet beneath the chessboard, from where they controlled the moves made by the humanoid.

This story was the inspiration behind the naming of the Amazon Mechanical Turk crowdsourcing platform, launched in 2005. The platform was designed to solve tasks that could not be solved by contemporary alternatives and required human input or intelligence. It was in this context that the notion of "artificial artificial intelligence" took shape, wherein humans serve as the source of intelligence when a task is beyond the capabilities of machines. We have come a long way from there, to the cusp of a brand new notion of "artificial artificial artificial intelligence." Yes, you read that right. Three artificials. Before you try to crack this walnut with your forehead, let's take a quick tour down memory lane.

In his 2004 book "The Wisdom of Crowds," James Surowiecki explored and synthesized the attributes required to form a wise crowd — one that can often make decisions better than any single individual in it. He identified diversity of opinion, independence of judgment, and decentralized knowledge as vital attributes to that end. In 2006, Jeff Howe coined the term crowdsourcing, a portmanteau of 'crowd' and 'outsourcing,' in an article he wrote for Wired magazine on "The Rise of Crowdsourcing." He discussed how businesses had begun tapping into the collective capabilities of distributed online communities through open calls to accomplish certain tasks.

Amazon Mechanical Turk thrived after its launch, and within years, hundreds of thousands of people around the globe found an opportunity to earn their livelihoods by completing tasks on the platform. This sparked growth in crowdsourcing platforms across the world, forging and cementing a new economy of online microtask crowd work. Researchers and practitioners began to rely on crowdsourcing platforms to accomplish various tasks and demonstrated that even highly complex tasks could be decomposed and crowdsourced. Systems and tools were proposed to support crowd workers in completing tasks effectively.

In 2009, the release of ImageNet spurred the entire field of machine learning. With over 3.2 million images spanning more than five thousand synsets across 12 subtrees, it was a monumental data collection effort powered by crowdsourcing on Amazon Mechanical Turk [1]. It provided an unprecedented opportunity for progress in several computer vision tasks, including object recognition and image classification.

Let us not forget that this progress came with its own trials and tribulations. Tremors of the dangers that come with relying on human-generated data, prone to cognitive and systematic biases, were felt by many.
In 2013, a group of well-known researchers in the crowdsourcing community wrote a paper called "The Future of Crowd Work," in which they reflected on the status of the paradigm and the series of challenges that needed immediate attention [2]. Many of these challenges remain unsolved a good ten years later, despite significant progress. There have been well-documented problems pertaining to the quality of data collected (e.g., the propagation of biases), power asymmetry on platforms, abysmal hourly wages, unfair work rejections, invisible labor, unhealthy work environments, and the list goes on. Despite the frailties of what some consider a fractured paradigm of work, remarkable results punctuate the historical timeline, and the power of crowdsourcing has unmistakably contributed to a rate of technological progress only a few would have seen coming.

Much of the mainstream media around the world today is lost in sweeping narratives around generative AI and what the democratization of large language models can mean. Many more lives will continue to be touched by AI in expected and unexpected ways. And it is the laborious work of humans behind the scenes that has been fuelling this AI revolution in the first place. If we were to "scrutinize the shadows of AI, we would discover the humans powering it," as unforgettably put by Mary Gray and Sid Suri in Ghost Work [3].

Exaggerated forecasts and clickbait headlines have likened the role of humans in this age to that of anxious pigeons and cast LLMs as bold cats — with the metaphorical cat disrupting the flock and sending it scattering. But what does the onset of this new age of generative AI models truly mean for human input? Has the need for human input in shaping future technologies been wiped out for the most part? In the remainder of this article, I will argue that the answer is a resounding no, and that the main shift we should expect is in the nature of the human input that will continue to be needed.

I recently co-authored a workshop paper exploring how human computation workflows can embrace the emergence of generative AI models [4]. This work was presented at the Generative AI Workshop at ACM CHI 2023, the premier HCI conference, held in Hamburg earlier this year. We highlighted the potential role that large language models (LLMs) can play in augmenting existing crowdsourcing workflows and discussed how such workflows can be empirically evaluated.

Crowdsourcing workflows are distinct patterns that govern how large-scale tasks are decomposed into smaller tasks to be completed by crowd workers. The crowd-powered word processor Soylent applies the Find-Fix-Verify workflow to produce high-quality text by separating tasks into stages of generating and reviewing text. This enabled "writers to call on Mechanical Turk workers to shorten, proofread, and otherwise edit parts of their documents on demand [5]." The Iterate-and-Vote workflow has been deployed to create image descriptions, where workers are first asked to write descriptions of images (for example, with the end goal of assisting people who are blind), and subsequent voting tasks are then used to converge on an optimal description [6].
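To make the shape of such workflows concrete, here is a minimal, illustrative Python sketch of a Find-Fix-Verify-style pipeline. It is not the Soylent implementation: the ask_workers() helper is a hypothetical stand-in for posting a microtask to a crowdsourcing platform and collecting one response per worker, and the simple majority vote stands in for whatever aggregation a real deployment would use.

```python
# A minimal, illustrative sketch of a Find-Fix-Verify style workflow (not the
# Soylent implementation). ask_workers() is a hypothetical stand-in for posting
# a microtask to a crowdsourcing platform and collecting one answer per worker.
from collections import Counter


def ask_workers(prompt: str, n_workers: int = 3) -> list[str]:
    """Placeholder: post `prompt` as a microtask and return one answer per worker."""
    # A real deployment would call the platform's API here; we simulate responses.
    return [f"worker-{i}: {prompt}" for i in range(n_workers)]


def find_fix_verify(paragraph: str) -> str:
    # FIND: independent workers flag a span of text that needs improvement;
    # a simple majority vote picks the span to work on.
    flagged = ask_workers(f"Identify one unclear or wordy phrase in: {paragraph}")
    target = Counter(flagged).most_common(1)[0][0]

    # FIX: a separate set of workers proposes rewrites of the flagged span.
    candidates = ask_workers(f"Rewrite this phrase more clearly: {target}")

    # VERIFY: a third set of workers votes on the candidate rewrites, and the
    # top-voted rewrite is returned as the final output.
    votes = ask_workers(f"Pick the best rewrite from: {candidates}")
    return Counter(votes).most_common(1)[0][0]


print(find_fix_verify("The quick brown fox, which was rather quick, jumped quickly."))
```

The essence of the pattern is that generation (Fix) and quality control (Find and Verify) are handled by independent sets of workers, which limits the impact of any single worker's errors.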
The Map-Reduce workflow has been proposed for "partitioning work into tasks that can be done in parallel, mapping tasks to workers, and managing the dependencies between them [7]." In the same spirit, tools like CrowdWeaver have been proposed for managing complex workflows, supporting data sharing between tasks, and providing monitoring tools and real-time task adjustment capabilities [8].

It is unlikely that the emergence of large language models renders such workflows, frameworks, and tools obsolete. On the contrary, the crowdsourcing community is uniquely positioned to embrace the benefits that LLMs can bring by building on decades of research around effective workflows, human-in-the-loop approaches, and knowledge of building hybrid human-AI systems.

The human-centered perspective on developing technologies focuses on augmenting human experiences in everyday life and amplifying the abilities of people. If LLMs can indeed help crowd workers complete tasks, they should be embraced and integrated in a fashion that empowers workers to complete tasks more accurately and quickly, or in a fashion that improves their overall experience in one way or another.

Researchers in information retrieval (a community I have engaged with over the last decade) have recently considered what the proliferation of LLMs can mean for the role of human annotators in the context of relevance judgments for evaluation [9]. They proposed a spectrum of collaboration between humans and LLMs for producing relevance judgments, ranging from fully manual human judgments to fully automatic assessments, akin to the popular levels of automation. The authors explored the potential benefits of roping in LLMs in an assistive capacity for annotation tasks and weighed them against the risks of doing so. It is clear that LLMs can reduce annotation costs in creating evaluation collections. However, it is unclear whether such collections would be systematically different from those created by humans, and how such artifacts would influence the evaluation of information retrieval systems and, thereby, the future design of such systems.

Apart from supporting individual writing or classification tasks within a workflow, researchers are also exploring the application of LLMs in assisting crowd workers. Liu et al. combined the generative power of GPT-3 and the evaluative power of humans to create a new natural language inference dataset that produces more effective models when used as a training set [10]. In a similar vein, others introduced a 'Generative Annotation Assistant' to help in the production of dynamic adversarial data collections, significantly improving the rate of collection [11]. However, there are several less-understood open questions pertaining to how LLMs can improve the effectiveness of crowdsourcing workflows and how such workflows can be holistically evaluated.
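As a rough illustration of what slotting an LLM into such a workflow in an assistive capacity could look like, the sketch below drafts a relevance judgment with an LLM and escalates low-confidence items to crowd workers. The functions llm_judge() and crowd_judge() and the 0.8 confidence threshold are hypothetical placeholders rather than any specific system from the work cited above.

```python
# A rough sketch of an LLM-in-the-loop annotation workflow. llm_judge() and
# crowd_judge() are hypothetical placeholders: the first stands in for a call to
# an LLM prompted to judge relevance, the second for a verification microtask
# posted to crowd workers. The 0.8 threshold is an illustrative choice.
import random


def llm_judge(query: str, document: str) -> tuple[str, float]:
    """Placeholder: return (label, confidence) from an LLM relevance judgment."""
    return random.choice([("relevant", 0.92), ("not relevant", 0.55)])


def crowd_judge(query: str, document: str) -> str:
    """Placeholder: return a crowd worker's relevance judgment for the same pair."""
    return random.choice(["relevant", "not relevant"])


def hybrid_relevance_judgment(query: str, document: str, threshold: float = 0.8) -> dict:
    label, confidence = llm_judge(query, document)
    if confidence >= threshold:
        # High-confidence LLM judgments are accepted directly.
        return {"label": label, "source": "llm", "confidence": confidence}
    # Low-confidence items are escalated to a human annotator for the final call.
    return {"label": crowd_judge(query, document), "source": "crowd", "confidence": confidence}


for doc in ["doc-1", "doc-2", "doc-3"]:
    print(hybrid_relevance_judgment("an example query", doc))
```

Where that threshold sits effectively determines where on the human-LLM collaboration spectrum the workflow operates, and in practice an LLM's self-reported confidence would need to be calibrated before being trusted to route items.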
Much like humans, LLMs can also be prone to bias and unfairness. On the one hand, prior work has shown how human annotators fall prey to their own opinions while completing annotation tasks, leading to systematic biases creeping into the resulting data collections [12], and others have proposed checklists for combating or reporting potential cognitive biases that may emerge during the annotation process [13]. On the other hand, recent work has revealed discriminatory stances and stereotypical biases present in LLMs themselves [14, 15].

The human computation and crowdsourcing research community (HCOMP) has devised a number of effective methods, interfaces, measures, and tools to ensure the collection of high-quality data from crowd workers. It is only a matter of time before we collectively figure out how similar quality-related guarantees can be provided when integrating LLMs into decision-making pipelines.

On the surface, the integration of LLMs into crowdsourcing workflows can appear rather straightforward. As with most proposed solutions to problems in complex systems, it is easier said than done. Crowdsourcing involves many different stakeholders: the task requesters who are keen to gather large-scale annotations, the crowd workers willing to oblige in return for compensation, the platforms providing the infrastructure and serving as the marketplace for these transactions, and the indirect end-users of products or technologies that are developed in downstream efforts. Including LLMs in workflows has the potential to affect each of these stakeholders in different ways.

If crowd workers can become more effective and efficient by leveraging LLMs in intelligent workflows, there is potential to get more work done without increasing costs. However, further work is required to gain a better understanding of the risks and rewards entailed by including LLMs as part of crowdsourcing workflows. Who would be responsible for designing, developing, and integrating LLMs into such workflows, considering the potential need for accountability?

Crowd workers have historically been left to their own devices to improve their productivity and the environments and conditions within which they operate. Shouldn't it now be the collective responsibility of crowdsourcing platforms and task requesters to better understand how to equip workers with LLM-based solutions that can aid them in successful task completion and improve and augment their work experiences?

A recent case study explored the extent to which crowdsourced data from "humans" in a text summarization task was genuinely generated by humans. The authors found evidence suggesting that over 30% of crowd workers in their study on Amazon Mechanical Turk had already begun to rely on LLMs [16]. Although the study reported these insights from only 44 workers, and the numbers should be taken with a grain of salt, this does reflect the undeniable prospect of more crowd workers turning towards LLM-based solutions that can help them increase their productivity, maximize their earnings, and make better use of the time they spend in crowdsourcing marketplaces. This is where the notion of "artificial artificial artificial intelligence" arises — crowd workers potentially making use of AI (assistance from LLMs) to provide what is presumably "human" input on demand.

Further consideration is needed regarding the transparency and explainability of LLMs compared to what can be elicited from humans. When crowd workers complete annotation or other tasks that require decision-making, task requesters can elicit meaningful rationales through follow-up questions; crowd workers have the wherewithal to provide such insights where needed. The same cannot currently be achieved with LLMs. Yes, there are methods for model explainability, but none have demonstrated a level of effectiveness on par with what can be achieved with humans at both ends of the line.
This perception of LLMs as a "black box" can create barriers to adoption for task requesters and crowdsourcing platforms, while also impeding the appropriate reliance of crowd workers on such tools.

Humans and LLMs? There is an endless stream of possibilities, with a sea of intriguing questions and only a handful of glimmering answers. Seizing the opportunity to integrate this technological advancement to improve crowd work is less like stirring a hornet's nest and more like catching a gust of wind in our sails. Let us get busy, for a beautiful future awaits when we shape it with humans taking center stage.


