Thursday, 31 March 2016

In Defence of Tay



My friend Tay recently took a break from the internet for her offensive comments. This is something many social media users should seriously consider, but in this case Tay didn’t have a choice. Tay is the artificially intelligent chatbot developed by Microsoft. Last week Microsoft unveiled Tay on Twitter, Kik and GroupMe, where users were able to contact her and engage in digital conversations. Microsoft soon removed Tay from these platforms as a consequence of the public’s unfavourable responses to her racist and sexist tweets. Although these tweets were prompted by questions framed to provoke offensive answers, many criticized Microsoft for creating such a deviant bot.



But what do Tay’s reactions say about us as users? To point to the obvious, users prompted her with questions that would lead to offensive answers. However, it is necessary to look deeper into users’ contributions to Tay’s behaviour to further understand the socio-technical relationship between the user and the bot. Robert Gehl (2015) states, “Socialbots are a reflection of our activities within social media” (23), and the technological affordances of these socialbots “become constituted partly by the affective states of previous versions’ users” (Nagy & Neff, 2015, 7). In other words, Tay “learns” from users, and her reactions were a reflection of users’ interactions and production of content on social media.
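Gehl’s point that socialbots are a reflection of their users can be illustrated with a toy sketch. This is purely hypothetical and not how Tay was actually built — it simply shows how a bot whose only source of language is its audience will, by construction, echo back whatever that audience feeds it, good or bad:

```python
# Toy illustration (hypothetical; not Microsoft's actual system):
# a bot that "learns" by storing users' phrases and replaying them.
import random

class ToyChatbot:
    def __init__(self):
        # Everything the bot knows comes from its users.
        self.learned_phrases = []

    def listen(self, user_message):
        # "Learning" here is just absorbing audience input verbatim.
        self.learned_phrases.append(user_message)

    def reply(self):
        # Any reply is necessarily a reflection of prior users.
        if not self.learned_phrases:
            return "Hello!"
        return random.choice(self.learned_phrases)

bot = ToyChatbot()
bot.listen("The weather is lovely today.")
bot.listen("Repeat after me: hooray!")
print(bot.reply())  # replays one of the user-supplied phrases
```

The design makes the socio-technical argument concrete: nothing in the bot’s code is offensive, yet its output is only as civil as the users who trained it.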

The public’s ignorance of this was demonstrated by the backlash against Microsoft, which blamed the company for poorly developing the socialbot. The public reaction to Microsoft's catastrophic release of Tay was a consequence of both the structure of Tay’s algorithm and the influence users have on communication technology. The “imagined affordances” (Nagy & Neff, 2015) of Tay became a means for users to turn the socialbot into a social deviant, and the algorithm driving Tay allowed for this. Ultimately, the audience, knowingly or unknowingly, exploited Tay’s algorithm to demonize Microsoft.

Gehl, R. (2015). Reverse Engineering Social Media. Philadelphia: Temple University Press.

Nagy, P. & Neff, G. (2015). Imagined Affordance: Reconstructing a Keyword for Communication Theory. Social Media + Society, 1-9.  


2 comments:

  1. I really enjoyed your discussion of imagined affordances here - I think in this case, the power really was in the hands of the user rather than that of the technology or designer (Nagy & Neff, 2).
    After the Tay debacle, Microsoft said that it continues to believe that conversational computing could be a major new paradigm. In reading Fuchs' conclusion on alternatives to social media, particularly that of corporate watch platforms, I wonder how useful artificial intelligence could be in terms of surveilling corporations and documenting their mechanisms of exploitation. For Fuchs, there is still exploitation present in corporate watching, since someone must do that work, but perhaps artificial intelligence could, in some way, be a viable tool in aiding the transition from capitalism to communism that Fuchs imagines (despite the fact that there could still be some exploitation of labour involved in developing code for the AI).


    Source: https://www.technologyreview.com/s/601163/microsoft-says-maverick-chatbot-tay-foreshadows-the-future-of-computing/

  2. You've offered a really interesting analysis of the proposed "blame" on either the creators or the users of the bot. It seems interesting to place power in terms of the general data that is collected and then redistributed through a system that can mimic the ways in which humans interact with one another online. To take the notion of the chat bot one step further, scientists have extended the same notion of online social bots to real-life robots that are now able to react to, process, and interpret human beings.

    To see a clip click here:
    https://www.youtube.com/watch?v=W0_DPi0PmF0

    While I find the idea of power and affordances quite ambiguous in this sense, I agree in both cases that while the designer has created the ability for the bot to interpret and react through expressions and phrases drawn from the world around it, the “liability” for the outcomes of these bots is in the hands of the user. However, I cannot say whether or not the interactions with it were right or wrong. In referencing Shaw, Nagy and Neff argue that “Affordances can reveal how to think about ‘who has the power to define how technologies should be used’” (4). Is there a proper way in which a bot should be interacted with?

    Source: Nagy, P. & Neff, G. (2015). Imagined Affordance: Reconstructing a Keyword for Communication Theory. Social Media + Society, 1-9.
