Thursday, 4 February 2016

I’m in Love with a Computer?



Online dating is complicated.

Finding “the one” is difficult enough without also having to navigate a sea of bots.

Yet bots are not unique to dating apps. In Reverse Engineering Social Media, Robert Gehl notes that “socialbots” are spreading across social media. Designed to pass as human, socialbots have profiles, post status updates, and respond to messages from other users. Gehl claims that socialbots can pass as human because social media users produce “states of mind” that are discrete enough to be imitated by bots (p. 27).


From Tinder to OKCupid, bots pervade dating apps. The degree to which bots on dating apps can pass as human varies: some are laughably unconvincing, whereas others are endearingly believable. While socialbots are still in their early stages, some have successfully duped online daters for up to two months. In fact, the problem of differentiating a bot from a human in online dating has spawned numerous advice articles, such as “How to Find out if You Are Dating a Robot”, “How to Avoid Fake Tinder Profiles”, and “How to Avoid Falling in Love with a Chatbot”. This suggests that some online daters are worried about being tricked into developing feelings for a bot.


Is Johnny a convincing bot?

That bots can influence our emotions relates to what Peter Nagy and Gina Neff have termed “imagined affordance”. Nagy and Neff describe imagined affordances as emerging “between users’ perceptions, attitudes, and expectations; between the materiality and functionality of technologies; and between the intentions and perceptions of designers” (p. 5). For Nagy and Neff, this means that a user’s expectations will shape how they approach a technology.


With bots on dating apps, imagined affordances emerge between users, technologies (e.g. the bot, the dating app, etc.), and designers. Regarding dating apps, imagined affordances emerge between: (1) the user’s expectation that they are communicating with another human (and/or the user’s wariness that they are being duped by a bot); (2) the app’s functions and materiality, such as its aesthetic design, messaging features, etc.; and (3) the intentions of designers to connect people, generate profit by selling memberships and/or user data, etc. Regarding bots, imagined affordances emerge between: (1) the user’s perception that the bot is a human (or that the bot is a bot); (2) a bot’s ability to accurately imitate the discrete states of a human mind; and (3) the intention of designers to fool users into thinking that the bot is human. In turn, these factors shape the affective ties between users and bots. For instance, if a user believes that a bot is human, then there is the possibility that the user will form strong emotional ties (e.g. affection, care, lust, etc.) to the bot that are akin to those the user would have for another (mutually interested) human user.

While I have only provided some preliminary thoughts in this post, we might expand this discussion by exploring some of the following questions:
  • What other aspects of the relationship between bots and humans on dating apps are salient to the discussion of imagined affordances?
  • How does the concept “imagined affordance” apply to cases where humans are deceived about a technology?
  • On dating apps, what might characterize the affective relationship between a bot and a human who is under the impression that the bot is a human? 

3 comments:

  1. Great post, Will! I know there is still a lot to be hashed out regarding socialbot and human interactions online; however, I wanted to take a stab at answering your question: "how does the concept “imagined affordance” apply to cases where humans are deceived about a technology?"

    Nagy and Neff state that "Users may have certain expectations about their communication technologies, data, and media that, in effect and practice, shape how they approach them and what actions they think are suggested" (p. 5). Users may therefore approach different technologies with particular expectations about how they can appropriately be used. However, I do not believe that these imagined affordances remain valid if and when users realize that they have been deceived and their expectations are not met. For instance, the Couple app may be a great tool for couples in a long-distance relationship who have specific expectations about how to use the app in their relationship. But if those expectations do not pan out, such as when users are deceived about a technology, then the technology's imagined affordances may no longer hold in reality for those users.

  2. Also, have you read this new release from Forbes? http://www.forbes.com/sites/parmyolson/2016/02/10/kik-bots-messaging-facebook-wechat/#2715e4857a0b6b286ef32571

  3. I think that this was an interesting blog post; I had never considered bots appearing on dating sites and apps and duping users. In terms of the "imagined affordances" that users may expect of a dating app, I would suggest that users of these sites are highly vulnerable to being fooled by a socialbot. Users expect to get to know each other through very limited forms of communication, for the safety of getting to know a stranger on the sites you listed (Tinder and OKCupid). I am not familiar with the latter, but having used the former app I understand that users cannot message each other unless both parties indicate they are interested in talking to one another by swiping right on their smart devices. With that precaution in place, I would imagine that users are less likely to suspect that the person they are talking to is a bot. I can also see how bots could thrive in an online dating environment: for a bot learning about human behaviours and mannerisms through online communication, a dating app would be ideal, since it could start conversations with thousands of new people every day. Many of these conversations would parallel one another, as users try to get to know each other, often through "small talk" or friendly introductions, as they develop online relationships. As with many apps, McVeigh-Schultz and Baym suggest that there is a learned politeness or certain way of communicating on different platforms. Hence, with the ability to speak to thousands of people, these bots would become well experienced and conditioned to speak to new users on the app. This is quite a scary prospect, as people simply looking for love could end up falling in love with "a computer", as you say. To me, this presents many ethical issues surrounding users' privacy as well as their emotional and mental well-being.

    Gehl and Fuchs often discuss how social media platforms that use bots for data-mining purposes, changing the online communication environment in the process, are exploitative and violate the privacy of users. I believe that bots on dating sites take this issue even further and can cause real emotional damage to users. I wonder whether users who have been duped by bots have filed legal cases, and how well those would hold up in court...
