Sunday, 28 February 2016

Introducing New Data Antiperspirant™ and Data Body Wash™

In her discussion of big data, Melissa Gregg argues that we should think about our relation to data according to the concept of “data sweat” (44). For Gregg, sweat illustrates the “existence of data that is essential to us” (45). Like sweat, data can unintentionally seep out, can be an annoyance or an accomplishment (depending on the context), and leaves behind a trace. While we may try to clean up our online image, Gregg claims that this is merely an “attempt to control what is ultimately out of our control” (45). Gregg sees this as problematic because powerful interests benefit from the lack of control that we have over our data sweat.



While Gregg briefly notes that there is an industry of perfumes and deodorants that accompany the need to disguise sweat, she does not fully develop this part of the metaphor in relation to data. Accordingly, I want to extend her analysis by offering two ways that we can control data sweat: data antiperspirant and data body wash.


First, data antiperspirant refers to a preventative measure that one can use in order to reduce or eliminate their data sweat. One example of a data antiperspirant is the software program Disconnect. In short, Disconnect blocks companies, governments, and individuals from tracking and collecting your data. In other words, Disconnect prevents you from sweating data while you are active online. In this case, “data antiperspirant” is more appropriate than “data deodorant” because the former prevents or reduces sweating, whereas the latter merely conceals unpleasant odours.
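To make the antiperspirant metaphor a little more concrete: Disconnect's internals aren't described in this post, but a tracker blocker of this kind can be pictured as a filter that refuses to load requests bound for known tracking domains. The sketch below is a minimal, hypothetical illustration; the domain list and function names are my own, not Disconnect's.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of tracking domains; a real blocker like Disconnect
# relies on far larger, regularly updated lists.
TRACKER_DOMAINS = {"tracker.example.com", "ads.example.net"}

def allow_request(url: str) -> bool:
    """Return False if the request would send data to a known tracker."""
    host = urlparse(url).hostname or ""
    # Block the listed domain itself and any of its subdomains.
    return not any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS)

if __name__ == "__main__":
    print(allow_request("https://tracker.example.com/pixel.gif"))  # False (blocked)
    print(allow_request("https://news.example.org/article"))       # True (allowed)
```

A real blocker sits inside the browser's request pipeline rather than checking URLs after the fact, but the principle is the same: the data never sweats out in the first place.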


Second, data body wash refers to software programs that help one clean oneself of data sweat that has already formed. One example of a data body wash is the software Repnup, which helps people clean up their social media profiles by flagging potentially inappropriate content. Users can look over what has been flagged and delete the content that they want. Like an exfoliating body wash, Repnup helps users clean away their “inappropriate” data sweat. Repnup, however, is not a deep cleaning body wash. Repnup neither removes data sweat that has already been obtained by third parties, nor helps users deal with data sweat that is not considered inappropriate.
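To illustrate the body wash metaphor, here is a deliberately simple sketch of how a profile-cleaning tool might flag posts for a user to review. Repnup's actual criteria aren't described in this post, so the keyword list and logic below are purely hypothetical.

```python
# Hypothetical "inappropriate" terms; a real service presumably uses a much
# richer model to decide what gets flagged.
FLAG_TERMS = {"party", "hungover", "fired"}

def flag_posts(posts):
    """Return the subset of posts a user might want to review before deleting."""
    flagged = []
    for post in posts:
        words = {w.strip(".,!?").lower() for w in post.split()}
        if words & FLAG_TERMS:
            flagged.append(post)
    return flagged

if __name__ == "__main__":
    timeline = ["Great seminar today", "So hungover, skipping class"]
    for post in flag_posts(timeline):
        print("Review:", post)  # only the second post is flagged
```

Note that, like Repnup, this only scrubs what still sits on your own profile; it does nothing about copies already held by third parties.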

  
These two concepts – data antiperspirant and data body wash – draw attention to the ways that users can presently exercise some control over their data sweat. In addition, the idea of data body wash highlights the need for something that can clean away the sweat that has seeped into the crevices of third parties.

Discussion Questions:
  • What are some additional ways that people can reduce their data sweat?
  • Do software programs that embody the concepts of data antiperspirant and data body wash adequately deal with the problems of data sweat? Is there a need for a different solution?

Tuesday, 16 February 2016

Build for the People... of Facebook





Social media platforms like Facebook increase in value as their user base increases; their value is therefore capped only by the size of the entire population of the world, and Facebook aspires to reach that limit. This past year Facebook has been gloating about its work towards fulfilling its promise of connecting everybody who has access to a mobile device to the internet. The company extends this promise with the goal of allowing everybody “to share their creativity, ideas, and passions with the world” (Facebook for Developers, 2015). It plans to implement this goal with its partner Internet.org, using their joint service named “Free Basics.”
There are a number of controversial and problematic aspects to this. Aside from the fact that the Internet.org platform violates net neutrality by limiting its networking capacities to the Facebook platform, there are capitalist goals beneath the social equity rhetoric. The first is that Facebook uses this partnership to expand its user base into the largest unconnected market in the world. Although the service is promoted on the premise that everybody in the world should have an opportunity to communicate and share their ideas online, the underlying reality is that Facebook’s user base is growing globally into untouched markets, which makes the platform increasingly attractive to advertisers and third-party interest groups.
Internet.org recently expanded its platform, allowing independent developers to create their own unique applications for the newly branded Free Basics service. There are, however, restrictions in the platform architecture that ensure compatibility only with specific mobile devices (Samsung, Nokia and Qualcomm, to name a few). This does give different users the ability to contribute to this “not-for-profit effort.” It also, however, encourages external developers to contribute to the success of Internet.org’s platform without being paid directly for their work. The external developers who are being called to “Build for the People” are being exploited.
Overall, the Free Basics service helps Facebook exploit a greater user base than any other social media platform. It also exploits its application developers by inviting them to contribute to a not-for-profit platform while simultaneously profiting from the sale of the aggregated data those applications produce. Furthermore, it requires these developers to follow very rigid guidelines for application engineering, so that power continues to lie in the hands of Facebook and Internet.org.
The problematic intentions of Facebook have not gone unnoticed: India banned the Free Basics service last week. It will be interesting to see whether other countries follow suit, or whether they are convinced by “for the people” rhetoric that carries for-profit intentions.

Facebook for Developers. (2015, March 15). What’s Free Basics Platform? [Video file]. Retrieved from https://developers.facebook.com/docs/internet-org

Monday, 15 February 2016

Virtual Identity Suicide For All!

After reading the Daily Mail article “Facebook users are committing ‘virtual identity suicide’ in droves and quitting the site over privacy and addiction fears” posted in the explorations and provocations folder (woah, mouthful), I couldn’t help but think about the image I used in class a couple of weeks ago:

Although internet addiction was one of the lowest categories chosen by respondents in the article, it still made the headlines. This is probably because the word “addiction” is eye-catching and a good hook for a story. In this instance, I believe the word addiction is extremely accurate. Internet addiction has been worsened by mobile devices. People incessantly check their phones for social media updates, carrying technology around with them as if it were an extra limb, much like Slack’s conceptualization of the cyborg body. Social media use should not feel like a need. Internet (maybe even technology) addiction is exacerbated by the young age at which children start becoming reliant upon technologies. In the words of Ellen DeGeneres, kids need naps, not apps. #preach

Granted, technologies also have positive value. Technology has the capacity to do great things in education, medicine, and research. However, I firmly believe in “everything in moderation.” Social media use, on the whole, has surpassed moderate, resulting in people “needing” to pop some Facebook or Twitter or Instagram. How simple those times seem when parents merely received notes from the Harper Valley PTA complaining about wardrobe concerns. (Anyone know the reference? Great tune…)

I personally think more people should commit virtual identity suicide. In my sophomore year of university, I committed virtual identity suicide for about six months (before pathetically crawling back after joining the Geneseo softball team and wanting to be part of the Facebook group). Social media use has become so ingrained in our society that it is leaking into our real and tangible activities. When people go out to eat at a restaurant they should be talking to each other, not texting from two feet away, not checking social media, not updating their status, and definitely not taking pictures of their food. #InstagramHusbands



And with this last image I will leave you with the question: What is the world coming to….
Developing Country Value Generation in Social Media

In “Class struggles in the digital frontier,” Eran Fisher talks about Facebook users as prosumers, i.e., consumers who produce surplus value, such as user-generated advertisements that bring in revenue for social media corporations. This brings me to Mark Andrejevic's “Personal data,” wherein the trove of social media users' personal information can be used to profile users regarding their suitability for employment, loans, etc., while creating networks of affective investment that are then sold to advertisers. But how useful can this information be when the user is situated in a developing country where advertising and financial matters (loans, employment, etc.) are considered differently? Where users do not pay much heed to Facebook advertising, since they mostly use social media as a free tool to communicate with others? Does this mean that the value of Facebook users' information varies according to their geographic location? How, then, can the information and activities of a developing country's users be of any potential value? Perhaps value can be determined or generated through other means in such cases. The Free Basics app scenario in India is one such scheme. Facebook attempted to disseminate this app (which provides free basic Internet services but charges for anything extra) to users in India, thus paving the way for online international markets. The app was blocked by the government since it violated net neutrality, i.e., it privileged certain sites over others: sites that have agreements with Facebook. Bloggers have expanded on this debate:

http://paulwriter.com/facebooks-internet-org-india-line-principles-net-neutrality/

http://recode.net/2016/01/19/facebooks-regulatory-battle-over-free-basics-in-india-is-getting-feisty/


Facebook then overtly recruited users to demonstrate against this blockage. This scenario suggests another form of revenue accumulation: by cracking open the online markets of developing countries, Facebook creates new potential for growth, and by harnessing users to support these schemes, it has the means to further its business interests.

#RIPTwitter

In early February, it was rumoured that changes were coming to the chronological timeline on Twitter. Unlike Facebook, Twitter orders its content in reverse chronological order, allowing strings of tweets to be read in their proper order and allowing voices to potentially be heard equally amongst the audience. The rumoured changes, said to be coming within the next week, included the removal of the 140-character limit and the introduction of an algorithm to show more popular tweets first.


This change left many users on the platform outraged; they did not want to see the affordances, or their imagined affordances, of the site changing. Though an algorithmic timeline would make it easier to know that you had seen everything you wanted to see without scrolling through the entire timeline, it would also make Twitter much more appealing to advertisers, guaranteeing that ads would be seen by a certain number of users.
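The difference between the two orderings is easy to see in miniature. Twitter's real ranking model is not public, so the "algorithmic" score below is a toy stand-in (likes plus double-weighted retweets), meant only to contrast engagement-ranked output with the reverse chronological timeline.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    timestamp: int  # e.g. seconds since epoch
    likes: int
    retweets: int

def chronological(tweets):
    """Twitter's original ordering: newest tweets first."""
    return sorted(tweets, key=lambda t: t.timestamp, reverse=True)

def algorithmic(tweets):
    """Toy 'relevance' ordering: most-engaged-with tweets first."""
    return sorted(tweets, key=lambda t: t.likes + 2 * t.retweets, reverse=True)

if __name__ == "__main__":
    timeline = [
        Tweet("old but viral", timestamp=100, likes=500, retweets=300),
        Tweet("brand new", timestamp=900, likes=2, retweets=0),
    ]
    print([t.text for t in chronological(timeline)])  # ['brand new', 'old but viral']
    print([t.text for t in algorithmic(timeline)])    # ['old but viral', 'brand new']
```

Whatever signals the real algorithm uses, the effect is the same: what you see first is no longer decided by when it was said, but by a score the platform controls.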



Jack Dorsey, Twitter's CEO, was quick to try to quash these rumours, but left a bit of wiggle room by saying that the changes "were never planned for next week," leaving plenty of questions over whether the changes would be coming in the near future.


On February 10th, Twitter did unveil its algorithmic timeline, though it currently exists as an opt-in feature found within the individual user's settings.
Does an opt-in feature satisfy both the appeal to advertisers and the comfort and happiness of Twitter's current users? Is it a step toward a democratic space, allowing the user to decide how they want their timeline to operate? Or is it perhaps an indication that these changes will eventually happen without the choice to opt in, the option only there for now to get users curious about what the new timelines would look like?

Saturday, 13 February 2016

From Google to Academia

Fuchs allocates a chapter of his book to Google and its work environment. According to Fuchs, there are pressures within the company driving everyone to maximize efficiency and contribute meaningfully to the Google network in their “own” time. This system of peer pressure motivates employees to uphold incredibly high standards, and as a result a healthy work/life balance is often jeopardized. Furthermore, Google employees are expected to spend 20% of their paid time working on projects of personal interest. However, these projects ultimately benefit Google. Although an employee can take credit on a CV or resume for having come up with a program, the program ultimately serves the corporate giant that is Google.

Thus far, I have simply summarized some of the key points discussed by Fuchs. Now, I am going to take these ideas and apply them to a different working environment: Academia. 


Professors are expected to produce publications, much as Google employees are expected to build Google-oriented programs. There are pressures in place, such as the quest to achieve tenure, motivating employees to maximize efficiency in order to make a meaningful contribution to academia (i.e., publications). Since extensive research, and sometimes travel, is often involved in producing these writings, a healthy work/life balance may be jeopardized. Although professors can research areas of personal interest, they are still expected to produce research that ultimately benefits the image of their university. Although the professor may take credit for an article on a CV or resume, it ultimately serves the corporate giant that is the university. The university can attract more students with a more impressive faculty, thus making more money.

Furthermore, universities house their students, so on-campus dining and exercise facilities must be available. Professors have access to these amenities, just as Google employees have access to similar services. Granted, professors may have to pay to access these services, yet they are still available. This promotes an atmosphere of working late on campus, knowing that you can take a break at the gym or run down to the concourse for a quick burger before returning to the office. When a publication is finally achieved, the institution has another professor to proudly boast of to prospective students. (Is this pressure the reason why Fuchs was compelled to publish a bunch of work that basically says the same thing? We all know his book does that…)

Just as being a Googler is romanticized, so is being a professor. Many people believe professors have a stress-free job; they can pick and choose when they want to be on campus and they don’t have to work the entire year to get a full year’s salary. Just as people outside of Google overlook the hard truths associated with the job, people overlook the struggles of academia. (Here is an article describing those struggles)

Although comparing Google to a university is not a perfect analogy, there is clearly some crossover. It may be fruitful to examine whether Google’s tools of exploitation are apparent in other work environments.

Work Cited:

Fuchs, C. (2014). Social media: A critical introduction. London: SAGE Publications Inc.

Friday, 12 February 2016

Mass Media On Social Media? Television Shows On Instagram?

As a class, we have talked about power imbalances on social media sites between users, producers, and designers. We have looked at how user-generated data is capitalized on by producers to create different forms of exploitation, and we have focused on the importance of both users and designers in populating and framing these sites of communication. During my presentation I suggested that mass media and social media are currently integrated to change the viewer/user experience. I found this even more prevalent with the recent release of the first-ever scripted television show on Instagram: Shield 5. It will consist of 28 episodes, each 15 seconds long, with one episode released per day.
Check it out here: Shield 5



Gehl (2014) suggests that online advertisements are now more covert on social media, and I draw on Fuchs (2014), who further states that advertisers are able to create more directed advertisements targeted at audiences based on corporations’ access to users’ online behaviours and constructed “profiles” built from big data. By streaming Shield 5 episodes on the widely used platform of Instagram, with what appears to be an action-packed teen or young adult drama, I believe the producers of Shield 5 are able to reach their target audience and track audience reception more easily. This particular age group can be reached more easily through this social media site than through mass media television.

Instead of having fans individually talk about the show on social media themselves, the producers make this process more accessible by producing the show on social media. Each video already has thousands of likes and comments! This encourages more interaction between users and further popularizes the episodes by creating more buzz and discussion on Instagram, where viewers can watch and talk about the show directly with other viewers. Through this platform, the producers are able to utilize hashtags, which gives them more power to suggest how the series should be discussed in the public domain. Herman acknowledges the ability of users to express their opinions based on the affordances of the social media platform itself. Using hashtags allows the videos to be included within relevant spheres of conversation between users following those hashtags on Instagram. Borrowing from Slack and Wise’s articulation and assemblage, hashtags allow the producers more agency in creating associated ideas and focusing on the importance of certain aspects of the show: using the affordances of Instagram’s user interface, they are able to categorize and classify how the show is to be understood and within which particular genres and topics of conversation.
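As a rough illustration of that categorizing work, a platform can be thought of as indexing every post under the hashtags it contains, so that anyone following a tag encounters the show's clips inside that stream of conversation. The sketch below is a generic, hypothetical index, not Instagram's actual implementation.

```python
import re
from collections import defaultdict

def group_by_hashtag(posts):
    """Index each post under every hashtag it contains."""
    index = defaultdict(list)
    for post in posts:
        for tag in re.findall(r"#(\w+)", post):
            index[tag.lower()].append(post)
    return index

if __name__ == "__main__":
    posts = ["New episode tonight! #Shield5", "Loving #Shield5 so far"]
    for tag, tagged in group_by_hashtag(posts).items():
        print(tag, len(tagged))  # shield5 2
```

The point for the producers is that whoever controls the tags attached to the episodes also shapes which conversational streams the show appears in.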


Following the first episode, a picture was uploaded to the Instagram account including a wanted poster of the main character introduced in that episode, suggesting a multimedia approach to the show. By producing artifacts on the site that offer clues about the plot, the producers further discussion about the show and create even more interaction between users.


I personally think the producers capitalize on this process, in the sense of Castells’ network theory, which claims that sites generate value from their users. I apply this sense of value not only to the collection of big data but also to the producers’ ability to capitalize on the increased exposure of the show through a multimedia approach. This may lead to more loyal or dedicated fans, as they further discussion about the show on the interactive platform on which the show is actually aired. McVeigh-Schultz and Baym suggest that affordances of technology are nested within various levels of interaction, on a scale. I believe that the affordances of viewing this show on Instagram increase the show’s ability to be discussed on this particular platform. Instagram users are familiar with the site’s affordances: by knowing how to access the show on this platform, users have the capability, and are more likely, to comment and interact with other users there.

Thursday, 4 February 2016

I’m in Love with a Computer?



Online dating is complicated.

Trying to find “the one” is difficult, let alone navigating through a sea of bots.

Yet bots are not unique to dating apps. In Reverse Engineering Social Media, Robert Gehl notes that “socialbots” are spreading across social media. Designed to pass as human, socialbots have profiles, post status updates, and respond to messages from other users. Gehl claims that socialbots can pass as human because social media users produce “states of mind” that are discrete enough to be imitated by bots (p. 27).
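To give a sense of how low the bar for "passing" can be, here is a deliberately crude, hypothetical bot that simply waits a human-like interval and returns a canned reply. Gehl's point is that real socialbots imitate users' expressed "states of mind" far more systematically than this; the sketch only shows the basic respond-to-messages loop.

```python
import random
import time

# Hypothetical canned replies; an actual socialbot would model the "states of
# mind" its targets express rather than rely on a fixed lookup table.
CANNED_REPLIES = [
    "Haha, totally agree!",
    "That's so interesting, tell me more?",
    "I was just thinking the same thing.",
]

def bot_reply(incoming_message: str) -> str:
    """Pretend to engage with a message by returning a plausible reply."""
    time.sleep(random.uniform(1, 3))  # a human-like pause before responding
    return random.choice(CANNED_REPLIES)

if __name__ == "__main__":
    print(bot_reply("Did you see the game last night?"))
```

Even something this shallow can sustain a short exchange, which is part of why the dating-app cases below are so plausible.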


From Tinder to OKCupid, bots pervade dating apps. The degree to which bots on dating apps can pass as human varies; some are laughably unconvincing, whereas others are endearingly believable. While socialbots are still in their early stages, some have successfully duped online daters for periods of up to two months. In fact, the concern about how to differentiate a bot from a human in online dating has spawned numerous advice articles, such as “How to Find out if You Are Dating a Robot”, “How to Avoid Fake Tinder Profiles”, and “How to Avoid Falling in Love with a Chatbot”. This suggests that some online daters are worried about being tricked into developing feelings for a bot.


Is Johnny a convincing bot?

That bots can influence our emotions relates to what Peter Nagy and Gina Neff have termed “imagined affordance”. Nagy and Neff describe imagined affordances as emerging “between users’ perceptions, attitudes, and expectations; between the materiality and functionality of technologies; and between the intentions and perceptions of designers” (p. 5). For Nagy and Neff, this means that a user’s expectations will shape how they approach a technology.


With bots on dating apps, imagined affordances emerge between users, technologies (e.g. the bot, the dating app, etc.), and designers. Regarding dating apps, imagined affordances emerge between: (1) the user’s expectation that they are communicating with another human (and/or the user’s wariness that they are being duped by a bot); (2) the app’s functions and materiality, such as its aesthetic design, messaging features, etc.; and (3) the intentions of designers to connect people, generate profit by selling memberships and/or user data, etc. Regarding bots, imagined affordances emerge between: (1) the user’s perception that the bot is a human (or that the bot is a bot); (2) a bot’s ability to accurately imitate the discrete states of a human mind; and (3) the intention of designers to fool users into thinking that the bot is human. In turn, these factors shape the affective ties between users and bots. For instance, if a user believes that a bot is human, then there is the possibility that the user will form strong emotional ties (e.g. affection, care, lust, etc.) to the bot that are akin to those the user would have for another (mutually interested) human user.

While I have only provided some preliminary thoughts in this post, we might expand this discussion by exploring some of the following questions:
  • What other aspects of the relationship between bots and humans on dating apps are salient to the discussion of imagined affordances?
  • How does the concept “imagined affordance” apply to cases where humans are deceived about a technology?
  • On dating apps, what might characterize the affective relationship between a bot and a human who is under the impression that the bot is a human? 

Tuesday, 2 February 2016

Facebook: Come to the factory, it's fun!



A recent news story about Facebook's efforts to launch an initiative in India called "Free Basics" caught my interest, especially after reading Fuchs and the concepts from autonomous Marxism (especially exploitation) that he argues are an integral part of social media sites. According to this article,


“Free Basics is a pet project of CEO Mark Zuckerberg that brings limited Internet use to those who would otherwise not be able to afford it. Originally launched as Internet.org, the service is available in over 30 countries.”

The service provides free internet access, but it is channeled through Facebook and allows users to access only certain sites and information. A second article describes Free Basics as “a lightning rod for critics who say it actually gets in the way of a free and open Internet, creating a walled garden favoring Facebook and a small number of online venues. Others have accused Facebook, the world's largest social network, of forcing companies to offer their services at no cost”.

An Indian regulatory agency, the Telecom Regulatory Authority of India (TRAI), is collecting opinions from Indians about the Free Basics program. Zuckerberg (via Facebook of course) launched a campaign of support called “Save Free Basics” that provided skewed information and inundated the TRAI with 16 million e-mails sent from a template he provided.



So, we ask ourselves, why are Zuckerberg and Facebook so intent on “giving the internet for free” to the people of India? "We are committed to Free Basics and to working with Reliance and the relevant authorities to help people in India get connected," a Facebook spokesperson said. Could it be that adding multiple millions of prosumers to the Facebook factory could enrich Facebook exponentially, bringing in considerably more value to the Facebook corporation than the (minimal) cost of setting up Free Basics? As Castells notes, those in power "have made it their priority to harness the potential of mass communication in the service of their specific interests" (Fuchs, p. 76). Exploitation, anyone?