Good fences make good neighbours: Interrogating how geo-fencing operates as surveillance technology

A geofence is a virtual perimeter around a real-world geographic area. Geofences established around target locations, such as stores, malls, factories, neighbourhoods, and cities, transform the location information gathered by data-collecting software on smartphones, including GPS coordinates, cellular network data, RFID, and Wi-Fi data, into saleable data. Data vendors such as the Thasos Group [1] sell this geofenced information to clients as alternative data used to gain insight into the investment process. Personal injury law firms [2] use geofencing to target ads onto the smartphones of patients waiting in hospital emergency rooms and chiropractic clinics. In 2017, the Massachusetts Attorney General banned Copley Advertising [3] from using geofenced data to target women accessing reproductive health facilities with anti-abortion advertisements on their smartphones. This micro-targeting of ads suggests that advertising is now a system of surveillance. My presentation will focus on showing how geofenced digital advertising is currently being used in political campaigns; as people’s habits and values are used to micro-target votes via data-driven discriminatory techniques, I will discuss how surveillance technologies are being sutured into the political campaign cycle.

[1] Dezember, Ryan. “Your Smartphone’s Location Data Is Worth Big Money to Wall Street.” Wall Street Journal, November 2, 2018. <https://outline.com/yPWTZR>

[2] Allyn, Bobby. “Digital Ambulance Chasers? Law Firms Send Ads To Patients’ Phones Inside ERs.” NPR, May 25, 2018. <https://www.npr.org/sections/health-shots/2018/05/25/613127311/digital-ambulance-chasers-law-firms-send-ads-to-patients-phones-inside-ers>

[3] Press Release. “AG Reaches Settlement with Advertising Company Prohibiting ‘Geofencing’ Around Massachusetts Healthcare Facilities.” Office of the Massachusetts Attorney General, April 4, 2017. <http://www.mass.gov/ago/news-and-updates/press-releases/2017/2017-04-04-copley-advertising-geofencing.html>
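To ground the abstract’s central term: in software, a basic circular geofence reduces to a distance test run against a stream of GPS fixes. Below is a minimal Python sketch of that perimeter check (the coordinates, radius, and scenario are hypothetical, for illustration only):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def inside_geofence(point, centre, radius_m):
    """True if a GPS fix falls within a circular geofence."""
    return haversine_m(point[0], point[1], centre[0], centre[1]) <= radius_m

# Hypothetical scenario: has this device entered a 100 m fence around a clinic?
clinic_centre, clinic_radius_m = (42.3601, -71.0589), 100
phone_fix = (42.3605, -71.0585)
if inside_geofence(phone_fix, clinic_centre, clinic_radius_m):
    print("Device inside geofence: flagged for targeted advertising")
```

Run continuously against the location data a phone already emits, a test this simple is all it takes to turn a clinic, a mall, or a polling place into an advertising trigger.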

The Coded Gaze Watches Hollywood

by Aaron Tucker

Joy Buolamwini (2016), in her recent work with the MIT Media Lab creating the Algorithmic Justice League, establishes the problematics behind what she calls the “coded gaze”: as organizations ranging from government and law enforcement agencies (Statt, 2018) to banking institutions and advertising agencies (“Amazon Rekognition Customers”) borrow from common code libraries, such as Microsoft’s Face API and Amazon’s Rekognition, to build their bedrock functions, the embedded conscious and unconscious biases of the initial programmers, very often white males, are repeatedly perpetuated (“InCoding – In The Beginning” para. 6). Although her recent work has led to improvements in this field (Feloni, 2019), Buolamwini flags facial recognition software’s universalist design principles as particularly problematic: the software is an increasingly ubiquitous presence that weaves surveilling technologies into corporate and security applications, which are then utilized for everything from border patrols, commercial data gathering, and police investigation to built-in features of popular mobile phone apps, like Instagram and the basic iPhone camera application.

While the prevalent use of biometric monitoring in “private” commercial spaces like malls is deeply troubling (Harris, Smith, and Seucharan, 2018), facial recognition software’s inherent flaws are far more damaging when applied in law enforcement contexts. “The Perpetual Line-Up” (2016), a study from the Center on Privacy and Technology at Georgetown Law written by Garvie, Bedoya and Frankle, states that nearly one in every two U.S. adults is already in a facial database (para. 3), and that the use of facial recognition software, especially by law enforcement, is grossly unchecked, unregulated, and unaudited; one of the most damning conclusions they draw is that “face recognition may be least accurate for those it is most likely to affect: African Americans” (para. 25).

The reasons for this mis-/non-recognition are relatively straightforward: in discussing their Gender Shades project (gendershades.org), Buolamwini and Timnit Gebru (2018) show that AI-driven machine learning algorithms have an especially difficult time even recognizing dark-skinned, female faces. This is further supported by earlier research by Phillips et al. (2011), which draws a strong correlation between humans’ increased ability to “recognize” faces of their own ethnic backgrounds and the facial recognition algorithms being developed carrying those same biases, establishing what they call the “Caucasian face advantage” (8). Echoing this, in the introduction to Pattern Discrimination (2018), Clemens Apprich argues that when the constant flow of individuals’ self-surveilling data is fed through algorithms that attempt to discern patterns from that data, “Far from being a neutral process, the delineation and application of patterns is a highly political issue”; the “Caucasian face advantage” is not only manifested by the algorithms that process facial data but is also part of a larger regime that recognizes and applies certain “patterns” and algorithms over others, and only to certain populations. Extending from this, Sasha Costanza-Chock (2018), borrowing from Patricia Hill Collins, flags biometric monitoring and facial recognition software as part of the modern intersectional “matrix of domination” wherein “race, class, and gender [are] interlocking systems of oppression” (para. 14).
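To illustrate the mechanics of the kind of intersectional audit Gender Shades performs (a minimal sketch with fabricated toy records, not Buolamwini and Gebru’s actual code or data), the snippet below disaggregates a classifier’s error rate by skin type and gender; it is exactly this disaggregation step that exposes how an apparently accurate aggregate score can hide severe failures for darker-skinned women.

```python
from collections import defaultdict

# Hypothetical audit records: (skin_type, gender, classified_correctly).
# A real audit, like Gender Shades, uses hundreds of labelled faces per subgroup.
results = [
    ("lighter", "male", True), ("lighter", "female", True),
    ("darker", "male", True), ("darker", "female", False),
    ("darker", "female", True), ("darker", "female", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for skin, gender, correct in results:
    group = (skin, gender)
    totals[group] += 1           # how many faces fall in this subgroup
    if not correct:
        errors[group] += 1       # how many the classifier got wrong

# Report the error rate per intersectional subgroup, not just overall.
for group in sorted(totals):
    print(f"{group[0]} {group[1]}: error rate {errors[group] / totals[group]:.0%}")
```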

From this, a 2018 Always-On smartphone user is not only enabling their own surveillance through facial-recognition-enabled apps, GPS data tracking, and engagement with the Internet of Everything, but is also capturing all the others within their on- and offline systems. When facial recognition software becomes a part of every node of the synopticon (Mathiesen, 1997), via the smartphone camera and the apps that utilize the software, this increased surveillance, based in a fundamentally faulty technology, becomes much more problematic. These fears are mirrored by an increase in what Catherine Zimmer calls Surveillance Cinema (2015), in which “the multiple mediations that occur through the cinematic narration of surveillance, through which practices of surveillance become representational and representational practices become surveillant, and ultimately the distinctions between the two begin to fade away” (2). While the beginnings of her work overlap with Norman K. Denzin’s The Cinematic Society (1995) and Wheeler Winston Dixon’s It Looks at You (1995), Zimmer updates her views to consider the hyper-mediated, hyper-networked reality of a 2019 cinema audience, who are themselves generating terabytes of video, and focuses on how that audience uses its own (self-)surveillance devices.

My research-creation project blends the concept of Surveillance Cinema with the aforementioned problematics around the increasingly common use of facial recognition software. The included video is the first in a series of video art pieces titled “Envisaged”; this video focuses on women using the Internet in movies from the 1990s, and it also includes images of intersectionally disadvantaged people drawn from a facial recognition database of the same era. This video, and the project’s future iterations, make arguments about what sorts of “vision” such an app normalizes, and about what casting machine vision, a synoptic coded gaze, over everyday landscapes and species does to those landscapes and species. I have chosen Hollywood films as my “footage” because, as popular scripts, they make persuasive arguments about how their audiences should feel about their current technologies, while also providing imaginative, future-looking cinematic prototypes of technologies that greatly influence real-life design. It is here that the contemporary #hollywoodsowhite and #metoo movements collide with the inherent biases of so many software cores. Intersectionally affected Internet users were heavily underrepresented in early cinematic versions of the Internet, and when they did appear they were generally punished for their virtual usage, victimized for daring to use the technology, or demonized as online seductresses (with roots in Metropolis’s false Maria-as-Whore of Babylon (Dir. Fritz Lang, 1927)); those who were not pushed to either pole were cast as passive partners in support of a great white male tech genius. By combining facial recognition software with these scenes, I make obvious the overlapping failures of both Hollywood and tech companies and the prevalence of surveillance software, and I bring the audience’s attention to the limits of both the films’ cultural scripts and the facial recognition software. Doing so activates Donna Haraway’s cyborg (1985) and N. Katherine Hayles’s posthuman (1999), and allows me to argue for a critical posthumanism, as theorized and discussed primarily by Rosi Braidotti (2013), as well as Stefan Herbrechter (2013) and Pramod K. Nayar (2014). By making work through the lens of a critical posthumanism, I am attempting to break the anthropomorphic hierarchies that mark a European-centric Humanism.

“Envisaged” is a collage of found footage, ranging from Hollywood movies to Internet commercials to cinema of attractions-era films to photos from facial recognition databases, all captured by an iPhone 6 and its facial recognition capabilities. The bulk of the film’s documents are taken from Internet-centric movies released in the 1990s, a key era in both the explosion of popular Internet adoption and the establishment of central face databases like FERET (1992–96). Juxtaposing such documents, as seen through a contemporary version of common computer vision, with documents from cinema’s early history demonstrates how facial recognition software and Hollywood cinema both struggle to identify and value certain faces and populations, in ways that have roots reaching back decades, even centuries.
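To make concrete how trivially the coded gaze can be cast over a film still (a sketch only: this borrows a stock, off-the-shelf detector rather than reproducing the iPhone pipeline used for “Envisaged”, and the file paths are hypothetical), consider how few lines of common library code are involved:

```python
import cv2  # OpenCV, one of the commonly "borrowed" computer vision libraries

# Load OpenCV's stock frontal-face detector: whatever biases are embedded in
# its training data travel with it into every application that borrows it.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("film_still.jpg")  # hypothetical path to a 1990s film frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Any face the stock model fails to see simply never appears in this list;
# mis-/non-recognition is reproduced silently, with no error raised.
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("film_still_detected.jpg", frame)
```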

Bibliography

“Amazon Rekognition Customers.” Amazon.com, https://aws.amazon.com/rekognition/customers/. Accessed Jan. 29, 2019.

Apprich, Clemens. “Introduction.” Pattern Discrimination. Meson Press, 2018.

Braidotti, Rosi. The Posthuman. Cambridge, England: Polity Press, 2013.

Buolamwini, Joy. “InCoding – In The Beginning.” Medium, May 16, 2016, https://medium.com/mit-media-lab/incoding-in-the-beginning-4e2a5c51a45d. Accessed Oct. 15, 2018.

Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Conference on Fairness, Accountability, and Transparency, Proceedings of Machine Learning Research 81:1–15, 2018.

Costanza-Chock, Sasha. “Design Justice, A.I., and Escape from the Matrix of Domination.” JoDS, July 27, 2018, https://jods.mitpress.mit.edu/pub/costanza-chock. Accessed Oct. 15, 2018.

Denzin, Norman K. The Cinematic Society: The Voyeur’s Gaze. Calif.: Sage Publications, 1995.

Dixon, Wheeler Winston. It Looks at You. Albany: State University of New York Press, 1995.

Feloni, Richard. “An MIT researcher who analyzed facial recognition software found eliminating bias in AI is a matter of priorities.” Business Insider, Jan. 23, 2019, https://www.businessinsider.com/biases-ethics-facial-recognition-ai-mit-joy-buolamwini-2019-1. Accessed Jan. 29, 2019.

Garvie, Clare, Alvaro M. Bedoya, and Jonathan Frankle. “The Perpetual Line-Up.” Center on Privacy & Technology at Georgetown Law, 2016, https://www.perpetuallineup.org. Accessed Oct. 15, 2018.

Haraway, Donna J. “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century.” Simians, Cyborgs, and Women: The Reinvention of Nature, Routledge, 1991.

Harris, Tamar, Madeline Smith, and Cherise Seucharan. “Directories at Cadillac Fairview malls have cameras inside.” Toronto Star, July 26, 2018, https://www.thestar.com/news/gta/2018/07/26/directories-at-cadillac-fairview-malls-have-cameras-inside.html. Accessed Jan. 29, 2019.

Hayles, N. Katherine. How We Became Posthuman. Chicago: The University of Chicago Press, 1999.

Herbrechter, Stefan. Posthumanism: A Critical Analysis. Bloomsbury, 2013.

Mathiesen, Thomas. “The Viewer Society: Michel Foucault’s ‘Panopticon’ Revisited.” Theoretical Criminology, vol. 1, no. 2, 1997, pp. 215–234.

Metropolis. Dir. Fritz Lang. Ufa, 1927.

Nayar, Pramod K. Posthumanism. Polity Press, 2014.

Phillips, P. Jonathon, et al. “An Other-Race Effect for Face Recognition Algorithms.” ACM Transactions on Applied Perception, vol. 8, no. 2, 2011.

Rappaport, Mark. Rock Hudson’s Home Movies. Video. 1992.

Statt, Nick. “Amazon told employees it would continue to sell facial recognition software to law enforcement.” The Verge. Nov 8, 2018. https://www.theverge.com/2018/11/8/18077292/amazon-rekognition-jeff-bezos-andrew-jassy-facial-recognition-ice-rights-violations. Accessed Jan. 29 2019.

Tucker, Aaron. Interfacing with the Internet in Popular Cinema. Palgrave Macmillan, 2014.

Tucker, Aaron. Virtual Weaponry: The Militarized Internet in Hollywood War Films. Palgrave Macmillan, 2017.

Zimmer, Catherine. Surveillance Cinema. New York University Press, 2015.

CFP: Special Issue of Computers and Composition – Rhetorics of Data: Collection, Consent, & Critical Digital Literacies

CALL FOR PROPOSALS — Special Issue of Computers and Composition

Rhetorics of Data: Collection, Consent, & Critical Digital Literacies

Guest Editors: Les Hutchinson (Michigan State University) and Maria Novotny (University of Wisconsin Oshkosh)

Given the extent of regular breaking news coverage of user privacy violations (such as the recent whistleblowing on the Facebook and Cambridge Analytica collaboration or the 2017 Equifax data breach), Rhetorics of Data presents an opportunity for rhetorical action in regard to ethical questions about data collection, consent, and the need to acquire critical digital literacies as a response. This special issue draws on the journal’s history of scholarship that has defined critical digital literacies with regard to data collection. In 2008, Stephanie Vie noticed that, despite students having a great deal of experience with digital technologies, they lacked “critical technological skills.” Then, drawing from Vie’s scholarship, Estee Beck (2015) argued that “if educators ask students to dig into digital spaces that use tracking technologies, then they also have some responsibility to teach students about invisible digital identities, how to become more informed about digital tracking, and how to possibly opt-out of behavioral marketing.” Kevin Brock and Dawn Shepherd (2016) noted that our discipline’s tendency to focus on the pragmatic nature of procedural rhetoric, and on how it promotes “code literacy,” has lacked attention to the rhetorical potential behind people’s social, political, and cultural uses of technology for persuasive means. John Gallagher (2017), too, believed that computers and writing educators have an obligation to teach students to consider algorithmic audiences when composing on the Internet. Following Gallagher’s argument, Dustin Edwards (2018) agreed that online writers should attend to digital audiences like algorithms, but argued that they should also contend with the larger institutional structures that have designed the algorithms that run platforms, as well as the policies that shape users’ online experiences.

What has yet to be addressed by this critical conversation in the field is an attention to the correlation between consent and data. Our special issue specifically extends these conversations and connects to the call in the forthcoming Computers & Composition special issue Composing Algorithms: Writing (with) Rhetorical Machines by Aaron Beveridge, Sergio C. Figueiredo, and Steven K. Holmes. In their CFP, they argue that “We need to better understand the problems posed by algorithmic mediation, but we also need to get involved in making algorithms and studying them through computational and digital methods.” Our own special issue addresses this need by reflecting on the new scenes, methods, and pedagogies required for modeling critical digital literacy practices that promote user agency and consent surrounding rhetorics of data, which is often mediated through algorithms. We define critical digital literacies as critical methods of inquiry that 1) identify ethical concerns or issues within a technological infrastructure, 2) understand the rhetorical implications of these concerns or issues for how they impact people (users and non-users), and 3) respond with a range of tactics that promote more ethical outcomes for use of these technologies.

We offer this special issue as a designated space for contributors not only to identify and understand how data operates rhetorically, but also to offer action and response to issues concerning data collection. Data is more than a stagnant object; it is personal information collected through complex algorithms (Beck, 2015; Gallagher, 2017; Edwards, 2018) that often function without user knowledge, and it is then commodified and appropriated across networks by political and corporate giants and their unknown third-party affiliates. As Amidon and Reyman (2015) argue, user contributions are the very content that “‘writes’ the social web into existence,” and thus create enormous value.

This special issue seeks to build off of these conversations, asking questions such as:

  • How do we take up issues of data collection and ethics in our theories, teaching, research, and politics?
  • What theories can guide our pedagogies and research to implement critical digital literacies responses in our writing classrooms?
  • Where do we locate the impetus for critical digital literacies outside of the university and in our communities?
  • What ethical approaches to data collection can we adopt to better protect others while we educate and research?
  • How are other fields examining and teaching critical digital literacies, and what might rhetoric and composition learn and/or apply from such methods?
  • What role does data collection play in online writing environments, including (but not limited to) social media spaces and composing platforms like Google Docs?
  • How does the design and language of Terms of Service and Privacy Policies affect users of online technologies and platforms? What do users need for these policies to be more accessible? How can we teach these practices in our writing classrooms?

Contributions to this special issue will extend and forward these scholarly conversations by emphasizing the role of consent when enacting critical rhetorical action (response) in the classroom, in communities, and in our public sphere.

Examples of such responses could include:

  • the redesign of Terms of Service/Conditions and Privacy Policies to support user consent;
  • research gathered from community-based workshops that educate others on a particular privacy issue to promote critical digital literacy;
  • consensual website or app design;
  • service-learning assignments where students collaborate with a technology company to user-test the online safety of a digital product;
  • collaborative discussions regarding the creation of online spaces that promote intersectional resistance to marginalization and oppression;
  • and other critical, creative actions to ethical concerns surrounding data collection.

Timeline*

-Proposals due: March 15, 2019
-Preliminary decision on authors: May 15, 2019
-First drafts of 6,000-7,000 words (not including bib/works cited) due: January 15, 2020
-Feedback from editors on first drafts returned to author/s: March 15, 2020
-Article revisions due: June 15, 2020
-Article sent out for blind review: June 30, 2020
-Feedback from blind review returned to author/s: September 1, 2020
-Second article revisions due: January 1, 2021
-Ready for copyediting: February 1, 2021
-Publication: Fall 2021

*Our timeline provides the time and space for contributing authors to design classes to incorporate these foci for this upcoming academic year and/or obtain IRB for new research if needed. This is intentional as we understand our call, emphasizing response, may require additional research time.

Submission and Contact Details

Individuals, co-authors, or collectives should submit a 250-500 word proposal that clearly identifies an ethical, rhetorical issue concerning data collection and consent, proposes an engaged response for addressing this issue, briefly addresses its contribution to the field(s), and provides an overview of the article. Proposals should be submitted as .doc or .docx files to Les Hutchinson and Maria Novotny at rhetoricsofdata@gmail.com.

The editors enthusiastically encourage those interested to contact us for information or with any questions prior to submitting a proposal. Considering the special issue’s focus on response, we are happy to think through ideas together to ensure the success of the proposal.

Welcome to SurvDH!

We are a community of scholars dedicated to exploring the relationship between surveillance and the humanities, using an anti-colonial framework to analyze the ethics surrounding physical and digital surveillance methods, such as the use of algorithms, biometrics, social media, search engines, smart devices, and DNA. We examine the ways in which communities experience surveillance differently, based on factors such as race, ethnicity, gender, sexuality, and socioeconomic status, by developing work (research, digital projects, tutorials, lesson plans, etc.) that encourages the ethical treatment of all members of our community.

There are three ways to interact with this site:

Write

Send us a short blog post analyzing the role of surveillance in our culture. Posts should be between 300 and 900 words and should engage with theories and themes pertaining to critical digital scholarship. See our submissions page for more details.

Share

Tell us about opportunities for SurvDH scholars! Email us with CFPs, job announcements, funding opportunities, tutorials, and more. See our submissions page for more details.

Engage

Use hypothes.is to comment on our posts and to engage in dialogue with members of our community.

CFP: Global Digital Humanities Symposium

Global Digital Humanities Symposium
March 21-22, 2019
MSU, Main Library, Green Room
msuglobaldh.org

Call for Proposals
Deadline: November 15
Proposal form

Digital Humanities at Michigan State University is proud to extend its symposium series on Global DH into its fourth year. Digital humanities scholarship continues to be driven by work at the intersections of a range of distinct disciplines and an ethical commitment to preserve and broaden access to cultural materials.

Focused on these issues of social justice, we invite work at the intersections of critical DH; race and ethnicity; feminism, intersectionality, and gender; and anti-colonial and postcolonial frameworks to participate.

Given the growth of these fields within the digital humanities, particularly in under-resourced and underrepresented areas, a number of complex issues surface, including, among others, questions of ownership, cultural theft, virtual exploitation, digital rights, endangered data, and the digital divide. DH communities have raised and responded to these issues, pushing the field forward. We view the 2019 symposium as an opportunity to broaden the conversation about these issues. Scholarship that works across borders with foci on transnational partnerships and globally accessible data is especially welcome. Additionally, we define the term “humanities” rather broadly to incorporate the discussion of issues that encourage interdisciplinary understanding of the humanities.

This symposium, which will include a mixture of presentation types, welcomes 300-word proposals related to any of these issues, particularly on the following themes and topics, by Thursday, November 15, 11:59pm EST:

  • Critical cultural studies and analytics
  • Cultural heritage in a range of contexts
  • DH as socially engaged humanities and/or as a social movement
  • Open data, open access, and data preservation as resistance, especially in a postcolonial context
  • DH responses to crisis
  • How identity categories, and their intersections, shape digital humanities work
  • Global research dialogues and collaborations
  • Indigeneity – anywhere in the world – and the digital
  • Digital humanities, postcolonialism, and neocolonialism
  • Global digital pedagogies
  • Borders, migration, and/or diaspora and their connection to the digital
  • Digital and global languages and literatures
  • The state of global digital humanities community
  • Digital humanities, the environment, and climate change
  • Innovative and emergent technologies across institutions, languages, and economies
  • Scholarly communication and knowledge production in a global context
  • Surveillance and/or data privacy issues in a global context

Presentation Formats:

  • 5-minute lightning talk
  • 15-minute presentation
  • 90-minute workshop
  • 90-minute panel
  • There will be a limited number of slots available for 15-minute virtual presentations

Please note that we conduct a double-blind review process, so please refrain from identifying your institution or identity in your proposal.

Notifications of acceptance will be given by December 22, 2018.

CFP: #SMSociety 2019 – Rethinking Privacy and Trust in the Social Media Age

INTERNATIONAL CONFERENCE ON SOCIAL MEDIA AND SOCIETY

Toronto, Canada
July 19–21, 2019

IMPORTANT DATES:
  • Full & WIP Papers Due: Jan. 28, 2019
  • Panels, Workshops, & Posters Due: Mar. 18, 2019
SUBMISSION DETAILS:
THEME: Rethinking Privacy and Trust in the Social Media Age
Can we trust what we hear and see on social media? For a time, social media was viewed as a net positive for society. In their 2013 book, The New Digital Age, Google’s Jared Cohen and Eric Schmidt wrote: “Never before have so many people been connected through an instantly responsive network.” In 2015, Mark Zuckerberg, the co-founder and CEO of Facebook, wrote a glowing endorsement of the internet and social media, calling it “a force for peace in the world.” He argued that connecting people through social media would help to bring about a “shared understanding” of the human condition and build a “common global community.”
Fast forward to 2018: social media is now embroiled in a series of ongoing public scandals involving data abuse and misuse, with the most infamous scandal involving the UK data analytics firm Cambridge Analytica. More troubling is the fact that social media has emerged as fertile ground for fostering anti-social behaviour and has become an important vector for disinformation, misinformation, and manipulation operations. These realities have further raised users’ privacy concerns and challenged public trust in social media, resulting in revitalized calls for new legislation and regulation.
Considering this context, the International Conference on Social Media & Society invites scholarly and original submissions that explore key questions and central issues related (but not limited) to the 2019 theme of “Rethinking Privacy and Trust in the Social Media Age.” We welcome research from a wide range of methodological perspectives employing established quantitative, qualitative, and mixed methods, as well as innovative approaches that cross interdisciplinary boundaries and expand our understanding of current and future trends in social media research, especially research that seeks to explore:
  • What do ‘privacy’ and ‘trust’ mean in the social media age?
  • How can researchers study and operationalize these constructs?
  • What is the relationship between users’ trust in social media platforms, their privacy concerns, and their social media adoption and use?
  • What privacy-protective technologies can help to rebuild users’ trust in social media?
  • Can AI-based applications help to rebuild trust in social media and improve the credibility of social media content, and if so, how?
  • How does social media manipulation affect political trust and tolerance?
  • How is social media being used to help build and strengthen trust in political, economic, social and cultural realms of society?
  • What roles do alternative social networking services—such as Diaspora, Mastodon, and Gab.ai—play in the current social media environment where hashtag campaigns—such as #DeleteFacebook, #MAGA and #MeToo—have become prevalent?
  • What theoretical and methodological tools can researchers rely on for ethical and privacy-protective collection, analysis, and sharing of social media datasets?
  • What are the consequences of data regulations such as GDPR (General Data Protection Regulation) on the industry and users? Are these regulations effective?
  • What are emerging successful user engagement models for governments, journalists, financial institutions, marketers, and others in today’s social media landscape?
  • What is the future of social media research without APIs?
  • How can we measure authentic (or organic) user engagement while properly accounting for bot-driven (or paid) interactions?
  • How does algorithmic architecture influence how we discover and interact with others?
  • What role do algorithms play in creating a divisive culture on social media?
  • What are the ethical concerns surrounding algorithms (inclusion, accessibility, discrimination, bots), and how can they be mitigated?
ABOUT THE CONFERENCE
The International Conference on Social Media & Society (#SMSociety) is an annual gathering of leading social media researchers from around the world. Now, in its 10th year, the 2019 conference is being held in Toronto, Canada from July 19 to 21.
From its inception, the conference has focused on the best practices for studying the impact and implications of social media on society. Organized by the Social Media Lab at Ted Rogers School of Management at Ryerson University, the conference provides participants with opportunities to exchange ideas, present original research, learn about recent and ongoing studies, and network with peers.
The conference’s intensive three-day program features hands-on workshops, full papers, work-in-progress papers, panels, and posters. The wide-ranging topics in social media showcase research from scholars working in many fields including Management, Communication, Computer Science, Education, Journalism, Information Science, Political Science, and Sociology.
PUBLISHING OPPORTUNITIES
Full papers presented at the Conference will be published in the conference proceedings by ACM International Conference Proceeding Series (ICPS) and will be available in the ACM Digital Library. All conference presenters will be invited to submit their work as an expanded full paper to the special issue of the Social Media + Society journal (published by SAGE).
TOPICS OF INTEREST
SOCIAL MEDIA IMPACT ON SOCIETY
  • Privacy
  • Trust & credibility
  • Political mobilization & engagement
  • Extremism & terrorism
  • Politics of hate and oppression
  • Health and well-being
SOCIAL MEDIA & BUSINESS
  • Brand communities
  • Influencer & marketing
  • Public & customer relations
  • Cybervetting and HR
  • Risk management
SOCIAL MEDIA & PUBLIC SECTOR
  • Government social media management
  • Adoption, use, strategies and policies
  • Trust towards public agencies
  • Citizens’ engagement
  • Citizens’ privacy & security concerns
SOCIAL MEDIA & ACADEMIA
  • Alternative metrics
  • Learning analytics
  • Teaching with social media
  • University branding
ONLINE/OFFLINE COMMUNITIES
  • Online community detection
  • Influential user detection
  • Identity and anonymity
  • Case studies
SOCIAL MEDIA & MOBILE
  • Appification of society
  • Privacy & security issues in a mobile world
  • Encrypted messaging apps
  • Fake news & misinformation
THEORIES & METHODS
  • Qualitative approaches
  • Quantitative approaches
  • Mixed methods
  • Opinion mining & sentiment analysis
  • Social network analysis
  • Theoretical models
BIG & SMALL DATA
  • Value of small data
  • Data mining and analytics
  • Sampling issues
  • Visualization
  • Scalability issues
  • Ethics
ORGANIZING COMMITTEE
  • Anatoliy Gruzd, Ryerson University, Canada – Conference Chair
  • Priya Kumar, Ryerson University, Canada – Conference Chair
  • Philip Mai, Ryerson University, Canada – Conference Chair
  • Jenna Jacobson, Ryerson University, Canada – Full Paper Chair
  • Raquel Recuero, Universidade Federal de Pelotas (UFPel), Brazil – Full Paper Chair
  • Hazel Kwon, Arizona State University, USA – WIP Chair
  • Jeff Hemsley, Syracuse University, USA – WIP Chair
  • Anabel Quan-Haase, Western University, Canada – Panel Chair
  • Luke Sloan, Cardiff University, UK – Panel Chair
  • Jaigris Hodson, Royal Roads University, Canada – Poster Chair
ADVISORY BOARD
  • Susan Halford, University of Southampton, UK
  • Caroline Haythornthwaite, Syracuse University, USA
  • Zizi Papacharissi, University of Illinois at Chicago, USA
  • Barry Wellman, INSNA Founder, The NetLab Network, Canada