Momin M. Malik

I am currently a PhD student in the Societal Computing program at Carnegie Mellon University’s School of Computer Science. I am co-advised by Dr. Jürgen Pfeffer, who is now based at the Technical University of Munich, and by Dr. Anind K. Dey of CMU’s Human-Computer Interaction Institute.

My work brings statistical modeling to bear on critical, reflexive questions with and about social networks in large-scale digital trace data. The issues I address are summarized in a commentary co-written by Dr. Pfeffer, “Social media for large studies of behavior” (PDF). I am broadly concerned with algorithmic power and control, and with validity and rigor in computational social science: understanding the ways in which social media and other large-scale trace data may give a distorted picture of human behavior will allow us to make generalizations that are robust across time, platforms, and contexts, and that can ultimately inform just and effective policy-making.

My work thus far has focused largely on social media data. Currently, within Dr. Dey’s Ubicomp Lab, I am investigating the quality of sensors for collecting social network data and the nature of these data.

I also have ongoing work about understanding and communicating foundational problems in statistical models of social networks.


Download my current resume.

This summer, I am a fellow at Data Science for Social Good in Lisbon, Portugal, working on a project relating to sustainable tourism.

Read my thesis proposal. My committee consists of Dr. Jürgen Pfeffer (Bavarian School of Public Policy, Technical University of Munich, and Institute for Software Research, Carnegie Mellon University), Dr. Anind K. Dey (Human-Computer Interaction Institute, Carnegie Mellon University), Dr. Cosma Rohilla Shalizi (Department of Statistics, Carnegie Mellon University), and Dr. David Lazer (Political Science and Computer and Information Science, Northeastern University). They accepted my proposal on 19 May 2017; I aim to finish my dissertation by February 2018, and will go on the job market starting September 2017.

Current research

Jürgen Pfeffer and Momin M. Malik. (2017). Simulating the dynamics of socio-economic systems. In Betina Hollstein, Wenzel Matiaske, & Kai-Uwe Schnapp (Eds.), Networked governance: New research perspectives (pp. 143-161). Cham, Switzerland: Springer. doi:10.1007/978-3-319-50386-8_9. [Springer Link (paywall)] [Authors’ copy (contains minor corrections)] [Full-sized vector image of my recreation of the World3 diagram] [BibTeX]

Excerpt: To the two traditional modes of doing science, in vivo (observation) and in vitro (experimentation), has been added “in silico”: computer simulation. It has become routine in the natural sciences, as well as in systems planning and business process management (Baines et al. 2004; Laguna and Marklund 2013; Paul et al. 1999) to recreate the dynamics of physical systems in computer code. The code is then executed to give outputs that describe how a system evolves from given inputs. Simulation models of simple physical processes, like boiling water or materials rupturing, give precise outputs that reliably match the outcomes of the actual physical system. However, as Winsberg (2010, p. 71) argues, scientists who rely on simulations do so because they “assume as background knowledge that we already know a great deal about how to build good models of the very features of the target system that we are interested in learning about.” This is not the case with social simulation. It is often done precisely to try and discover the important features of the target system when those features are unknown or uncertain. Social simulation is a kind of computer-aided thought experiment (Di Paolo et al. 2000) and as such, it is most appropriate to use as a “method of theory development” (Gilbert and Troitzsch 2005). Unlike in the natural sciences, uncertainty and the impossibility of verification are the rule rather than the exception, and so it is rare to find attempts to use social simulation for prediction and forecasting (Feder 2002).

Momin M. Malik and Jürgen Pfeffer. (2016). Identifying platform effects in social media data. In Proceedings of the Tenth International AAAI Conference on Web and Social Media (ICWSM-16), pages 241–249. May 18-20, 2016, Cologne, Germany. [Unofficial corrected post-publication version] [ICWSM link] [ICWSM slides] [IC2S2 slides] [Sunbelt slides] [BibTeX]

Abstract: Even when external researchers have access to social media data, they are not privy to decisions that went into platform design—including the measurement and testing that goes into deploying new platform features, such as recommender systems, that seek to shape user behavior towards desirable ends. Finding ways to identify platform effects is thus important both for generalizing findings and for understanding the nature of platform usage. One approach is to find temporal data covering the introduction of a new feature; observing differences in behavior before and after allows us to estimate the effect of the change. We investigate platform effects using two such datasets, the Netflix Prize dataset and the Facebook New Orleans data, in which we observe seeming discontinuities in user behavior that we know or suspect are the result of a change in platform design. For the Netflix Prize, we estimate user ratings changing by an average of about 3% after the change, and in Facebook New Orleans, we find that the introduction of the ‘People You May Know’ feature locally nearly doubled the average number of edges added daily, and increased by 63% the average proportion of triangles created by each new edge. Our work empirically verifies several previously expressed theoretical concerns, and gives insight into the magnitude and variety of platform effects.
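As a toy illustration of the before/after logic described in this abstract (the numbers, variable names, and launch date below are invented for the sketch; this is not the paper's code or data), a platform effect can be estimated as a difference in daily means around a known feature-launch date:

```python
# Toy sketch: estimate a platform effect as the before/after difference
# in mean daily edges added, around a hypothetical feature launch.
from statistics import mean

# Invented daily counts of new edges; the feature launches at day 10.
daily_edges = [5, 6, 5, 7, 6, 5, 6, 7, 5, 6,          # before launch
               11, 12, 10, 13, 11, 12, 10, 11, 13, 12]  # after launch
launch_day = 10

before = daily_edges[:launch_day]
after = daily_edges[launch_day:]

effect = mean(after) - mean(before)
relative_change = effect / mean(before)

print(f"mean before launch: {mean(before):.2f}")
print(f"mean after launch:  {mean(after):.2f}")
print(f"relative change:    {relative_change:.0%}")
```

With these made-up numbers the mean roughly doubles after the launch, mirroring the kind of local discontinuity the paper looks for; the actual analysis is of course more careful about trends and confounds than a bare difference in means.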

Momin M. Malik and Jürgen Pfeffer. (2016). A macroscopic analysis of news in Twitter. Digital Journalism, 4(8), 955-979. doi:10.1080/21670811.2015.1133249. [Taylor & Francis link (Paywall for 1 year)] [Preprint] [BibTeX]

Abstract: Previous literature has considered the relevance of Twitter to journalism, for example as a tool for reporters to collect information and for organizations to disseminate news to the public. We consider the reciprocal perspective, carrying out a survey of news media-related content within Twitter. Using a random sample of 1.8 billion tweets over four months in 2014, we look at the distribution of activity across news media and the relative dominance of certain news organizations in terms of relative share of content, the Twitter behavior of news media, the hashtags used in news content versus Twitter as a whole, and the proportion of Twitter activity that is news media-related. We find a small but consistent proportion of Twitter is news media-related (0.8 percent by volume); that news media-related tweets focus on a different set of hashtags than Twitter as a whole, with some hashtags, such as those of countries in conflict (Arab Spring countries, Ukraine), having over 15 percent of their tweets be news media-related; and that news organizations’ accounts, across all major organizations, largely use Twitter as a professionalized, one-way communication medium to promote their own reporting. Using Latent Dirichlet Allocation topic modeling, we also examine how the proportion of news content varies across topics within 100,000 #Egypt tweets, finding that the relative proportion of news media-related tweets varies vastly across different subtopics. Over-time analysis reveals that news media were among the earliest adopters of certain #Egypt subtopics, providing a necessary (although not sufficient) condition for influence.
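To illustrate the per-topic analysis this abstract describes (with invented topic labels and flags, not the paper's data; the topic assignments would come from a model such as LDA), the share of news media-related tweets within each subtopic can be tallied like so:

```python
# Toy sketch: given a topic label per tweet (e.g., from LDA) and a flag
# for whether the tweet is news media-related, compute the per-topic
# proportion of news-related content. All data below are invented.
from collections import defaultdict

# (topic_label, is_news_media_related) pairs for hypothetical #Egypt tweets.
tweets = [
    ("protests", True), ("protests", True), ("protests", False),
    ("football", False), ("football", False), ("football", True),
    ("tourism", False), ("tourism", False), ("tourism", False),
]

counts = defaultdict(lambda: [0, 0])  # topic -> [news-related, total]
for topic, is_news in tweets:
    counts[topic][1] += 1
    if is_news:
        counts[topic][0] += 1

for topic, (news, total) in sorted(counts.items()):
    print(f"{topic}: {news}/{total} = {news / total:.0%} news media-related")
```

Even in this tiny made-up sample the proportion varies widely across subtopics, which is the pattern the paper reports at scale.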

Hemank Lamba, Momin M. Malik, and Jürgen Pfeffer. (2015). A tempest in a teacup? Analyzing firestorms on Twitter. In Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2015 (ASONAM 2015), pages 17–24. August 25-28, 2015, Paris, France. doi:10.1145/2808797.2808828. Best student paper award. [ACM link] [BibTeX]

Abstract: ‘Firestorms,’ sudden bursts of negative attention in cases of controversy and outrage, are seemingly widespread on Twitter and are an increasing source of fascination and anxiety in the corporate, governmental, and public spheres. Using media mentions, we collect 80 candidate events from January 2011 to September 2014 that we would term ‘firestorms.’ Using data from the Twitter decahose (or gardenhose), a 10% random sample of all tweets, we describe the size and longevity of these firestorms. We take two firestorm exemplars, #myNYPD and #CancelColbert, as case studies to describe more fully. Then, taking the 20 firestorms with the most tweets, we look at the change in mention networks of participants over the course of the firestorm as one method of testing for possible impacts of firestorms. We find that the mention networks before and after the firestorms are more similar to each other than to those of the firestorms, suggesting that firestorms neither emerge from existing networks, nor do they result in lasting changes to social structure. To verify this, we randomly sample users and generate mention networks for baseline comparison, and find that the firestorms are not associated with a greater than random amount of change in mention networks.

Momin M. Malik, Hemank Lamba, Constantine Nakos, and Jürgen Pfeffer. (2015). Population bias in geotagged tweets. In Papers from the 2015 ICWSM Workshop on Standards and Practices in Large-Scale Social Media Research (ICWSM-15 SPSM), pages 18–27. May 26, 2015, Oxford, UK. [AAAI link] [Slides] [BibTeX]

Abstract: Geotagged tweets are an exciting and increasingly popular data source, but like all social media data, they potentially have biases in who is represented. Motivated by this, we investigate the question, ‘are users of geotagged tweets randomly distributed over the US population?’ We link approximately 144 million geotagged tweets within the US, representing 2.6 million unique users, to high-resolution Census population data and carry out a statistical test by which we answer this question strongly in the negative. We utilize spatial models and integrate further Census data to investigate the factors associated with this nonrandom distribution. We find that, controlling for other factors, population has no effect on the number of geotag users, and instead it is predicted by a number of factors including higher median income, being in an urban area, being further east or on a coast, having more young people, and having high Asian, Black or Hispanic/Latino populations.
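As a rough sketch of the kind of test this abstract refers to (regions and counts below are invented; the paper uses high-resolution Census data and spatial models, not this toy calculation), one can compute a chi-square goodness-of-fit statistic comparing observed geotag users per region against the counts expected if users were distributed proportionally to population:

```python
# Toy sketch: chi-square goodness-of-fit statistic for whether geotag
# users are distributed proportionally to population. Invented data.
populations = {"A": 1_000_000, "B": 500_000, "C": 250_000, "D": 250_000}
observed =    {"A": 400,       "B": 150,     "C": 300,     "D": 150}

total_pop = sum(populations.values())
total_users = sum(observed.values())

chi_sq = 0.0
for region, pop in populations.items():
    expected = total_users * pop / total_pop  # count if proportional to population
    chi_sq += (observed[region] - expected) ** 2 / expected

# Compare against a chi-square critical value with len(populations) - 1
# degrees of freedom (7.81 at alpha = 0.05 for df = 3); a much larger
# statistic rejects the hypothesis of proportionality.
print(f"chi-square statistic: {chi_sq:.1f}")
```

In practice one would get a p-value from a chi-square distribution (e.g., via scipy.stats.chisquare) rather than eyeballing a critical value, but the statistic itself is just this sum of squared, scaled deviations from proportionality.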


Talks

A social scientist’s guide to network statistics. Guest lecture in 70/73-449: Social, Economic and Information Networks, Fall 2016 (Instructor: Dr. Katharine Anderson). Undergraduate Economics, Tepper School of Business, Carnegie Mellon University. November 10, 2016, Pittsburgh, Pennsylvania. [Slides]

Platform effects in social media networks. 2nd Annual International Conference on Computational Social Science. Social Networks 1. June 24, 2016, Evanston, Illinois. [Slides]

Identifying platform effects in social media data. Tenth International AAAI Conference on Web and Social Media (ICWSM-16). Session I: Biases and Inequalities. May 18, 2016, Cologne, Germany. [Slides]

Social media data and computational models of mobility: A review for demography. 2016 ICWSM Workshop on Social Media and Demographic Research (ICWSM-16 SMDR). May 17, 2016, Cologne, Germany. [Slides]

Platform effects in social media networks. XXXVI Sunbelt Conference of the International Network for Social Network Analysis. Social Media Networks: Challenges and Solutions (Sunday AM2). April 10, 2016, Newport Beach, California. [Slides]

A social scientist’s guide to network statistics (presented to statisticians). stat-network seminar, Department of Statistics, Carnegie Mellon University. March 25, 2016, Pittsburgh, Pennsylvania. [Slides not public, see these slides for the same content.]

Ethical and policy issues in predictive modeling. Guest lecture in 08-200/08-630/19-211: Ethics and Policy Issues in Computing, Spring 2016 (Instructor: Professor James Herbsleb). Institute for Software Research, School of Computer Science, Carnegie Mellon University. March 1, 2016, Pittsburgh, Pennsylvania. [Slides]

Population bias in geotagged tweets. 2015 ICWSM Workshop on Standards and Practices in Social Media Research (ICWSM-15 SPSM). May 26, 2015, Oxford, UK. [Slides]

Inferring social networks from sensor data. XXXIV Sunbelt Conference of the International Network for Social Network Analysis. Network Data Collection (Saturday AM2). February 22, 2014, St Pete Beach, Florida. [Slides]

Other research

I have worked on projects outside of my main focus, contributing data analysis and/or theory.

Gabriel Ferreira, Momin Malik, Christian Kästner, Jürgen Pfeffer, and Sven Apel. (2016). Do #ifdefs influence the occurrence of vulnerabilities? An empirical study of the Linux Kernel. In Proceedings of the 20th International Systems and Software Product Line Conference (SPLC ’16), pages 65-73. September 19-23, 2016, Beijing, China. doi:10.1145/2934466.2934467. Nominated for Best Paper Award. [ACM link] [arXiv preprint] [BibTeX]

Kathleen M. Carley, Momin Malik, Peter M. Landwehr, Jürgen Pfeffer, and Michael Kowalchuck. (2016). Crowd sourcing disaster management: The complex nature of Twitter usage in Padang Indonesia. Safety Science, 90, 48-61. doi:10.1016/j.ssci.2016.04.002. [ScienceDirect link (paywall)]

Previous works

Urs Gasser, Momin Malik, Sandra Cortesi, and Meredith Beaton. (2013, November 14). Mapping approaches to news literacy curriculum development: A navigation aid. Berkman Center Research Publication No. 2013-25. [SSRN link]

Momin Malik, Sandra Cortesi, and Urs Gasser. (2013, October 18). The challenges of defining ‘news literacy’. Berkman Center Research Publication No. 2013-20. [SSRN link]

Momin M. Malik. (2013, June 24). The role of incumbency in field emergence: The case of Internet studies. Poster presented at the Science of Team Science (SciTS) Conference 2013, Northwestern University, Evanston, IL, June 24-27, 2013. [PDF]
(Note that this is a poster version of my MSc thesis, adapted for the topic of SciTS. Also, I have since realized the error of a non-statistical approach to significance claims.)

Momin M. Malik. (2012, October). Networks of collaboration and field emergence in ‘Internet Studies’. Thesis submitted in partial fulfillment of the degree of MSc in Social Science of the Internet at the Oxford Internet Institute at the University of Oxford. Oxford Internet Institute, University of Oxford, Oxford, UK. [PDF]

Urs Gasser, Sandra Cortesi, Momin Malik, and Ashley Lee. (2012, February 16). Youth and digital media: From credibility to information quality. Berkman Center Research Publication No. 2012-1. [SSRN link]

Urs Gasser, Sandra Cortesi, Momin Malik, and Ashley Lee. (2010, August 30). Information quality, youth, and media: A research update. Youth Media Reporter. [Online]

Momin M. Malik. (2009, September). Survey of state initiatives for conservation of coastal habitats from sea-level rise. Rhode Island Coastal Resources Management Council. [PDF]

Momin M. Malik. (2008, December 8). Rediscovering Ramanujan. Thesis submitted in partial fulfillment for an honors degree in History and Science. The Department of the History of Science, Harvard University, Cambridge, MA. [PDF]

I may also be found in the acknowledgements of the following works:

Viktor Mayer-Schönberger and Kenneth Cukier. (2013). Big Data: A revolution that will transform how we live, work, and think. Boston and New York: Eamon Dolan/Houghton Mifflin Harcourt. [Book website]

Mary Madden, Amanda Lenhart, Sandra Cortesi, Urs Gasser, Maeve Duggan, Aaron Smith, and Meredith Beaton. (2013, May 21). Teens, social media, and privacy. Pew Internet & American Life Project. [Report website]


I may be reached at gmail (my first name dot my last name).

This website is my primary online presence, but I maintain profiles elsewhere as well. [Google Scholar] [ORCID] [LinkedIn] [SSRN]