Filippo Menczer is an American and Italian academic. He is a University Distinguished Professor and the Luddy Professor of Informatics and Computer Science at the Luddy School of Informatics, Computing, and Engineering, Indiana University. Menczer is the Director of the Observatory on Social Media,[1] a research center where data scientists and journalists study the role of media and technology in society and build tools to analyze and counter disinformation and manipulation on social media. Menczer holds courtesy appointments in Cognitive Science and Physics, is a founding member and advisory council member of the IU Network Science Institute,[2] a former director of the Center for Complex Networks and Systems Research,[3] a senior research fellow of the Kinsey Institute, a fellow of the Center for Computer-Mediated Communication,[4] and a former fellow of the Institute for Scientific Interchange in Turin, Italy. In 2020 he was named a Fellow of the ACM.
Menczer's research focuses on Web science, social networks, social media, social computation, Web mining, data science, distributed and intelligent Web applications, and modeling of complex information networks. He introduced the idea of topical and adaptive Web crawlers, a specialized and intelligent type of Web crawler.[10][11]
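The idea behind a topical crawler can be illustrated with a minimal best-first sketch (illustrative only, not Menczer's published algorithms; the toy page graph and the `score` heuristic are invented for this example): frontier URLs are prioritized by an estimate of their topical relevance, so the crawl concentrates on on-topic regions of the Web rather than visiting pages in discovery order.

```python
from heapq import heappush, heappop

# Toy link graph standing in for the live Web (hypothetical data):
# url -> (page text, outgoing links)
PAGES = {
    "seed": ("network science overview", ["a", "b"]),
    "a": ("sports scores today", ["c"]),
    "b": ("complex networks research", ["c", "d"]),
    "c": ("cooking recipes", []),
    "d": ("social network analysis methods", []),
}

def score(text, topic_terms):
    # Crude relevance proxy: fraction of topic terms present in the text.
    words = set(text.split())
    return sum(t in words for t in topic_terms) / len(topic_terms)

def topical_crawl(seed, topic_terms, budget=4):
    # Best-first crawl: always expand the most topically relevant frontier URL.
    frontier = [(-1.0, seed)]  # max-heap via negated scores
    visited, results = set(), []
    while frontier and len(results) < budget:
        _, url = heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        text, links = PAGES[url]
        results.append((url, score(text, topic_terms)))
        for link in links:
            if link not in visited:
                heappush(frontier, (-score(PAGES[link][0], topic_terms), link))
    return results

crawled = topical_crawl("seed", ["network", "networks"])
```

With this toy graph the crawler visits the on-topic pages "b" and "d" before the off-topic "a", even though "a" was discovered first.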
Analysis by Menczer's team demonstrated the echo-chamber structure of information-diffusion networks on Twitter during the 2010 United States elections.[34] The team found that conservatives almost exclusively retweeted other conservatives while liberals retweeted other liberals. Ten years later, this work received the Test of Time Award at the 15th International AAAI Conference on Web and Social Media (ICWSM).[35] As these patterns of polarization and segregation persist,[36] Menczer's team has developed a model that shows how social influence and unfollowing accelerate the emergence of online echo chambers.[37]
Menczer and colleagues have advanced the understanding of information virality, and in particular the prediction of which memes will go viral based on the structure of early diffusion networks[38][39] and how competition for finite attention helps explain virality patterns.[40][41] In a 2018 paper in Nature Human Behaviour, Menczer and coauthors used a model to show that when agents in a social network share information under conditions of high information load and/or low attention, the correlation between quality and popularity of information in the system decreases.[42] An erroneous analysis in the paper suggested that this effect alone would be sufficient to explain why fake news is as likely to go viral as legitimate news on Facebook. When the authors discovered the error, they retracted the paper.[43]
Following influential publications on the detection of astroturfing[44][45][46][47][48] and social bots,[49][50] Menczer and his team have studied the complex interplay between cognitive, social, and algorithmic factors that contribute to the vulnerability of social media platforms and people to manipulation,[51][52][53][54] and focused on developing tools to counter such abuse.[55][56] Their bot detection tool, Botometer, was used to assess the prevalence of social bots[57][58] and their sharing activity.[59] Their tool to visualize the spread of low-credibility content, Hoaxy,[60][61][62][63] was used in conjunction with Botometer to reveal the key role played by social bots in spreading low-credibility content during the 2016 United States presidential election.[64][65][66][67][68] Menczer's team also studied perceptions of partisan political bots, finding that Republican users are more likely to confuse conservative bots with humans, whereas Democratic users are more likely to confuse conservative human users with bots.[69] Using bot probes on Twitter, Menczer and coauthors demonstrated a conservative political bias on the platform.[70]
As social media platforms have increased their countermeasures against malicious automated accounts, Menczer and coauthors have shown that coordinated campaigns by inauthentic accounts continue to threaten information integrity on social media, and developed a framework to detect these coordinated networks.[71] They also demonstrated new forms of social media manipulation by which bad actors can grow influence networks[72] and hide the high volumes of content with which they flood the network.[73]
Menczer and colleagues have shown that political audience diversity can be used as an indicator of news source reliability in algorithmic ranking.[74]
Textbook
The textbook A First Course in Network Science by Menczer, Fortunato, and Davis was published by Cambridge University Press in 2020.[75] The textbook has been translated into Japanese, Chinese, and Korean.
Projects
Observatory on Social Media (OSoMe, pronounced awesome):[76] A research center that studies and visualizes how information spreads online.[77] Includes data and tools to visualize Twitter trends and diffusion networks, detect social bots, and more.[78][79]
Botometer:[80] A machine learning tool to detect social bots on Twitter. Previously known as BotOrNot. Includes a public API, a social bot dataset repository, and the BotAmp tool[81] to assess the role of automated accounts in boosting a given topic.
Hoaxy:[82] An open-source search and network visualization tool to study the spread of narratives on Twitter. Includes a public API.
Fakey:[83] A mobile game for news literacy. Fakey mimics a social media news feed in which players must distinguish real news from fake news.
Kinsey Reporter:[89] A global mobile survey platform to share, explore, and visualize anonymous data about sex and sexual behaviors. Developed in collaboration with the Kinsey Institute. Reports are submitted via Web or smartphone, then available for visualization or offline analysis via a public API.[90][91]
^Menczer, F.; G. Pant; P. Srinivasan (2004). "Topical Web Crawlers: Evaluating Adaptive Algorithms". ACM Transactions on Internet Technology. 4 (4): 378–419. doi:10.1145/1031114.1031117. S2CID 5931711.
^Maguitman, Ana; Filippo Menczer; Heather Roinestad; Alessandro Vespignani (2005). "Algorithmic detection of semantic similarity". Proceedings of the 14th international conference on World Wide Web - WWW '05. pp. 107–116. doi:10.1145/1060745.1060765. ISBN 978-1595930460. S2CID 2011198.
^Markines, Benjamin; Ciro Cattuto; Filippo Menczer; Dominik Benz; Andreas Hotho; Gerd Stumme (2009). "Evaluating similarity measures for emergent semantics of social tagging". Proceedings of the 18th international conference on World wide web. pp. 641–650. CiteSeerX 10.1.1.183.2930. doi:10.1145/1526709.1526796. ISBN 9781605584874. S2CID 2708853.
^Menczer, F (2004). "Lexical and semantic clustering by web links". Journal of the American Society for Information Science and Technology. 55 (14): 1261–1269. CiteSeerX 10.1.1.72.1136. doi:10.1002/asi.20081.
^Conover, Michael; Jacob Ratkiewicz; Matthew Francisco; Bruno Gonçalves; Filippo Menczer; Alessandro Flammini (2011). "Political Polarization on Twitter". Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media.
^Ratkiewicz, Jacob; Michael Conover; Mark Meiss; Bruno Gonçalves; Snehal Patil; Alessandro Flammini; Filippo Menczer (2011). "Truthy". Proceedings of the 20th international conference companion on World wide web. pp. 249–252. arXiv:1011.3768. doi:10.1145/1963192.1963301. ISBN 9781450306379. S2CID 1958549.
^Ratkiewicz, Jacob; Michael Conover; Mark Meiss; Bruno Gonçalves; Alessandro Flammini; Filippo Menczer (2011). "Detecting and Tracking Political Abuse in Social Media". Proc. Fifth International AAAI Conference on Weblogs and Social Media.
^Wojcik, Stefan; Messing, Solomon; Smith, Aaron; Rainie, Lee; Hitlin, Paul (2018-04-09). "Bots in the Twittersphere". Pew Research Center. Retrieved 18 March 2019.
^Torres-Lugo, Christopher; Pote, Manita; Nwala, Alexander; Menczer, Filippo (2022). "Manipulating Twitter through Deletions". Proc. International AAAI Conference on Web and Social Media (ICWSM). AAAI. pp. 1029–1039. arXiv:2203.13893. doi:10.1609/icwsm.v16i1.19355.