To heck with conventionality. Sometimes, you have to change direction and go bold. Sometimes, despite years of complex-systems study and academic obscurity, the rainbow beckons, you follow and real-world gold results.
And if it gets someone like Elon Musk’s attention, generates national acclaim for the Luddy School of Informatics, Computing and Engineering, targets “bad actors” using social media to manipulate and distort, aids understanding of Twitter trends during elections and more, all the better.
So here is Kai-Cheng “Kevin” Yang, standing in the shadow of Luddy Hall on a hot, breezeless August afternoon, contemplating Observatory on Social Media research projects gone mainstream.
“It’s not like a typical academic setting,” the Ph.D. candidate in Informatics says. “It’s more like we’re building a product. That’s the nature of the thing.”
When you offer intriguing names such as Botometer, BotAmp and Hoaxy for research tools that dig deep into social media trends for the common good, well, build it and they will come.
And they have.
Yang’s advisor, Filippo Menczer, director of the Observatory on Social Media and Luddy Distinguished Professor of Informatics and Computer Science, studies the social media potential for good and bad; how harmful and dangerous it can be; how “entities” can create vulnerabilities to threaten democracies and health; and how to counter those vulnerabilities.
OSoMe (pronounced awesome) is a joint project of the Center for Complex Networks and Systems Research (CNetS) at the Luddy School, the Media School, and the Network Science Institute at Indiana University. OSoMe unites data scientists and journalists in studying the role of media and technology in society, and building tools to analyze and counter disinformation and manipulation on social media.
OSoMe has built counter-measure tools crucial to recognizing threats. It’s committed to keeping the tools running and relevant. If that means extra work, roll up your sleeves and get to it.
As the self-described “lead developer and maintainer” of the Botometer and BotAmp tools, Yang does work hard, building on the work of those before him.
“To make it happen,” he says, “you have to spend a lot of time and energy on it, but I believe it’s worth it.”
Yang’s research centers on “detecting social bots on Twitter.” It analyzes social media claims to “identify inauthentic actors and adverse behaviors, and study their implications.” In other words, things on social media aren’t always what they seem, and the question is whether that changes people’s thoughts and behaviors.
Menczer says it’s about understanding why a message or topic is “spreading virally across the network.” Is there a “super-spreader” involved? Are you communicating with a real person or are you being manipulated by a bot?
Creating tools to analyze this involves artificial intelligence, machine learning and algorithms. It takes teamwork from experts in psychology, sociology, economics, physics, computer science and more.
“An entity can create false identities and use them in a coordinated way to manipulate opinions,” Menczer says. “It affects our interactions, our decisions and the leaders we elect.”
But if you know it’s a false identity bent on manipulation, well, knowledge is power.
Power comes from these OSoMe tools:
Botometer is designed to analyze Twitter accounts to see if they’re likely human or a bot.
“Botometer can only check one account at a time,” Yang says. “That’s useful if you want to see if it’s a bot or not.”
BotAmp explores multiple accounts. It searches Twitter to gather all the tweets mentioning a certain hashtag or topic to see if they are more likely human or bot generated.
A single person, for instance, can control thousands of bot accounts and potentially distort and inflate trend interest.
“Sometimes people see this trending topic or hashtag and wonder if many accounts are talking about it,” Yang says. “Are real people behind it? We try to make that analysis easier.”
Botometer uses a 0-to-5 scoring system. The lower the number, the more likely the account is human; a higher number suggests bot activity.
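The scoring described above can be sketched as a simple threshold rule. This is an illustration only: the handles, scores, and the 2.5 cutoff below are made up, and this is not the actual Botometer API.

```python
# Illustrative sketch of reading a Botometer-style 0-to-5 score
# against a chosen cutoff. All handles, scores, and the default
# threshold here are hypothetical, for explanation only.
def label_account(score, threshold=2.5):
    """Label an account from its 0-to-5 score.

    Lower scores suggest a human; higher scores suggest automation.
    """
    return "likely bot" if score >= threshold else "likely human"

# Made-up example scores for three fictional accounts.
scores = {"@alice": 0.9, "@newsfeed_relay": 4.3, "@bob": 1.7}
for handle, score in scores.items():
    print(handle, "->", label_account(score))
```

The single cutoff is the whole decision: any account at or above it is flagged, anything below passes as human.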
“We believe Botometer works well,” Yang says. “It’s useful to many people, from normal social media users to journalists doing an investigation to researchers.”
Making this information accessible to the public “is fun and important and has a real-world impact,” Yang says.
That impact could include the upcoming midterm elections, where the tools could aid in understanding social media trends and discerning truth amid misinformation noise.
“That’s why we make the tools available,” says Yang. “No matter what events are happening, we hope people use the tools to get some insight as to what’s going on.”
Because current tools might not be enough, plans are underway for a new initiative specifically for the midterm elections.
“We might have some new tools specifically designed for this election so we can answer questions people are really interested in.”
The foundation is already in place. One tool, Hoaxy, which visualizes the spread of claims and fact checking, is up and running.
“We have a lot of experience,” Yang says. “This is actually the right time to start it.”
Producing these tools is not a one-person operation. More than a dozen Luddy students, research scientists and faculty contributed, with several playing major roles. “It’s a group effort,” Yang says.
Keeping it going takes significant time and work.
“For the tools to be relevant, you have to maintain them and upgrade every once in a while,” he adds.
Yang handles that while keeping up with his research and other responsibilities, doing it at an impressive level that doesn’t surprise Menczer.
“He’s one of the best Ph.D. students I’ve ever been fortunate to advise. He’s a hard worker; he’s smart; and he takes a lot of initiative.
“It’s very difficult to pull all the data together and figure out what the connections are. If anybody can pull it off, it’s him.
“He has so many publications already, more than many young faculty. He’s an incredible asset for our research center. I’m super proud of him.”
Attention soared recently when Musk, the billionaire founder of Tesla, SpaceX, and more, used Botometer to analyze Twitter firehose data (an unfiltered stream of every public Tweet as it’s posted) the first week in July. It was part of his court filing in a suit with Twitter over his attempt to buy the company.
Musk said it “shows that, during that time frame, false or spam accounts accounted for 33 percent of visible accounts.”
Twitter officials disagreed, saying it wasn’t nearly that high.
The result: multiple media outlets (including the BBC and CNN) contacted Yang for clarity.
“(Musk and Twitter) are having an argument over what percentage of Twitter accounts are false or spam,” Yang says.
Yang says Botometer could be used to estimate how many accounts are automated, but a key is the threshold number (such as a 2 or a 3 in that 0-to-5 scoring system) used to determine what’s human and what’s a bot. If the threshold is not selected in a sound way, the estimate can be off.
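Yang’s point about thresholds can be shown with a small, hypothetical example: the same set of account scores yields very different bot estimates depending on where the human/bot cutoff is placed. The scores below are invented for illustration.

```python
# Illustration (with made-up scores) of why the threshold choice
# matters: the estimated bot fraction over the same accounts shifts
# as the human/bot cutoff moves within the 0-to-5 range.
def bot_fraction(scores, threshold):
    """Fraction of accounts scoring at or above the bot threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

# Hypothetical 0-to-5 scores for eight accounts.
scores = [0.4, 0.9, 1.7, 2.2, 2.8, 3.5, 4.1, 4.6]
for threshold in (2, 3, 4):
    print(threshold, "->", bot_fraction(scores, threshold))
# 2 -> 0.625, 3 -> 0.375, 4 -> 0.25
```

Moving the cutoff from 2 to 4 here cuts the bot estimate by more than half, which is why an unsound threshold can throw off the headline percentage.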
“They didn’t contact me about using our tool,” Yang says. “It’s publicly available, so anyone can use it. I am not aware of Elon Musk’s team’s work.”
Yang is aware of the interest spike. Before Musk, the Botometer website was getting about 500 visits a day worldwide. That jumped to as high as 3,000 before dropping back to normal levels.
“People saw it on the news and tried it,” Yang says, “but it’s not really what they do daily. We want to let people know it’s there if they need it.”
Yang says for privacy reasons, they don’t keep track of who uses the tools unless users mention it on Twitter.
“If a journalist uses our tool,” Yang says, “they will mention it, and that’s how we know. If researchers use it, they are supposed to cite it and we’ll know.”
The clock ticks toward 6 p.m. on a recent Monday. Yang is in a conference room at the Luddy Center for Artificial Intelligence, running a Twitter account through Botometer. At 0.9, the user is likely a human.
Yang didn’t expect social media research to dominate his Ph.D. quest.
He studied physics at China’s Lanzhou University, using mathematical models to analyze complex systems. He came to IU five years ago because of CNetS, he says.
Then everything changed.
Luddy’s diverse opportunities and a strong interest in social media were too intriguing to ignore.
“I thought it would be fun to study social media. I started to work with Menczer. I was involved in several projects, and Botometer was one of them.
“I’m drawn to social media because it’s closely related to our daily life, and the results can help others in a way that mathematical modeling never could.”
Helping others is a constant theme for Yang, who is uncertain what’s next after his Ph.D. is complete.
“Sometimes with research, you think you have a really good idea, and other people might not think so,” he says. “It happens all the time. We’re glad some people find our tools useful. That’s really rewarding.
“If you can have a broader audience, why not?”