
Can our search results be fairer? Can the systems that generate them be smarter? Are the recommendations we get from Google, Amazon or Netflix really the best they can be?
Those are some of the topics researchers investigate in the Information School’s InfoSeeking Lab, which works to make search and recommendation systems fairer, more diverse and more transparent.
The lab migrated from Rutgers University to the University of Washington with its director, Associate Professor Chirag Shah, when he joined the iSchool in 2019. It recently celebrated its 10th anniversary, marking a decade in which it has generated highly cited research, produced 14 Ph.D. graduates and garnered more than $4 million in funding.
When users search online, they don’t just deal with a neutral machine; they encounter recommendations that are produced by algorithms and personalized to appeal to them. Those systems contain baked-in biases, Shah said, such as promoting sensationalized stories because they generate more clicks than reliable sources of information. In one of the lab’s Responsible AI initiatives, its researchers are studying ways to address these biases and increase fairness and diversity among search results.
In a recent experiment, members of Shah’s research team tested whether they could improve the diversity of Google search results without harming user satisfaction. They replaced a few of the first-page results with items that would normally appear farther down the list, offering different perspectives. They found that users were just as satisfied with the altered results.
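The article does not detail the team’s implementation, but the basic idea can be sketched as a small re-ranking step: keep the top results, then swap a few lower first-page slots for items from farther down the list that add a viewpoint not yet represented. The `perspective` field and function name below are hypothetical placeholders, not the lab’s actual code.

```python
def diversify_first_page(results, page_size=10, swaps=3):
    """Replace up to `swaps` of the lowest first-page slots with lower-ranked
    items that add a perspective not yet shown on page one.

    `results` is assumed to be a relevance-ordered list of dicts, each carrying
    a hypothetical "perspective" label attached by some upstream classifier.
    """
    page = list(results[:page_size])
    seen = {r["perspective"] for r in page}

    slot = page_size - 1              # start replacing from the bottom of page one
    for candidate in results[page_size:]:
        if swaps == 0:
            break
        if candidate["perspective"] in seen:
            continue                  # this viewpoint is already represented
        page[slot] = candidate
        seen.add(candidate["perspective"])
        slot -= 1
        swaps -= 1

    return page
```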
“If we can sneak in things with different views or information and people don’t notice, then we have improved the diversity,” Shah said. “We have given them more perspectives without them feeling like we have toyed with their satisfaction.”
But Shah wondered if there might be a dark side. If users couldn’t distinguish between two sets of search results, would the same principle apply to misinformation? To find out, researchers in the lab ran a follow-up experiment in which they sprinkled debunked COVID-19-related misinformation into search results. They again found that people couldn’t tell the difference.
The two experiments illustrated the challenge of diversifying search results. A well-intentioned effort to offer users a broader range of sources could inadvertently bring misinformation to the fore, and if that misinformation drew a lot of clicks, it would teach the algorithm to favor it on subsequent searches. Meanwhile, suppressing less-trusted information can open up search engines to charges of bias.
“Diversity of information is easy; doing it the right way is hard,” Shah said. “There’s a whole community working on bias in search and a whole community working on misinformation. This is the first time we’ve shown how one affects the other.”
Outside the ‘black box’

Another InfoSeeking Lab initiative aims to increase the transparency of search results and give users more information about why they’re seeing what’s being recommended to them. Most users know that when they perform a keyword search in Google, a hidden algorithm spits out customized results. But Google doesn’t disclose the data it uses to tailor those recommendations or tell users anything about what produced that set of links.
“The idea is to create explanations because a lot of recommender systems are just a black box,” Shah said. “If you want to earn users’ trust, if you want to create a fair system, you need to be able to explain your decision process.”
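As a rough illustration of what such an explanation might look like, the sketch below attaches a plain-language reason to a recommendation based on the signals that produced it. The signal names and profile fields are invented for this example, not taken from any real recommender.

```python
def explain(recommendation, user_profile):
    """Build a human-readable explanation from whichever (hypothetical)
    signals contributed to this recommendation."""
    reasons = []
    if recommendation["topic"] in user_profile["frequent_topics"]:
        reasons.append(f"you often read about {recommendation['topic']}")
    if recommendation["source"] in user_profile["followed_sources"]:
        reasons.append(f"you follow {recommendation['source']}")
    if not reasons:
        reasons.append("it is popular with users similar to you")
    return "Recommended because " + " and ".join(reasons) + "."

# Example usage with made-up data:
profile = {"frequent_topics": {"climate"}, "followed_sources": {"BBC"}}
item = {"topic": "climate", "source": "BBC", "title": "Arctic ice report"}
print(explain(item, profile))
# Recommended because you often read about climate and you follow BBC.
```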
As technology makes systems smarter, they can become more complex at the expense of transparency and fairness. Shah hopes to show that greater transparency can be good for business, and he wants to educate users so they’ll demand to know more about the reasons behind the recommendations.
“When you’re trying to make systems smart, it often means making them more opaque and less fair,” Shah said. “My hope is we get to a place where you’re building AI systems that you want to make smart and you also want to make them fair, and these are not competing goals.”
Postdoctoral scholar Yunhe Feng is applying that lens of fairness to research on Google Scholar, a widely used search engine for scholarly literature that relies on keyword searches.
“Google Scholar just picks up the most relevant documents and returns them to users, but you can’t say they’re the fairest,” said Feng, who joined the iSchool this summer after completing his Ph.D. at the University of Tennessee.
Feng is developing a technique to rerank the results after accounting for more information about the research, such as its author’s gender or national origin. By changing the results to distribute them evenly by these factors, his algorithm raises the profile of research that would otherwise be buried deeper in search results.
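The article does not spell out Feng’s algorithm, but one simple way to picture this kind of re-ranking is to interleave relevance-ordered results round-robin across groups defined by an attribute such as author origin, so that no single group monopolizes the top of the list. The attribute name below is a placeholder, and the function is an illustrative sketch rather than his method.

```python
from collections import defaultdict
from itertools import zip_longest

def rerank_evenly(results, attribute="author_origin"):
    """Interleave relevance-ordered results round-robin across groups so each
    group gets roughly even exposure near the top of the list."""
    groups = defaultdict(list)
    for r in results:                 # relevance order is preserved within each group
        groups[r[attribute]].append(r)

    reranked = []
    for row in zip_longest(*groups.values()):
        reranked.extend(item for item in row if item is not None)
    return reranked
```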
The concept of fairness is at the heart of the InfoSeeking Lab’s work, but as Shah noted, fairness is a social concept and a moving target. In the 19th century, for example, it was generally considered fair that American women didn’t have the right to vote, and our system of taxation is constantly changing based on what elected leaders consider to be fair at the time. As a result, the lab works to create tools that will enable fairness and let users define it.
“We don’t want to decide what’s fair, but we want to give you the ingredients to say, ‘In this time and context, fairness means this,’” Shah said.
Smarter searching

Along with making search and recommendation systems fairer, researchers in the lab also want to make them smarter. Ph.D. candidate Shawon Sarkar said that as sophisticated as search engines have become, they often don’t understand the underlying task when people search. If you type the keyword “Rome,” for example, Google doesn’t know if you want a history lesson or a vacation plan.
“From people’s interactions with the systems and search behaviors, I try to identify the actual task that people are working on,” Sarkar said. “If I can identify that, then I will provide better solutions for them.”
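As a toy illustration of task inference of this kind, the sketch below guesses the underlying task from a couple of made-up session signals; real task identification relies on much richer interaction data and learned models rather than hand-written rules like these.

```python
def infer_task(session):
    """Guess the searcher's underlying task from simple session signals.
    The features and task labels here are invented for illustration."""
    queries = " ".join(session["queries"]).lower()
    if any(w in queries for w in ("flights", "hotel", "itinerary", "things to do")):
        return "trip planning"
    if any(w in queries for w in ("history", "empire", "timeline")):
        return "background research"
    # Fall back on behavior: long dwell times suggest in-depth learning.
    if session["avg_dwell_seconds"] > 120:
        return "in-depth learning"
    return "quick lookup"

session = {"queries": ["rome", "rome colosseum history"], "avg_dwell_seconds": 150}
print(infer_task(session))            # -> "background research"
```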
Sarkar became involved in the lab as a master’s student at Rutgers and followed Shah to the UW iSchool to complete her Ph.D. She praised Shah’s mentorship and said she draws encouragement from the lab’s collaborative atmosphere.
“We’re like a family in the lab, both the existing students and alumni. We still keep in touch with all the alumni. They’re very good connections in academia and industry, but also in our personal lives.”
And like others in the lab, she has her eye on the greater good.
“I think my work will help people find accurate information more easily and then make more informed decisions in life,” she said. “That’s the main goal. It’s a pretty big goal!”
Learn more about the InfoSeeking Lab on its website.