Cornell University reported Aug. 17 that a team of researchers led by computer science doctoral student Ashudeep Singh has introduced a tool that improves the fairness of online rankings without sacrificing their usefulness or relevance.
“If you could examine all your choices equally and then decide what to pick, that may be considered ideal. But since we can’t do that, rankings become a crucial interface to navigate these choices,” Singh said in a university report. Singh is co-first author of “Controlling Fairness and Bias in Dynamic Learning-to-Rank,” which won the Best Paper Award at the Association for Computing Machinery SIGIR Conference on Research and Development in Information Retrieval, held virtually July 25-30.
“For example, many YouTubers will post videos of the same recipe, but some of them get seen way more than others, even though they might be very similar,” the Indian American added. “And this happens because of the way search results are presented to us. We generally go down the ranking linearly and our attention drops off fast.”
The researchers’ method, called FairCo, gives roughly equal exposure to equally relevant choices and avoids preferential treatment for items that are already high on the list, the report said.
This can correct the unfairness inherent in existing algorithms, which can exacerbate inequality and political polarization, and curtail personal choice, it said.
Online ranking systems were originally based on library science from the 1960s and ’70s, which sought to make it easier for users to find the books they wanted. But this approach can be unfair in two-sided markets, in which one entity wants to find something and another wants to be found, according to the university report.
Algorithms that prioritize more popular items can be unfair because the higher a choice appears in the list, the more likely users are to click on and react to it. This creates a “rich get richer” phenomenon in which one choice becomes increasingly popular while other choices go unseen, the report noted.
Algorithms also seek out the items most relevant to searchers, but because the vast majority of people choose one of the first few items in a list, small differences in relevance can lead to huge discrepancies in exposure.
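The exposure drop-off described above can be illustrated with a toy position-bias model. This is a hypothetical sketch, not the FairCo algorithm itself: it assumes examination probability decays as 1/log2(rank + 1), a common discount in the learning-to-rank literature, and the item names and relevance scores are invented for illustration.

```python
import math

def exposure(rank):
    # Position-bias model: the chance a user examines an item decays
    # with its rank. The 1/log2(rank + 1) discount is a standard
    # assumption, not the specific model from the FairCo paper.
    return 1.0 / math.log2(rank + 1)

# Two recipe videos with nearly identical relevance (hypothetical values).
relevances = {"video_a": 0.80, "video_b": 0.79}

# A purely relevance-sorted ranking always puts video_a first ...
ranked = sorted(relevances, key=relevances.get, reverse=True)
exposures = {item: exposure(r) for r, item in enumerate(ranked, start=1)}

# ... so video_a gets about 58% more exposure (1.0 vs. ~0.63)
# despite being only about 1% more relevant.
print(exposures)

# A fairness-of-exposure approach would instead allocate exposure in
# proportion to merit, e.g. by randomizing which video ranks first.
total_rel = sum(relevances.values())
fair_share = {k: v / total_rel for k, v in relevances.items()}
print(fair_share)  # roughly a 50.3% / 49.7% split
```

The contrast makes the article's point concrete: deterministic relevance sorting amplifies a tiny merit gap into a large exposure gap, while a merit-proportional allocation keeps the two nearly even.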
The research was partly supported by the National Science Foundation and by Workday.