1/24/2024

Bias 2 & models

The first thing computer scientists do when they create a deep-learning model is decide what they actually want it to achieve. A credit card company, for example, might want to predict a customer's creditworthiness, but "creditworthiness" is a rather nebulous concept. In order to translate it into something that can be computed, the company must decide whether it wants to, say, maximize its profit margins or maximize the number of loans that get repaid. It could then define creditworthiness within the context of that goal. The problem is that "those decisions are made for various business reasons other than fairness or discrimination," explains Solon Barocas, an assistant professor at Cornell University who specializes in fairness in machine learning. If the algorithm discovered that giving out subprime loans was an effective way to maximize profit, it would end up engaging in predatory behavior even if that wasn't the company's intention.

Collecting the data. There are two main ways that bias shows up in training data: either the data you collect is unrepresentative of reality, or it reflects existing prejudices. The first case might occur, for example, if a deep-learning algorithm is fed more photos of light-skinned faces than dark-skinned faces. The resulting face recognition system would inevitably be worse at recognizing darker-skinned faces. The second case is precisely what happened when Amazon discovered that its internal recruiting tool was dismissing female candidates. Because it was trained on historical hiring decisions, which favored men over women, it learned to do the same.
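The framing step described above can be sketched in a few lines. This is a toy illustration with entirely hypothetical numbers, not a real lending model: the same two applicants are ranked under two different definitions of "creditworthiness", and the ranking flips.

```python
# Toy sketch of the framing step: the same applicants scored under two
# different definitions of "creditworthiness". All figures are
# hypothetical; a loan is modeled as lending one unit of principal,
# with half the principal recovered on default.

# applicant -> (interest rate charged, probability of repayment)
applicants = {
    "prime":    (0.05, 0.95),  # low rate, very likely to repay
    "subprime": (0.40, 0.70),  # high rate, much riskier
}

def profit_score(rate, p_repay):
    # Expected profit: interest earned if repaid,
    # half the principal lost otherwise.
    return p_repay * rate - (1 - p_repay) * 0.5

def repayment_score(rate, p_repay):
    # Objective defined purely as likelihood of repayment.
    return p_repay

# Under the profit objective the risky subprime borrower looks *more*
# creditworthy; under the repayment objective, less.
best_by_profit = max(applicants, key=lambda a: profit_score(*applicants[a]))
best_by_repayment = max(applicants, key=lambda a: repayment_score(*applicants[a]))
print(best_by_profit, best_by_repayment)
```

The point of the sketch is that nothing in the code is "biased" in the colloquial sense; the choice of objective alone decides whom the model favors.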
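The first failure mode in data collection, unrepresentative data, can also be simulated. The sketch below is a deliberately simplified stand-in (one-dimensional features instead of face images, a threshold instead of a deep network): a classifier fit to minimize overall training error on a sample dominated by group A ends up far less accurate on the under-represented group B.

```python
import random

random.seed(0)

def sample(mean, n):
    # Hypothetical 1-D feature for each group, standing in for image data.
    return [random.gauss(mean, 1.0) for _ in range(n)]

# Group A dominates the training set; group B is barely represented.
train = [(x, "A") for x in sample(0.0, 500)] + \
        [(x, "B") for x in sample(2.0, 5)]

def train_error(t):
    # Number of training examples misclassified by threshold t.
    return sum(("A" if x < t else "B") != y for x, y in train)

# Pick the threshold minimizing *overall* training error. Because almost
# every training example belongs to group A, errors on group B are cheap,
# so the threshold drifts into group B's territory.
threshold = min((t / 10 for t in range(-20, 41)), key=train_error)

test_a, test_b = sample(0.0, 2000), sample(2.0, 2000)
err_a = sum(x >= threshold for x in test_a) / len(test_a)
err_b = sum(x < threshold for x in test_b) / len(test_b)
print(threshold, err_a, err_b)
```

Running this shows a much higher error rate on group B, even though the algorithm never "knows" which group it is penalizing: the imbalance in the collected data does the work.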