Measuring and Mitigating Biases in Vision and Language Models

Tianlu Wang
Deep learning models have achieved unprecedented success across many areas of computer vision and natural language processing research. Yet even as these techniques succeed widely, they have been criticized for encoding unwanted biases. For example, a human activity recognition model can overly correlate "man" with "coaching" and "woman" with "shopping", and a resume filtering system can recommend more male candidates for the position of "programmer" and more female candidates for...
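One simple way such over-correlations are quantified is a co-occurrence bias score: the fraction of a model's predictions for a given activity that involve a given gender. The sketch below is illustrative only; the `predictions` data and the `bias_score` helper are hypothetical, not the thesis's actual metric or dataset.

```python
from collections import Counter

# Hypothetical activity-recognition predictions: (activity, gender) pairs.
predictions = [
    ("coaching", "man"), ("coaching", "man"), ("coaching", "woman"),
    ("shopping", "woman"), ("shopping", "woman"), ("shopping", "man"),
]

def bias_score(pairs, activity, gender):
    """Fraction of predictions for `activity` whose agent is `gender`.

    A score far from 0.5 indicates that the model's output for this
    activity skews toward one gender.
    """
    counts = Counter(g for a, g in pairs if a == activity)
    total = sum(counts.values())
    return counts[gender] / total if total else 0.0

print(bias_score(predictions, "coaching", "man"))  # 2 of 3 coaching predictions are "man"
```

Comparing this score against the corresponding ratio in the training data is one common way to check whether a model amplifies, rather than merely reflects, dataset bias.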