Ethics & Society at Hugging Face
At Hugging Face, we are committed to operationalizing ethics at the cutting edge of machine learning. This page is dedicated to highlighting projects, both inside and outside Hugging Face, in order to encourage and support more ethical development and use of AI. We wish to foster ongoing conversations about ethics and values; this means that this page will evolve over time, and your feedback is invaluable. Please open an issue in the Community tab to share your thoughts!
We'll be announcing more events soon!
Follow these steps to join the discussion:
- Go to hf.co/join/discord to join the Discord server.
- Once you've registered, go to the #role-assignment channel.
- Select the "Open Science" role.
- Head over to #ethics-and-society to join the conversation 🥳
Features Collection
Check out our collection on Provenance, Watermarking, and Deepfake Detection, which is especially important to know about given the potential for malicious use of generative AI in elections.
What does ethical AI look like?
We analyzed the submissions on Hugging Face Spaces and put together a set of six high-level categories for describing ethical aspects of machine learning work. Visit each tab to learn more about each category and to see what Hugging Face and its community have been up to! Is there a Space that you'd like to see featured? Submit it here!
Among the many concerns that go into creating new models is a seemingly simple question: "Does it work?"
Rigorous projects pay special attention to examining failure cases, protecting privacy through security measures, and ensuring that potential users (technical and non-technical) are informed of the project's limitations.
Examples:
- Projects built with models that are well-documented with Model Cards.
- Tools that provide transparency into how a model was trained and how it behaves.
- Evaluations against cutting-edge benchmarks, with results reported against disaggregated sets.
- Demonstrations of models failing across gender, skin type, ethnicity, age, or other attributes.
- Techniques for mitigating issues like over-fitting and training data memorization.
- Techniques for detoxifying language models.
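One of the examples above, evaluating with results reported against disaggregated sets, can be sketched in a few lines of Python. This is an illustrative sketch only: the subgroup names and data are made up, and real disaggregated evaluation would use actual model predictions and demographic annotations.

```python
# Sketch of a disaggregated evaluation: rather than a single overall accuracy
# number, report accuracy per subgroup so that performance gaps stay visible.
# All subgroup names and records below are hypothetical.
from collections import defaultdict

def disaggregated_accuracy(records):
    """records: iterable of (subgroup, predicted_label, true_label) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for subgroup, pred, truth in records:
        totals[subgroup] += 1
        hits[subgroup] += int(pred == truth)
    return {group: hits[group] / totals[group] for group in totals}

# Toy data: a classifier that performs worse on one subgroup.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1),
]
print(disaggregated_accuracy(records))
# {'group_a': 1.0, 'group_b': 0.5} -- an aggregate accuracy of 0.75 would hide the gap
```

The aggregate score here (0.75) looks acceptable, but the per-group breakdown reveals that all of the errors fall on one subgroup, which is exactly the kind of failure mode disaggregated reporting is meant to surface.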
Hugging Face News 📰
Ethics and Society Newsletter #4: Bias in Text-to-Image Models
Hugging Face's Open LLM Leaderboard
Ethics and Society Newsletter #3: Ethical Openness at Hugging Face
MIT: These new tools let you see for yourself how biased AI image models are
WIRED: Inside the Suspicion Machine
AI chatbots are coming to search engines – can you trust the results?
Model Cards: Introducing new documentation tools
🤗 Ethics & Society Newsletter #2: Let's talk about bias!
Open LLM Leaderboard
A Watermark for Large Language Models
Roots Search Tool
Diffusion Bias Explorer
Disaggregators
Detoxified Language Models