Queer in AI is hosting a workshop with in-person and online socials at the International Conference on Machine Learning (ICML) 2022, held July 17 - 23, 2022 at the Baltimore Convention Center. We welcome people of all identities (including allies)! For more details regarding the socials, see the Socials section below!

How to join

To join the workshop, you need to register for the conference on the ICML website.

If that is not an option for you and you would otherwise struggle to attend the workshop, please get in touch with us at queerinai@gmail.com!

Our workshop date is July 23rd, 2022. We aim to provide hybrid events with both in-person and virtual offerings.

You do not need to register for the conference to join our socials: our social events will be free and open to all. The sign-up form can be found here.

Schedule (July 23rd)

Room: 337 - 338 (for virtual participants, use this Zoom link)

  • Opening remarks 9:15 am - 9:30 am

  • Talk - Sarthak Arora & Satyam Yadav 9:30 am - 10:00 am

  • Break 10:00 am - 10:30 am

  • Talk - Jay Cunningham 10:30 am - 11:00 am

  • Sponsor events 11:00 am - 12:00 pm

  • Lunch 12:00 pm - 1:30 pm

  • Talk - Kyra Yee 1:30 pm - 2:00 pm

  • Poster session 2:00 pm - 2:30 pm

  • Queer in AI Virtual Social event 2:30 pm - 3:00 pm

Talk Titles and Abstracts

Morning Talk 1

Speaker: Sarthak Arora, Satyam Yadav
Title: South Asian Feminism/s in the 21st Century: Notes on Paradoxes and Possibilities
Abstract: Unpacking the term ‘South Asia,’ this talk candidly explores links between nationalism, state, identity, and gender and their significance in understanding feminist politics and its impacts on structures of queer inclusivity in the region. Examining cyberspaces ranging from Pakistani feminist blogs to queer art communities in India, it seeks to locate the feminist, intersectional unfoldings in the political economy of everyday life.
Bios: Sarthak Arora is a statistics graduate from Ramjas College, University of Delhi. His interests lie primarily in applying data science to otherwise little-explored avenues of ethics, the environment, politics, and art, creating intuitive and impactful models of automation. He is currently conducting research on fire risk assessment using AI/ML at UC Berkeley and working on the Climate SDG Project at the AI for Good Foundation.
Based out of New Delhi, Satyam Yadav is pursuing a postgraduate degree in Gender Studies at the School of Human Studies, Ambedkar University Delhi. His areas of interest broadly include contemporary art, curatorial studies, and visual cultures of queerness and sexuality in relation to intellectual and bio-political histories of modern South Asia.

Morning Talk 2

Speaker: Jay Cunningham
Title: Potentials of Community Participation in Machine Learning Research
Abstract: This talk explores the potentials of participatory design approaches within machine learning (ML) research and design, toward developing more responsible, equitable, and sustainable experiences among underrepresented user communities. ML scholars and technologists are expressing emerging interest in the domain of participatory ML, seeking to extend collaborative research traditions in human-computer interaction, health equity, and community development. We take the firm position that participatory approaches, which treat ML and AI system developers and their stakeholders more equally in a democratic, iterative design process, present opportunities for a fairer and more equitable future of intelligent systems. This talk will urge more ML/AI research that employs participatory techniques, and research on those techniques themselves, while providing background, scenarios, and impacts of such approaches on vulnerable and underrepresented users. We end by discussing existing frameworks for community participation that promote collective decision making in problem solving, selecting data for modeling, defining solution success criteria, and ensuring solutions have sustainably mutual benefits for all stakeholders.
Bio: Jay Cunningham (He/Him) is a third-year doctoral candidate in the Department of Human Centered Design & Engineering (HCDE) at the University of Washington, where he is advised by Professors Julie Kientz and Daniela Rosner. He is passionate about responsibility and fairness in AI systems and experiences, pursued through inclusive design and algorithmic/design justice. His doctoral research integrates human-centered design methods with community-collaborative and equity-centered approaches to explore and address the sociotechnical implications of race, culture, identity, and power at the intersection of intelligent systems and machines (AI/ML; voice-ASR/NLP). Jay's work aims to advance inclusive design practices that can lead to more responsible, equitable, and sustainable interactive language technologies. Within his campus community, Jay is an avid student leader, mentor, and scholar who is proud to support initiatives surrounding diversity, equity, and inclusion.

Afternoon Talk 1

Speaker: Kyra Yee
Title: A Keyword Based Approach to Understanding the Overpenalization of Marginalized Groups by English Marginal Abuse Modeling on Twitter
Abstract: Harmful content detection models tend to have higher false positive rates for content from marginalized groups. Such disproportionate penalization poses the risk of reduced visibility, where marginalized communities lose the opportunity to voice their opinion online. Current approaches to algorithmic harm mitigation are often ad hoc and subject to human bias. We make two main contributions in this paper. First, we design a novel methodology, which provides a principled approach to detecting the severity of potential harms associated with a text-based model. Second, we apply our methodology to audit Twitter’s English marginal abuse model. Without utilizing demographic labels or dialect classifiers, which pose substantial privacy and ethical concerns, we are still able to detect and measure the severity of issues related to the over-penalization of the speech of marginalized communities, such as the use of reclaimed speech, counterspeech, and identity related terms. In order to mitigate the associated harms, we experiment with adding additional true negative examples to the training data. We find that doing so provides improvements to our fairness metrics without large degradations in model performance. Lastly, we discuss challenges to marginal abuse modeling on social media in practice.
Bio: Kyra is a research engineer on the machine learning ethics, transparency, and accountability team at Twitter, where she works on methods for detecting and mitigating algorithmic harms. Prior to Twitter, she was a resident at Meta (formerly Facebook) AI research working on machine translation. She is passionate about working towards safe and equitable deployment of technology.
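
The auditing idea in this abstract can be made concrete with a small sketch. The keyword list, labels, and classifier below are hypothetical placeholders; this is only a toy illustration of keyword-based false-positive auditing, not the actual methodology or model from the talk:

    # A minimal, illustrative sketch (NOT Twitter's model or methodology):
    # compare a classifier's false positive rate on texts containing given
    # keywords against its overall false positive rate.
    from typing import Callable, Sequence

    # Hypothetical keywords associated with reclaimed speech / identity terms.
    AUDIT_KEYWORDS = ["queer", "gay"]

    def false_positive_rate(
        texts: Sequence[str],
        labels: Sequence[int],          # 1 = truly abusive, 0 = benign
        predict: Callable[[str], int],  # 1 = model flags text as abusive
    ) -> float:
        """Fraction of benign texts that the model incorrectly flags."""
        benign = [t for t, y in zip(texts, labels) if y == 0]
        return sum(predict(t) for t in benign) / max(len(benign), 1)

    def audit_by_keyword(texts, labels, predict):
        """Report per-keyword FPR alongside the overall FPR.

        A large positive gap report[kw] - report["overall"] suggests the
        model over-penalizes benign speech containing that keyword.
        """
        report = {"overall": false_positive_rate(texts, labels, predict)}
        for kw in AUDIT_KEYWORDS:
            subset = [(t, y) for t, y in zip(texts, labels) if kw in t.lower()]
            if subset:
                ts, ys = zip(*subset)
                report[kw] = false_positive_rate(ts, ys, predict)
        return report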

Accepted Papers/Posters

Experimental Design Considerations for Limiting Psychological Distress presented by Mary Smart (UCSD)

Decentering Imputation: Fair Learning at the Margins of Demographics presented by Evan Dong (Brown University)

Iterative Value-Aware Model Learning on the Value Improvement Path presented by Claas Voelcker (University of Toronto)

Molecular Fingerprints Are a Simple Yet Effective Solution to the Drug–Drug Interaction Problem presented by Yanan Long (University of Chicago)

From Pedestrian Detection to Crosswalk Estimation: An EM Algorithm, Analysis, and Evaluations on Diverse Datasets presented by Ross Greer (UCSD)

We are very excited to announce a joint in-person social with Black in AI on July 20th! The social will be split into three sessions: 11 am - 12 pm, 1 - 2 pm, and 3 - 4 pm. There will also be a separate virtual social organized by Queer in AI on July 23rd (exact time TBC). Note that joining the virtual social doesn’t require a conference registration.

How to join

If you’re interested in joining the socials, please sign up using the following forms:

Code of Conduct

Please read the Queer in AI code of conduct, which will be strictly enforced at all times. Recording (screen recording or screenshots) is prohibited. All participants are expected to maintain the confidentiality of other participants.

Queer in AI adheres to the Queer in AI anti-harassment policy. Any participant who experiences harassment or hostile behavior may contact the Queer in AI Safety Team. Please be assured that if you approach us, your concerns will be kept in strict confidence, and we will consult with you on any actions taken.

Organizers

Huan Zhang (he/him) is a postdoctoral researcher at Carnegie Mellon University. He received his Ph.D. degree in Computer Science at UCLA in 2020. Huan's research focuses on the trustworthiness of artificial intelligence. At Queer in AI, Huan helps organize events and mentorship programs at machine learning conferences, such as AAAI 2022 and ICML 2022.

Kruno Lehman (he/him) is a second-year MSc student in Statistics at ETH Zurich. He is interested in Bayesian statistics and neural network theory. He is an organizer with Queer in AI.

Willie Agnew (he/him) is a PhD candidate at the University of Washington studying object representations and ML ethics. He helps organize various Queer in AI events and administers the graduate admissions aid programs.

Arjun Subramonian (they/them) is a Computer Science student at the University of California, Los Angeles (UCLA). Their research is broadly about graph representation learning, self-supervised learning, and fairness. They are a co-founder of QWER Hacks, lead diversity and inclusion initiatives within ACM at UCLA, and teach machine learning and fairness at underserved high schools in LA. They also organized the Queer in AI socials at AAAI 2021 and AAAI 2022.

Sharvani Jha (she/her) is a fourth-year undergraduate computer science student at the University of California, Los Angeles. She likes applying computer science to various fields, from space weather to saving whale sharks. She is a co-founder of QWER Hacks, has led various initiatives (including AI outreach and social impact) at ACM at UCLA, is the External Vice President of SWE at UCLA (and helps spearhead the organization’s lobbying initiatives), and is a software developer for the UCLA ELFIN CubeSat.

Hua Wei (he/him) is an Assistant Professor in the Department of Informatics at the New Jersey Institute of Technology. He received his Ph.D. degree in Information Sciences and Technology at the Pennsylvania State University. His research interest lies in Reinforcement Learning, Data Mining and Human-in-the-loop Computations.

Contact Us

For any concerns or questions, please reach out to us. Your concerns will be kept confidential.

Email: queerinai [at] gmail [dot] com

Call for Contributions (past)

We are excited to announce our call for contributions to the Queer in AI Workshop at the ICML 2022 Conference! Submissions must be generally related to the intersection of LGBTQIA+ representation and AI, or be research produced by LGBTQIA+ individuals. Submissions need not be directly related to the themes of the workshop, and they can be works in progress. Please refrain from including personally identifying information in your submission. No submissions will be desk-rejected. Accepted works will be linked on our website, and their authors will be invited to present at our workshop at ICML 2022.

Call for Volunteers (past)

Interested in volunteering and helping us organize our ICML 2022 events? Want to become a speaker or panelist? Have suggestions on workshop content? Please fill out this interest survey: https://forms.gle/vdad8FN2DEoFDF1Z8

Submissions (past)

Submission link: https://cmt3.research.microsoft.com/QAIICML2022/ (while an "Abstract" is required, it need not be formal and can be a brief synopsis of your project)

Submission Formatting: We are accepting submissions in any medium, including (but not limited to) research papers, books, poetry, music, art, musings, TikToks, and testimonials. Submissions need NOT be in English. This is to maximize the inclusivity of our call for submissions and amplify non-traditional expressions of what it means to be Queer in AI. You can find excellent examples of “non-traditional” submissions here.

Page Limits: There are no page limits. If you are considering submitting work presented in a non-traditional format, you are still required to submit an abstract and include a link pointing to your work.

Anonymized Submission: All submissions should be anonymized. Please refrain from including personally identifying information in your submission.

Important Dates

  • Visa-friendly submission deadline: Sunday, May 15 AoE (Anywhere on Earth)

  • Final submission deadline: Friday, July 8 AoE (Anywhere on Earth)

  • Acceptance notifications: rolling basis; final notification deadline: Wednesday, July 13

We opened the call on Sunday, April 24, and we have TWO submission deadlines. The first is a visa-friendly deadline, for those who want to present their work in person at the conference and will need to obtain a visa to do so. Note that we are currently NOT guaranteeing full support for the process of obtaining a visa, but we will work our hardest to provide as much support as we can. Acceptance notifications will go out on a rolling basis, and final notifications of acceptance will go out by Wednesday, July 13.

If you need help with your submission in the form of mentoring or advice, you can get in touch with us at queerinaiicml2022@googlegroups.com.