POSTED Jan 16

Internship - AI Interpretability and Fairness Researcher (Summer 2024) (NYC)

at Hume AI, New York


Hume AI is seeking a dedicated and talented individual interested in working as part of our AI research team to advance our core mission: using the world’s most advanced technology for emotion understanding to build empathy and goal-alignment into AI. Join us in the heart of New York City as a research intern and contribute to our endeavor to ensure that AI is guided by human values, the most pivotal challenge (and opportunity) of the 21st century.

About Us

Hume AI is an AI research lab and startup that provides the AI toolkit to measure, understand, and improve how technology affects human emotion. Where other AI companies see only words, we see the other half of human communication: subtle tones of voice, word emphasis, facial expression, and more, along with the reactions of listeners. These behaviors reveal our preferences—whether we find things interesting or boring; satisfying or frustrating; funny, eloquent, or dubious. Having been trained with billions of human expressions as feedback, our LLMs will serve as better question answerers, copywriters, tutors, call center agents, and more, even in text-only interfaces.

Our goal is to enable a future in which technology draws on an understanding of human emotional expression to better serve human goals and emotional well-being. We currently provide API access to our expression measurement models to researchers and developers building better healthcare solutions, digital assistants, communication tools, and more, who work with our AI tools to optimize their applications for users’ preferences and values. As part of our mission, we also conduct groundbreaking scientific research, publish in leading scientific journals such as Nature, and support a non-profit, The Hume Initiative, which has released the first concrete ethical guidelines for empathic AI (www.thehumeinitiative.org). You can learn more about us on our website (https://hume.ai/) and read about us in Axios and The Washington Post.


About the Role

The ideal candidate for this role will have previous experience training and evaluating machine learning models. In this role, you will conduct analyses of both model interpretability and model fairness across a variety of Hume models. You will build out benchmarking tools and work directly with our research team to adapt model analysis pipelines to the latest improvements in our models. You will have the opportunity to directly contribute to Hume's developer platform.


Requirements

  • Strong foundations in statistics and machine learning.

  • Comfort working with the Python ecosystem and popular ML libraries and tools (e.g., PyTorch, scikit-learn, NumPy, pandas, Matplotlib).

  • Some experience evaluating deep learning models for accuracy, interpretability, or fairness (bonus: familiarity with tools such as Captum, Fairlearn, or SHAP).

Bonus

  • Knowledge of one or several MLOps platforms for experiment tracking or model training (e.g., SageMaker, Vertex AI, Weights & Biases).

  • Experience working with large datasets of text, audio, image, and video data.

  • Familiarity working with ML models in cloud environments (e.g. AWS, Azure, Google Cloud).


This is a paid internship opportunity with compensation ranging from $30 to $60 per hour.

