The Trust & Safety Intelligence team builds technology that keeps Spotify a safe and welcoming space for creativity. We design and scale AI systems that detect harmful or risky activity, uphold global compliance standards, and make moderation operations more efficient. The team is also expanding how we use GenAI to support policy specialists who work around the clock to protect our users from harm. Together, we're building a strong foundation for safe and responsible innovation at Spotify.

As an Associate Machine Learning Engineer, you will help develop AI and ML systems that improve how Spotify detects, reviews, and manages risky or non-compliant activity. You'll work with engineers, data scientists, and policy partners to design and evaluate models, automate performance tracking, and enhance transparency and auditability. This is a hands-on, collaborative role where you'll learn from experienced practitioners while building solutions that advance safety, compliance, and responsibility.