ML Kernel Performance Engineer, AWS Neuron, Annapurna Labs
Description
The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's custom machine learning accelerators, Inferentia and Trainium.
The Acceleration Kernel Library team is at the forefront of maximizing performance for AWS's custom ML accelerators. Working at the hardware-software boundary, our engineers craft high-performance kernels for ML functions, ensuring every FLOP counts in delivering optimal performance for our customers' demanding workloads. We combine deep hardware knowledge with ML expertise to push the boundaries of what's possible in AI acceleration.
The AWS Neuron SDK, developed by the Annapurna Labs team at AWS, is the backbone for accelerating deep learning and GenAI workloads on Amazon's Inferentia and Trainium ML accelerators. This comprehensive toolkit includes an ML compiler, runtime, and application framework that seamlessly integrates with popular ML frameworks like PyTorch, enabling unparalleled ML inference and training performance.
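For orientation, the sketch below shows the kind of PyTorch-to-Neuron flow this SDK enables: compile a model once through the publicly documented torch_neuronx.trace API, then run it like any other PyTorch module. It is a minimal illustration assuming torch-neuronx is installed on a Neuron-enabled instance, not a complete deployment recipe.

    import torch
    import torch_neuronx  # PyTorch integration shipped with the Neuron SDK (assumed installed)

    # Any traceable model works; a small linear layer keeps the example self-contained.
    model = torch.nn.Linear(128, 64).eval()
    example_input = torch.rand(1, 128)

    # torch_neuronx.trace() invokes the Neuron compiler on the captured graph and
    # returns a module whose forward pass runs on the Inferentia/Trainium device.
    neuron_model = torch_neuronx.trace(model, example_input)

    # Inference then uses the ordinary PyTorch calling convention.
    output = neuron_model(example_input)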
As part of the broader Neuron Compiler organization, our team works across multiple technology layers, from frameworks and compilers to runtime and collectives. We not only optimize current performance but also contribute to future architecture designs. This role offers a unique opportunity to work at the intersection of machine learning, high-performance computing, and distributed architectures, where you'll help shape the future of AI acceleration technology.
You will architect and implement business-critical features, publish cutting-edge research, and mentor a team of experienced engineers. We operate in spaces that are very large, yet our teams remain small and agile. There is no blueprint; we're inventing and experimenting, and the learning culture reflects that. The team works closely with customers on model enablement, providing direct support and optimization expertise to ensure their machine learning workloads achieve optimal performance on AWS ML accelerators.
Explore the product and our history:
https://awsdocs-neuron.readthedocs-
Key job responsibilities
Role
Our kernel engineers collaborate across compiler, runtime, framework, and hardware teams to optimize machine learning workloads for our global customer base. Working at the intersection of software, hardware, and machine learning systems, you'll bring expertise in low-level optimization, system architecture, and ML model acceleration. In this role, you will:
- Design and implement high-performance compute kernels for ML operations, leveraging the Neuron architecture and programming models
- Analyze and optimize kernel-level performance across multiple generations of Neuron hardware
- Conduct detailed performance analysis using profiling tools to identify and resolve bottlenecks
- Implement compiler optimizations such as fusion, sharding, tiling, and scheduling (a minimal tiling sketch follows this list)
- Work directly with customers to enable and optimize their ML models on AWS accelerators
- Collaborate across teams to develop innovative kernel optimization techniques
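To make the tiling work above concrete, here is a minimal, hardware-agnostic sketch of blocking a matrix multiplication into tiles sized to fit a fast on-chip buffer. It uses NumPy purely for illustration; it does not use Neuron-specific APIs, and the 128-element tile size is an arbitrary assumption.

    import numpy as np

    def tiled_matmul(a: np.ndarray, b: np.ndarray, tile: int = 128) -> np.ndarray:
        """Blocked matrix multiply: each (tile x tile) working set is small
        enough to stay resident in fast on-chip memory while it is reused."""
        m, k = a.shape
        k2, n = b.shape
        assert k == k2, "inner dimensions must match"
        out = np.zeros((m, n), dtype=a.dtype)
        for i in range(0, m, tile):
            for j in range(0, n, tile):
                for p in range(0, k, tile):
                    # Accumulate one output tile from a pair of input tiles.
                    out[i:i + tile, j:j + tile] += (
                        a[i:i + tile, p:p + tile] @ b[p:p + tile, j:j + tile]
                    )
        return out

    # Quick correctness check against the reference implementation.
    a = np.random.rand(256, 384).astype(np.float32)
    b = np.random.rand(384, 512).astype(np.float32)
    assert np.allclose(tiled_matmul(a, b), a @ b, atol=1e-3)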
A day in the life
As you design and code solutions to help our team drive efficiencies in software architecture, you'll create metrics, implement automation and other improvements, and resolve the root cause of software defects. You'll also:
- Build high-impact solutions to deliver to our large customer base.
- Participate in design discussions and code reviews, and communicate with internal and external stakeholders.
- Work cross-functionally to help drive business decisions with your technical input.
- Work in a startup-like development environment, where you're always working on the most important stuff.
About The Team
1. Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.
2. Why AWS: Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
3. Inclusive Team Culture: Here at AWS, we embrace our differences. We are committed to furthering our culture of inclusion. We have ten employee-led affinity groups, reaching 40,000 employees in over 190 chapters globally. We have innovative benefit offerings, and host annual and ongoing learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences. Amazon's culture of inclusion is reinforced within our 16 Leadership Principles, which remind team members to seek diverse perspectives, learn and be curious, and earn trust.
4. Work/Life Balance: Our team puts a high value on work-life balance. It isn't about how many hours you spend at home or at work; it's about the flow you establish that brings energy to both parts of your life. We believe striking the right balance between your personal and professional life is critical to life-long happiness and fulfillment. We offer flexibility in working hours and encourage you to find your own balance between your work and personal lives.
5. Mentorship & Career Growth: Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge sharing and mentorship. We care about your career growth and strive to assign projects based on what will help each team member develop into a better-rounded professional and enable them to take on more complex tasks in the future.
Basic Qualifications
- 3+ years of non-internship professional software development experience
- 2+ years of non-internship experience designing or architecting (design patterns, reliability, and scaling) new and existing systems
- Experience programming with at least one programming language
Preferred Qualifications
- 3+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
- Bachelor's degree in computer science or equivalent
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Company
- Amazon Development Centre Canada ULC
Job ID: A2980080