Inference Serving Engineer — Scalable AI Infra
4 weeks ago
Toronto, Canada · Taalas · Full time
A technology firm specializing in AI is seeking a Software Engineer – Inference Serving. This entry-level role involves building software infrastructure for an inference serving cluster. Responsibilities include adapting open-source inference servers and implementing efficient solutions for AI models. Ideal candidates should have a relevant degree and familiarity with Python, ML, and low-level programming.
-
Software Engineer – Inference Serving
4 weeks ago
Toronto, Canada · Taalas · Full time
At Taalas we believe that fundamental progress is achieved by those who are willing to understand and assail a problem end-to-end, without regard for commonly accepted abstractions and boundaries. We are building a team of hands‑on technologists who dislike overspecialization and...
-
Deployment Engineer, AI Inference
4 weeks ago
Toronto, Canada · Cerebras Systems Inc. · Full time
About Cerebras Systems: Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer‑scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of one device. This enables industry‑leading training and inference speeds and lets machine learning users run...
-
Deployment Engineer, AI Inference
3 weeks ago
Toronto, Canada · Cerebras Systems · Full time
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry‑leading training and inference speeds and empowers machine learning...
-
Engineering Manager, Inference Platform
2 weeks ago
Toronto, Canada · Cerebras Systems · Full time
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to...