Current jobs related to DataBricks Data Architect - Toronto - Rubicon Path
-
Specialist Solutions Architect
1 hour ago
Toronto, Canada. Databricks. Full time. Specialist Solutions Architect – GenAI & LLM. Join to apply for the Specialist Solutions Architect – GenAI & LLM role at Databricks. Location: Toronto, ON. Role overview: As a Specialist Solutions Architect (SSA) – ML Engineering, you will be the trusted technical ML expert for Databricks customers and the Field Engineering organization. You will work with...
-
DataBricks Data Architect
4 weeks ago
Toronto, Canada. Rubicon Path. Full time. Job summary: The Databricks Data Architect is a senior technical leader responsible for building and optimizing a robust data platform in a financial services environment. In this full-time role, you will lead a team of 10+ data engineers and own the end-to-end architecture and implementation of the Databricks Lakehouse platform. You will collaborate closely...
-
DataBricks Data Architect
19 minutes ago
Toronto, Ontario, Canada. Exdonuts. Full time. DataBricks Data Architect, Toronto, ON (4 days onsite), contract role. Azure Cloud Platform: expert-level knowledge of Azure services, architecture patterns, and best practices. Azure Databricks: advanced experience with Unity Catalog, Delta Lake, cluster management, and governance. Data Architecture: strong background in data lake architectures, medallion...
-
Delivery Solutions Architect
20 minutes ago
Toronto, Ontario, Canada. Databricks. Full time. $160,500 - $246,100. Location: Toronto, ON (We are unable to consider candidates outside of Toronto at this time). At Databricks we are on a mission to empower our customers to solve the world's toughest data problems by utilising the Lakehouse platform. As a Delivery Solutions Architect (DSA), you will play an important role during this journey. You will collaborate with our...
-
Databricks Architect
31 minutes ago
Toronto, Ontario, Canada. Slalom. Full time. Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are...
-
Databricks Architect
45 minutes ago
Toronto, Ontario, Canada. TechDoQuest. Full time. Key responsibilities: Design and architect enterprise-scale data platforms using Databricks Lakehouse architecture. Lead data modernization initiatives, including migration from legacy data warehouses to cloud-based platforms. Define best practices for data ingestion, transformation, storage, and analytics using Databricks. Implement and optimize Apache...
DataBricks Data Architect
1 hour ago
The Databricks Data Architect is a senior technical leader responsible for building and optimizing a robust data platform in a financial services environment. In this full-time role, you will lead a team of 10+ data engineers and own the end-to-end architecture and implementation of the Databricks Lakehouse platform. You will collaborate closely with application development and analytics teams to design scalable data solutions that drive business insights. This position demands deep expertise in Databricks (Azure), hands-on experience with PySpark and Delta Lake, and strong leadership to ensure best practices in data engineering, performance tuning, and governance.

Key Responsibilities
- Own the Databricks platform architecture and implementation, ensuring the environment is secure, scalable, and optimized for the organization's data processing needs. Design and oversee the Lakehouse architecture leveraging Delta Lake and Apache Spark.
- Implement and manage Databricks Unity Catalog for unified data governance. Ensure fine-grained access controls and data lineage tracking are in place to secure sensitive financial data and comply with industry regulations.
- Provision and administer Databricks clusters in Azure, including configuring cluster sizes, auto-scaling, and auto-termination settings. Set up and enforce cluster policies to standardize configurations, optimize resource usage, and control costs across teams and projects.
- Collaborate with analytics teams to develop and optimize Databricks SQL queries and dashboards. Tune SQL workloads and caching strategies for faster performance and ensure efficient use of the query engine.
- Lead performance tuning initiatives for Spark jobs and ETL pipelines. Profile data processing code (PySpark/Scala) to identify bottlenecks and refactor for improved throughput and lower latency.
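The cluster-policy work described above is usually expressed as a JSON policy document that caps autoscaling and enforces auto-termination. The sketch below is a minimal illustration only: the limits and the pinned runtime version are hypothetical, and attribute names follow the Databricks cluster policy format, so verify against your workspace before use.

```python
import json

# Minimal sketch of a Databricks cluster policy document.
# Each attribute is constrained by a type such as "fixed", "range", or "allowlist".
# All concrete values here (60 minutes, 8 workers, the runtime version) are
# hypothetical examples, not recommendations.
policy = {
    "autotermination_minutes": {"type": "range", "maxValue": 60, "defaultValue": 30},
    "autoscale.min_workers": {"type": "fixed", "value": 1},
    "autoscale.max_workers": {"type": "range", "maxValue": 8},
    "spark_version": {"type": "allowlist", "values": ["13.3.x-scala2.12"]},
}

# The JSON string is what would be attached to a policy in the workspace.
policy_json = json.dumps(policy, indent=2)
print(policy_json)
```

Policies like this let an architect give teams self-service cluster creation while still bounding cost: users can pick any configuration that satisfies the constraints, and nothing outside them.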
- Implement best practices for incremental data processing with Delta Lake, and ensure compute cost efficiency (e.g., by optimizing cluster utilization and job scheduling).
- Work closely with application developers, data analysts, and data scientists to understand requirements and translate them into robust data pipelines and solutions. Ensure that data architectures support analytics, reporting, and machine learning use cases effectively.
- Integrate Databricks workflows into the CI/CD pipeline using Azure DevOps and Git. Develop automated deployment processes for notebooks, jobs, and clusters (infrastructure as code) to promote consistent releases. Manage source control for Databricks code using Git integration, and collaborate with DevOps engineers to implement continuous integration and delivery for data projects.
- Collaborate with security and compliance teams to uphold data governance standards. Implement data masking, encryption, and audit logging as needed, leveraging Unity Catalog and Azure security features to protect sensitive financial data.
- Stay up to date with the latest Databricks features and industry best practices. Proactively recommend and implement improvements (such as new performance optimization techniques or cost-saving configurations) to continuously enhance the platform's reliability and efficiency.

Qualifications
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 7+ years of experience in data engineering, data architecture, or related roles, with a track record of designing and deploying data pipelines and platforms at scale.
- Significant hands-on experience with Databricks (preferably Azure Databricks) and the Apache Spark ecosystem. Proficient in building data pipelines using PySpark/Scala and managing data in Delta Lake format.
- Strong experience working with cloud data platforms (Azure preferred, or AWS/GCP). Familiarity with Azure data services (such as Azure Data Lake Storage and Azure Blob Storage) and with managing resources in an Azure environment.
- Advanced SQL skills with the ability to write and optimize complex queries. Solid understanding of data warehousing concepts and performance tuning for SQL engines.
- Proven ability to optimize ETL jobs and Spark processes for performance and cost efficiency. Experience tuning cluster configurations, parallelism, and caching to improve job runtimes and resource utilization.
- Demonstrated experience implementing data security and governance measures. Comfortable configuring Unity Catalog or similar data catalog tools to manage schemas, tables, and fine-grained access controls. Able to ensure compliance with data security standards and manage user/group access to data assets.
- Experience leading and mentoring engineering teams. Excellent project leadership abilities to coordinate multiple projects and priorities. Strong communication skills to collaborate effectively with cross-functional teams and present architectural plans or results to stakeholders.

Preferred
- Databricks Certified Data Engineer Professional or Databricks Certified Data Engineer Associate, or equivalent certifications in cloud data engineering or architecture (e.g., Azure Data Engineer, Azure Solutions Architect).
- Exposure to related big data and streaming tools such as Apache Kafka/Event Hubs, Apache Airflow or Azure Data Factory for orchestration, and BI/analytics tools (e.g., Power BI).
- Experience implementing CI/CD pipelines for data projects. Familiarity with Databricks Repos, Jenkins, or other CI tools for automated testing and deployment of data pipelines.

Tools & Technologies
- Databricks Lakehouse Platform: Databricks Workspace, Apache Spark, Delta Lake, Databricks SQL, MLflow (for model tracking).
- Data Governance: Databricks Unity Catalog for data cataloging and access control; Azure Active Directory integration for identity management.
- Programming & Data Processing: PySpark and Python for building data pipelines and Spark jobs; SQL for querying and analytics.
- DevOps & CI/CD: Azure DevOps (Azure Pipelines) for build/release pipelines; Git for version control (GitHub or Azure Repos); experience with Terraform or ARM templates for infrastructure as code is a plus.
- Other Tools: project and workflow management tools (Jira or Azure Boards), monitoring tools (Azure Log Analytics, the Spark UI, or Databricks performance monitoring), and collaboration tools for documentation and design (Figma, Visio, Lucidchart, etc.).
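For the CI/CD integration this role calls for, a release pipeline often ends by submitting a job definition to the Databricks Jobs API so that notebooks, clusters, and schedules are deployed as code rather than configured by hand. The sketch below builds such a payload in Python; the job name, notebook path, node type, and cron schedule are hypothetical, and the field names follow the Jobs API 2.1 shape, so check them against the workspace's API version.

```python
import json

# Hypothetical job definition a release pipeline might POST to the
# Databricks Jobs API (e.g. /api/2.1/jobs/create) or pass to the CLI.
# Names, paths, and sizing below are illustrative placeholders.
job_spec = {
    "name": "nightly-silver-refresh",
    "tasks": [
        {
            "task_key": "refresh",
            "notebook_task": {"notebook_path": "/Repos/data-eng/pipelines/refresh"},
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",
                "node_type_id": "Standard_DS3_v2",
                "autoscale": {"min_workers": 1, "max_workers": 4},
            },
        }
    ],
    # Run nightly at 02:00 UTC (Quartz cron syntax).
    "schedule": {"quartz_cron_expression": "0 0 2 * * ?", "timezone_id": "UTC"},
}

# Serialized form is what the pipeline would send in the request body.
print(json.dumps(job_spec, indent=2))
```

Keeping the definition in source control alongside the notebook code is what makes releases reproducible: the same payload deploys identically to dev, test, and production workspaces, with only environment-specific values substituted by the pipeline.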