The role involves data engineering and DevOps tasks, focusing on Databricks architecture, orchestration, automation, troubleshooting, and building deployment pipelines.
Core data engineering skills:
- Deep understanding of Databricks Lakehouse architecture (Unity Catalog, Delta Lake, Workflows, clusters) for both batch and streaming workloads.
- Strong experience with Databricks Notebooks (PySpark/SQL), job orchestration, and production pipeline debugging.
- Expertise in Delta Lake internals: optimization, schema evolution, time travel, Z-Ordering, vacuuming.
- Experience troubleshooting:
  - Cluster performance issues
  - Job failures
  - Library/environment conflicts
  - Driver/executor memory issues
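The Delta Lake maintenance tasks listed above (optimization, Z-Ordering, vacuuming, time travel) boil down to a handful of SQL statements run via `spark.sql` or a SQL warehouse. As a minimal illustration, the sketch below builds those statements as strings; the table name is hypothetical, and the retention default reflects Delta's standard 7-day window:

```python
from typing import List, Optional

# Hypothetical table name for illustration; substitute your own catalog.schema.table.
TABLE = "main.sales.orders"

def optimize_stmt(table: str, zorder_cols: Optional[List[str]] = None) -> str:
    """Build an OPTIMIZE statement, optionally with Z-Ordering on hot filter columns."""
    stmt = f"OPTIMIZE {table}"
    if zorder_cols:
        stmt += f" ZORDER BY ({', '.join(zorder_cols)})"
    return stmt

def vacuum_stmt(table: str, retain_hours: int = 168) -> str:
    """Build a VACUUM statement; 168 hours (7 days) matches Delta's default retention."""
    return f"VACUUM {table} RETAIN {retain_hours} HOURS"

def time_travel_query(table: str, version: int) -> str:
    """Query an earlier snapshot of a Delta table via time travel."""
    return f"SELECT * FROM {table} VERSION AS OF {version}"

print(optimize_stmt(TABLE, ["customer_id", "order_date"]))
# OPTIMIZE main.sales.orders ZORDER BY (customer_id, order_date)
print(vacuum_stmt(TABLE))
print(time_travel_query(TABLE, 12))
```

In a real pipeline these would typically be scheduled as a maintenance Workflow rather than run ad hoc.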
DevOps skills:
- Strong experience with DevOps pipelines using:
  - GitHub Actions / Azure DevOps for CI/CD
  - Databricks CLI & REST APIs for automation
- Ability to build automated deployment pipelines for:
  - Notebooks
  - Workflows
  - UC catalogs, schemas, permissions
  - Cluster policies / job configs
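A CI/CD step for the deployments above usually drives the Databricks REST API (workspace import for notebooks, Jobs 2.1 for workflows). The sketch below only constructs the request payloads so it stays runnable offline; the host, notebook path, and pinned `spark_version` are assumptions a real pipeline would read from CI variables:

```python
import base64
import json

# Hypothetical workspace host; real pipelines read this from CI secrets/variables.
HOST = "https://example.cloud.databricks.com"

def notebook_import_request(workspace_path: str, source: str) -> dict:
    """Payload for POST /api/2.0/workspace/import (SOURCE-format notebook)."""
    return {
        "url": f"{HOST}/api/2.0/workspace/import",
        "json": {
            "path": workspace_path,
            "format": "SOURCE",
            "language": "PYTHON",
            "overwrite": True,
            "content": base64.b64encode(source.encode()).decode(),
        },
    }

def job_create_request(name: str, notebook_path: str, policy_id: str) -> dict:
    """Payload for POST /api/2.1/jobs/create with a single notebook task."""
    return {
        "url": f"{HOST}/api/2.1/jobs/create",
        "json": {
            "name": name,
            "tasks": [{
                "task_key": "main",
                "notebook_task": {"notebook_path": notebook_path},
                "new_cluster": {
                    "spark_version": "15.4.x-scala2.12",  # assumption: pin per workspace
                    "num_workers": 2,
                    "policy_id": policy_id,  # enforces the governed cluster policy
                },
            }],
        },
    }

req = job_create_request("nightly-etl", "/Repos/etl/main", "policy-123")
print(json.dumps(req["json"], indent=2))
```

The same payloads can be sent with the Databricks CLI (`databricks api post …`) or any HTTP client carrying a bearer token.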
Ability to set up and troubleshoot:
- Databricks job logs
- Cluster metrics
- Audit logs, Unity Catalog events
- Log Analytics dashboards
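Monitoring job logs in practice means polling the Jobs API `runs/get` endpoint and interpreting its `state` object, where `life_cycle_state` says whether the run is still going and `result_state` carries the terminal outcome. A minimal classifier sketch, assuming that documented state model:

```python
def classify_run(state: dict) -> str:
    """Map a Jobs API runs/get `state` object to a simple status string.

    life_cycle_state values PENDING/RUNNING/TERMINATING mean the run is in flight;
    TERMINATED runs carry a result_state (SUCCESS, FAILED, TIMEDOUT, CANCELED).
    """
    life = state.get("life_cycle_state")
    if life in ("PENDING", "RUNNING", "TERMINATING"):
        return "in_progress"
    if life == "TERMINATED":
        return "succeeded" if state.get("result_state") == "SUCCESS" else "failed"
    # INTERNAL_ERROR, SKIPPED, or anything unrecognized: treat as failure for alerting.
    return "failed"

print(classify_run({"life_cycle_state": "TERMINATED", "result_state": "SUCCESS"}))
# succeeded
```

A troubleshooting dashboard would feed these classifications, together with cluster metrics and audit-log events, into Log Analytics.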
Part of the $4.8 billion RPG Group, we’re a community of 10,000+ innovators across 30+ global locations, including Milpitas, Seattle, Princeton, Cape Town, London, Zurich, Singapore, and Mexico City. Explore Life at Zensar and join us to Grow. Own. Achieve. Learn. to be the best version of yourself.
We believe the best work happens when individuality is celebrated, growth is encouraged, and well-being is prioritized. We are an equal employment opportunity (EEO) and affirmative action employer, committed to creating an inclusive workplace. All qualified applicants will be considered without regard to race, creed, color, ancestry, religion, sex, national origin, citizenship, age, sexual orientation, gender identity, disability, marital status, family medical leave status, or protected veteran status.
Top Skills
Azure DevOps
Databricks CLI
Databricks Lakehouse Architecture
Databricks Notebooks (PySpark/SQL)
Delta Lake
GitHub Actions
REST APIs
