Job Title: Senior Consultant | Oracle Analytics Cloud | Bengaluru | Oracle

Job requisition ID: 84163
Date: Jun 23, 2025
Location: Bengaluru
Designation: Senior Consultant
Entity: 

We are seeking a Senior Data Engineer with extensive experience in cloud platforms and data engineering tools, particularly Databricks. The ideal candidate will have deep expertise in designing and optimizing data pipelines, building scalable ETL workflows, and leveraging Databricks for advanced analytics and data processing. Experience with Google Cloud Platform is beneficial, particularly in integrating Databricks with cloud storage solutions and data warehouses such as BigQuery. The candidate should have a proven track record of delivering data enablement projects across multiple data domains and be well-versed in the Data as a Product approach, ensuring data solutions are scalable, reusable, and aligned with business needs.

Key Responsibilities:

·      Design, develop, and optimize scalable data pipelines using Databricks, ensuring efficient data ingestion, transformation, and processing.

·      Implement and manage data storage solutions, including Delta Tables for structured storage and seamless data versioning.

·      Leverage Databricks for advanced data processing, including the development and optimization of data workflows, Delta Live Tables, and ML-based data transformations.

·      Monitor and optimize Databricks performance, focusing on cluster configurations, resource utilization, and Delta Table performance tuning.

·      Collaborate with cross-functional teams to drive data enablement projects, ensuring scalable, reusable, and efficient solutions using Databricks.

·      Apply the Data as a Product / Data as an Asset approach, ensuring high data quality, accessibility, and usability within Databricks environments.

Required Skills and Experience:

·      5+ years of experience with cloud data services, with a strong focus on Databricks and its integration with Google Cloud Platform storage and analytics tools such as BigQuery.

·      5+ years of experience with analytical software and languages, including Spark (Databricks Runtime), Python, and SQL for data engineering and analytics.

·      Strong expertise in Data Structures and Algorithms (DSA) and problem-solving, enabling efficient design and optimization of data workflows.

·      Experience building CI/CD pipelines with GitHub for automated data pipeline deployments within Databricks.

·      Experience working in Agile/Scrum environments, contributing to iterative development processes and collaboration within data engineering teams.

·      Experience with data streaming is a plus, particularly using Kafka or Spark Structured Streaming within Databricks.

·      Familiarity with other ETL/ELT tools, such as Qlik Replicate, SAP Data Services, or Informatica, and their integration with Databricks is a plus.

Qualifications:

·      A Bachelor’s or Master’s degree in Computer Science, Engineering, or a related discipline. 

·      Over 5 years of hands-on experience in data engineering or a closely related field. 

·      Proven expertise in AWS and Databricks platforms. 

·      Advanced skills in data modeling and designing optimized data structures. 

·      Knowledge of Azure DevOps and proficiency in Scrum methodologies. 

·      Exceptional problem-solving abilities paired with a keen eye for detail. 

·      Strong interpersonal and communication skills for seamless collaboration. 

·      At least one certification in AWS or Databricks, such as Cloud Engineering, Data Services, Cloud Practitioner, or Certified Data Engineer, or an equivalent certification from a reputable MOOC provider.