Job Title: Senior Consultant | Python Developer | Bengaluru | Engineering

Job requisition ID: 96361
Date: Feb 18, 2026
Location: Bengaluru
Designation: Senior Consultant
Entity: Deloitte Touche Tohmatsu India LLP

Python Senior Data Engineer

Total Experience - 6-8 Years

Relevant Experience - 4+ Years

Location - Bengaluru

Keywords – Python, SQL, PySpark, Data Integration, Data Transformation


Roles and Responsibilities:

The Data Engineer will work on data engineering projects for various business units, focusing on the delivery of complex data management solutions that leverage industry best practices. They will work with the project team to build efficient data pipelines and data management solutions that make data easily available to consuming applications and analytical solutions. A Data Engineer is expected to possess strong technical skills.

Experience:

•        Knowledge of and skills in various programming languages, primarily Python.

•        Must have knowledge of back-end frameworks.

•        Thorough understanding of containers and serverless functions. Deployment experience with Kubernetes (K8s) or serverless functions is highly desirable.

•        Experience using cloud-native CI/CD tools (Azure Pipelines, CircleCI, Jenkins X).

•        Experience deploying workloads to AWS or Azure, with strong knowledge and understanding of the cloud provider's APIs and associated services and infrastructure.

•        Experience in test-driven development and writing unit and integration tests is a must.

•        Knowledge of other cloud platforms (Google Cloud Platform, Cloudera, etc.) and integration services technologies is highly desirable.

•        Experience working in agile teams, with demonstrated application of agile principles.

•        Demonstrable proficiency in developing complex JavaScript applications.

•        Experience working with lean startup/agile development methodologies.

•        Strong analytical, problem-solving, and troubleshooting skills.

•        Experienced with modern coding, testing, debugging and automation techniques.

•        Enthusiastic about the benefits of CI/CD.

•        A high bar for user experience and quality.

•        Data-driven and customer-obsessed.

•        Good communication skills.


Key Characteristics

·      Availability to resolve production issues and take ownership of interfaces deployed in production.

·      Understanding of RDBMS and NoSQL databases (e.g., PostgreSQL, MongoDB).

·      Ability to troubleshoot and resolve database issues.

·      In-depth knowledge of and experience with RESTful APIs.

·      Comprehensive end-to-end understanding of ETL processes.

·      A technology champion who constantly pursues skill enhancement and has an inherent curiosity to understand work from multiple dimensions.

·      Interest and passion for Big Data technologies, and an appreciation of the value an effective data management solution can bring.

·      Has worked on real data challenges and handled data of high volume, velocity, and variety.

·      Proficient in designing data integrations and data processes across different architecture patterns, using native cloud services or custom application code.

·      Excellent analytical and problem-solving skills, with a willingness to take ownership and resolve technical challenges.

·      Contributes to community-building initiatives such as CoEs (Centres of Excellence) and CoPs (Communities of Practice).



Mandatory Skills

·      Programming Languages: Proficiency in Python and PySpark is paramount. Should have knowledge of data processing using Pandas, NumPy, and relevant libraries to handle data.

·      Databases: Strong knowledge of SQL and NoSQL databases

·      Big Data Technologies: Experience with frameworks like Apache Spark, Hadoop, and Kafka for processing large datasets

·      Cloud Computing: Familiarity with cloud platforms such as Azure (preferred) and AWS (good to have) and their data services (e.g., Data Factory, Databricks, Synapse, Redshift)

·      ETL/ELT & Data Warehousing: Expertise in designing and implementing ETL (Extract, Transform, Load) processes and in data modelling for efficient data warehousing

·      Data Pipelines: Ability to develop, optimize, and maintain robust and scalable data pipelines

·      Should have a good understanding of optimization techniques and be able to resolve performance bottlenecks

·      Knowledge of web frameworks and APIs is beneficial


Key Points to Note:

Data Engineers with experience primarily in Databricks, ADF, Azure Synapse, Snowflake, or AWS are not specifically required; candidates who have worked on handling live data / data streaming using Python are well suited to this requirement.