Job Title: Consultant | Databricks + PySpark | Hyderabad | Engineering
Job description
Data Architecture and Modelling
- Design and implement high-quality, scalable data models that ensure the integrity, security, and accessibility of Cybersecurity Data Assets.
- Collaborate with analytics/BI engineers to ensure these models are used effectively.
- Produce high-quality documentation and metadata.
Databricks Expertise
- Design and implement data architectures using Databricks notebooks and Unity Catalog.
- Build robust, scalable data pipelines within Databricks.
Required Skills and Qualifications
The successful candidate will be a credible leader with skills and experience in one or more of the following areas, and a willingness to learn additional skills through certification and/or on-the-job learning where required.
- 5+ years of proven experience in data architecture, data modelling, and database design, including star and snowflake schemas.
- 3+ years of experience with Databricks notebooks and Unity Catalog, and in building scalable data pipelines with PySpark/Spark and MySQL.
- Hands-on experience working with multi-terabyte datasets, optimising for performance and cost-efficiency.
- In-depth understanding of ETL processes and data warehouse architectures.
- Familiarity with the Azure cloud platform and its data solutions.
- Experience with data governance, security, and compliance practices.