- Proven expertise in the Databricks platform, including PySpark and SQL.
- Experience with cloud platforms (AWS, Azure, or GCP).
- Strong knowledge of data engineering, ETL processes, and data pipeline orchestration.
- Hands-on experience with data warehousing, data lakes, and data modeling.
- Proficiency in programming languages such as Python, Scala, or R.
- Experience with machine learning frameworks and tools.
- Familiarity with CI/CD pipelines and DevOps practices.