Starburst Senior Developer
Interview Process:
- We are only considering candidates who are local to Tampa, FL or able to relocate to Tampa.
Job Overview:
We are seeking a highly skilled and hands-on Data Engineer to join our offshore team. The ideal candidate will have extensive experience in core data engineering practices and a strong foundation in modern data technologies.
This role requires deep technical expertise in Starburst, ETL processes, data warehousing, and big data frameworks; AI/ML experience is a plus. The ideal candidate is a hands-on Data Engineer comfortable working across core data engineering technologies such as Starburst, ETL, databases, Python, data warehouses, data lakes, and Apache Spark.
Required Skills & Experience:
6+ years of hands-on experience in Data Engineering within the IT industry.
Deep knowledge of Starburst and policy management tools like Apache Ranger.
Strong SQL skills and experience with data modeling (relational and dimensional).
Familiarity with platforms such as Apache Spark and Databricks.
Strong proficiency in Python for data processing and scripting.
Expertise in ETL development, SQL, and working with relational and non-relational databases.
Solid understanding of Data Warehousing and Data Lake architectures.
Familiarity with cloud platforms and distributed computing environments.
Key Responsibilities:
Design, develop, and maintain scalable ETL pipelines and data workflows.
Work with structured and unstructured data across various databases and data lakes.
Implement and optimize data solutions using tools such as Apache Spark, Starburst, and Python.
Build and manage data warehouses and data lakes to support analytics and reporting.
Collaborate with cross-functional teams to understand data requirements and deliver high-quality solutions.
Ensure data quality, integrity, and security across all data platforms.
Contribute to performance tuning and optimization of data processing systems.
Preferred Qualifications:
Experience with AI/ML workflows, particularly in LLMs, RAG (Retrieval-Augmented Generation), and Agent-based architectures.
Exposure to building intelligent data pipelines that support machine learning models.
Knowledge of modern data governance and data cataloging tools.
Soft Skills:
Strong problem-solving and analytical skills.
Excellent communication and collaboration abilities.
Ability to work independently in a remote setup and manage time effectively.

