Big Data Engineer Full-time Job in Austin, Texas, United States | Snaprecruit


Big Data Engineer


JOB TITLE:

Big Data Engineer

JOB TYPE:

Full-time

JOB LOCATION:

Austin, Texas, United States

JOB DESCRIPTION:

Job Title

Big Data Engineer

Location

Austin, TX

(Day 1 onsite; 3 days per week in office)

Roles & Responsibilities

  • 10 years of professional experience in the analysis, design, development, deployment, and maintenance of software and Big Data applications.
  • Experience in Big Data implementations, with strong hands-on work across the major components: Iceberg, Tableau, Kafka, Superset, Druid, Hive Metastore, Apache Ranger, security, and AWS.
  • Experience creating Iceberg tables and loading data from different file formats (see the second sketch after this list).
  • Good experience importing and exporting data to Hive and HDFS with Sqoop.
  • Experience using the Producer and Consumer APIs of Apache Kafka (a minimal example follows this list).
  • Skilled in integrating Kafka with Spark Streaming for faster data processing.
  • Experience using the Spark Streaming programming model for real-time data processing.
  • Experience with file formats such as text, SequenceFile, JSON, Parquet, and ORC.
  • Extensively used Apache Kafka to collect logs and error messages across the cluster.
  • Excellent knowledge and understanding of distributed computing and parallel processing frameworks.
  • Experienced in analytics with Hive Metastore.
  • Experience with Superset and Druid.
  • Experience working with EC2 (Elastic Compute Cloud) cluster instances, setting up data buckets on S3 (Simple Storage Service), and setting up EMR (Elastic MapReduce).
  • Good experience working with Tableau, including enabling JDBC/ODBC data connectivity from Tableau to Hive Metastore.
  • Good with version control systems such as Git.
  • Strong knowledge of UNIX/Linux commands.
  • Adequate knowledge of the Python scripting language.
  • Adequate knowledge of Scrum, Agile, and Waterfall methodologies.
  • Highly motivated and committed to the highest levels of professionalism.
  • Strong written and oral communication skills; learns and adapts quickly to emerging technologies and paradigms.
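
The Kafka bullet above references the sketch below: a minimal illustration of the Producer and Consumer APIs, here via the kafka-python client. This is an example, not part of the role's requirements; the broker address ("localhost:9092") and the "app-logs" topic name are assumptions invented for the sketch.

```python
# Minimal sketch of the Kafka Producer/Consumer APIs (kafka-python client).
# Broker address and topic name are hypothetical placeholders.
import json

from kafka import KafkaConsumer, KafkaProducer

# Producer: publish JSON-encoded log events to a topic.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("app-logs", {"level": "ERROR", "msg": "disk quota exceeded"})
producer.flush()  # block until buffered records are delivered

# Consumer: read the same topic from the beginning and print each record.
consumer = KafkaConsumer(
    "app-logs",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for record in consumer:
    print(record.topic, record.partition, record.offset, record.value)
```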

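Building on the Iceberg and Spark Streaming bullets, this second sketch shows one plausible way those pieces fit together: Spark Structured Streaming consuming the same hypothetical topic and appending into an Apache Iceberg table. It assumes a Spark 3.x session with the Iceberg runtime on the classpath and a catalog named "demo" already configured; the table name and S3 checkpoint path are placeholders, not details from the posting.

```python
# Sketch: consume a Kafka topic with Spark Structured Streaming and
# append the records to an Apache Iceberg table. Assumes a Spark 3.x
# session with the Iceberg runtime and a "demo" catalog configured;
# topic, table, and S3 paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-to-iceberg").getOrCreate()

# Create the target Iceberg table if it does not exist yet.
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.db.app_logs (raw STRING)
    USING iceberg
""")

# Read the stream; Kafka delivers keys and values as binary columns.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "app-logs")
    .load()
    .select(col("value").cast("string").alias("raw"))
)

# Stream-append into the Iceberg table; the checkpoint location lets
# the write resume consistently across restarts.
query = (
    events.writeStream.format("iceberg")
    .outputMode("append")
    .option("checkpointLocation", "s3://my-bucket/checkpoints/app_logs")
    .toTable("demo.db.app_logs")
)
query.awaitTermination()
```

For a one-off batch load from existing files (for example Parquet), the same append works without streaming: spark.read.parquet("s3://my-bucket/raw/").writeTo("demo.db.app_logs").append().
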
Position Details

POSTED:

Nov 22, 2023

EMPLOYMENT:

Full-time

SNAPRECRUIT ID:

S110230-9119-11152023-39210351

LOCATION:

Texas United States

CITY:

Austin

Job Origin:

CEIPAL_ORGANIC_FEED
