Onsite Job Opportunity: AWS Cloud Engineer

  • Posted on: Oct 14, 2024
  • Company: Donato Technologies Inc
  • Location: Plano, Texas
  • Salary: Not Available
  • Employment: Full-time

Job Title: Onsite Job Opportunity: AWS Cloud Engineer

Job Type: Full-time

Job Location: Plano, Texas, United States

Remote: No


Job Description

Donato Technologies, established in 2012, excels as a comprehensive IT service provider renowned for delivering an exceptional staffing experience and prioritizing the needs of both clients and employees. We specialize in staffing, consulting, software development, and training, catering to small and medium-sized enterprises. While our core strength lies in Information Technology, we also deeply understand and address the unique business requirements of our clients, leveraging IT to effectively meet those needs. Our commitment is to provide high-quality, customized solutions using the optimal combination of technologies.


Duties and responsibilities
  • Collaborate with the team to build out features for the data platform and consolidate data assets
  • Build, maintain, and optimize data pipelines built with Spark (a minimal sketch follows this list)
  • Advise, consult, and coach other data professionals on standards and practices
  • Work with the team to define company data assets
  • Migrate CMS' data platform into Chase's environment
  • Partner with business analysts and solutions architects to develop technical architectures for strategic enterprise projects and initiatives
  • Build libraries to standardize how we process data
  • Teach and learn continuously, recognizing that continuous learning is the cornerstone of every successful engineer
  • Maintain a solid understanding of AWS tools such as EMR and Glue, including their pros and cons, and convey that knowledge clearly
  • Implement automation on applicable processes
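As a minimal illustration of the Spark duty above, a pipeline of that shape might look like the following PySpark sketch. The bucket paths, column names, and "events" dataset are hypothetical placeholders, not details from this posting.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("events-etl").getOrCreate()

    # Read raw JSON events from object storage (path is illustrative).
    raw = spark.read.json("s3a://example-bucket/raw/events/")

    # Standardize timestamps, drop malformed rows, and deduplicate.
    clean = (
        raw.withColumn("event_ts", F.to_timestamp("event_ts"))
           .withColumn("event_date", F.to_date("event_ts"))
           .dropna(subset=["event_id", "event_ts"])
           .dropDuplicates(["event_id"])
    )

    # Write a partitioned, query-friendly copy for downstream consumers.
    (clean.write
          .mode("overwrite")
          .partitionBy("event_date")
          .parquet("s3a://example-bucket/curated/events/"))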

Mandatory Skills:
  • 5+ years of experience in a data engineering position
  • Proficiency in Python (or similar) and SQL
  • Strong experience building data pipelines with Spark
  • Strong verbal and written communication
  • Strong analytical and problem-solving skills
  • Experience with relational datastores, NoSQL datastores, and cloud object stores
  • Experience building data processing infrastructure in AWS
  • Bonus: Experience with infrastructure-as-code solutions, preferably Terraform
  • Bonus: Cloud certification
  • Bonus: Production experience with ACID-compliant formats such as Hudi, Iceberg, or Delta Lake (see the sketch after this list)
  • Bonus: Familiarity with data observability solutions and data governance frameworks
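For the ACID-format bonus item, a hedged sketch of writing a Delta Lake table from PySpark is shown below. It assumes the delta-spark package is installed and the session is configured for Delta; the bucket path is hypothetical.

    from pyspark.sql import SparkSession

    # Session config required by delta-spark (assumed installed).
    spark = (
        SparkSession.builder.appName("delta-demo")
        .config("spark.sql.extensions",
                "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog",
                "org.apache.spark.sql.delta.catalog.DeltaCatalog")
        .getOrCreate()
    )

    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

    # ACID append: concurrent readers see either the old snapshot or the
    # new one, never a partial write.
    df.write.format("delta").mode("append").save(
        "s3a://example-bucket/tables/demo")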
Requirements
  • Bachelor's degree in Computer Science, Programming, or a similar field is preferred
  • Must have the legal right to work in the USA

Job responsibilities

  • Your experience in public cloud migrations of complex systems, anticipating problems, and finding ways to mitigate risk will be key in leading numerous public cloud initiatives
  • Execute creative software solutions, design, development, and technical troubleshooting, with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
  • Own end-to-end platform issues, help provide solutions to platform build and performance issues on the AWS Cloud, and ensure the deliverables are bug-free
  • Drive, support, and deliver on a strategy to build broad use of Amazon's utility computing web services (e.g., EC2, S3, RDS, CloudFront, EFS, DynamoDB, CloudWatch, EKS, ECS, MFTS, ALB, NLB)
  • Design resilient, secure, and high-performing platforms in the public cloud using JPMC best practices
  • Measure and optimize system performance, with an eye toward pushing our capabilities forward, getting ahead of customer needs, and innovating to continually improve
  • Provide primary operational support and engineering for the public cloud platform; debug and optimize systems and automate routine tasks
  • Collaborate with a cross-functional team to develop real-world solutions and positive user experiences at every interaction
  • Drive game days, resiliency tests, and chaos engineering exercises
  • Utilize programming languages such as Java, Python, SQL, Node, Go, and Scala; open-source RDBMS and NoSQL databases; container orchestration services including Docker and Kubernetes; and a variety of AWS tools and services (an illustrative boto3 sketch follows this list)
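To make the AWS service list above concrete, here is an illustrative Python (boto3) sketch touching S3, DynamoDB, and CloudWatch. The bucket, table, and namespace names are hypothetical, and credentials are assumed to come from the environment.

    import boto3

    s3 = boto3.client("s3")
    dynamodb = boto3.resource("dynamodb")
    cloudwatch = boto3.client("cloudwatch")

    # Store a build artifact in S3.
    s3.put_object(Bucket="example-bucket", Key="builds/app.zip", Body=b"...")

    # Record deployment state in DynamoDB.
    table = dynamodb.Table("example-table")
    table.put_item(Item={"pk": "deploy#123", "status": "ok"})

    # Emit a custom CloudWatch metric for the deployment.
    cloudwatch.put_metric_data(
        Namespace="Example/Deployments",
        MetricData=[{"MetricName": "DeploySuccess",
                     "Value": 1.0, "Unit": "Count"}],
    )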
Required qualifications, capabilities, and skills

  • Formal training or certification on software engineering concepts and 10+ years of applied experience
  • Hands-on practical experience delivering system design, application development, testing, and operational stability
  • Advanced proficiency in one or more programming languages: Java, Python, Go
  • A strong understanding of business technology drivers and their impact on architecture design, performance and monitoring, and best practices
  • Experience designing and building web environments on AWS, including services like EC2, ALB, NLB, Aurora PostgreSQL, DynamoDB, EKS, ECS Fargate, MFTS, SQS/SNS, S3, and Route 53
  • Advanced in modern technologies such as Java 8+, Spring Boot, RESTful microservices, AWS or Cloud Foundry, and Kubernetes
  • Experience using DevOps tools in a cloud environment, such as Ansible, Artifactory, Docker, GitHub, Jenkins, Kubernetes, Maven, and SonarQube
  • Experience and knowledge of writing Infrastructure-as-Code (IaC) and Environment-as-Code (EaC) using tools like CloudFormation or Terraform (a hedged sketch follows this list)
  • Experience with high-volume, SLA-critical applications, and with building on messaging and/or event-driven architectures
  • Deep understanding of the financial industry and its IT systems
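One common way to express the IaC requirement in Python is the AWS CDK, which synthesizes CloudFormation templates. The sketch below is a hedged illustration, not the employer's actual tooling; the stack and bucket names are hypothetical.

    import aws_cdk as cdk
    from aws_cdk import aws_s3 as s3
    from constructs import Construct

    class DemoStack(cdk.Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            # A versioned, server-side-encrypted bucket, defined declaratively.
            s3.Bucket(
                self, "ArtifactBucket",
                versioned=True,
                encryption=s3.BucketEncryption.S3_MANAGED,
            )

    app = cdk.App()
    DemoStack(app, "demo-stack")
    app.synth()  # emits a CloudFormation template under cdk.out/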
Preferred qualifications, capabilities, and skills

  • Expert in one or more programming languages, preferably Java
  • AWS Associate-level certification in Developer, Solutions Architect, or DevOps
  • Experience building AWS infrastructure such as EKS, EC2, ECS, S3, DynamoDB, RDS, MFTS, Route 53, ALB, and NLB
  • Experience with high-volume, mission-critical applications, and with building on messaging and/or event-driven architectures using Apache Kafka (a minimal consumer sketch follows this list)
  • Experience with logging, observability, and monitoring tools, including Splunk, Datadog, Dynatrace, CloudWatch, and Grafana
  • Experience with automation and continuous delivery methods using shell scripts, Gradle, Maven, Jenkins, and Spinnaker
  • Experience with microservices architecture and with high-volume, SLA-critical applications and their interdependencies with other applications, microservices, and databases
  • Experience developing processes, tooling, and methods to help improve operational maturity
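For the Kafka item above, a minimal event-driven consumer using the kafka-python client might look like the sketch below; the broker address, topic, and group id are placeholders.

    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "orders",                            # hypothetical topic
        bootstrap_servers="localhost:9092",
        group_id="order-processors",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    for message in consumer:
        # Offsets are auto-committed by default, so a restarted consumer
        # in the same group resumes near where it left off.
        order = message.value
        print(f"processing order {order.get('id')}")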

Please share your resume at resumes@donatotech.net.

Position Details

Posted: Oct 14, 2024

Employment: Full-time

Salary: Not Available

Snaprecruit ID: SD-CIE-b5d175f3bc471e7b9f1c877f885e65850ab99ff8ac5187ea329a9386d1650fa1

City: Plano

Job Origin: CIEPAL_ORGANIC_FEED



