Cloud Software & Data Engineer Job at Schlumberger, Houston, TX

  • Schlumberger
  • Houston, TX

Job Description

The Cloud Software & Data Engineer develops data engineering applications using third-party and in-house frameworks, drawing on a broad set of development skills spanning data engineering and data accessibility. The role covers the complete software lifecycle - analysis, design, development, testing, implementation and support - as well as troubleshooting, deployment and upgrade of services and their associated data, performance tuning and other maintenance work. This cloud developer role focuses in particular on data engineering (large-scale data transformation and manipulation, ETL, etc.) and on fine-tuning infrastructure for optimization. The position reports to the software project manager.

Responsibilities
  • Work with subject matter experts to clarify requirements and use cases.
  • Turn requirements and user stories into functionality: design, build and maintain efficient, reusable and reliable code for high-quality software and services, with documentation and traceability.
  • Develop server-side services that are elastically scalable and secure by design to support high-volume, high-velocity data processing. Services should be backward and forward compatible to ease deployment.
  • Ensure the solution is deployable, operable, and secure.
  • Write and maintain provisioning, deployment, CI/CD and maintenance scripts for the services they develop.
  • Write unit tests, automation tests and data simulations.
  • Support, maintain, troubleshoot and fine-tune working cloud environments and the software running within them.
  • Build prototypes, products and systems that meet the project's quality standards and requirements.
  • Be an individual contributor, providing technical leadership and documentation to developers and stakeholders.
  • Provide timely corrective actions on all assigned defects and issues.
  • Contribute to the development plan by providing task estimates.
  • Fulfil organizational responsibilities (sharing knowledge and experience with other teams/groups).
  • Conduct technical trainings/sessions and write whitepapers, case studies, blogs, etc.

Requirements
  • Bachelor's degree or higher in Computer Science or a related field, with a minimum of 5 years of working experience.
  • 5+ years of software development experience in Big Data technologies (Spark Database & Data Lakes).
  • Experience with SQL and NoSQL databases and with data formats such as JSON, CSV and Parquet.
  • Most importantly: hands-on experience building scalable data pipelines using Python and PySpark.
  • Advanced knowledge of large-scale parallel computing engines (Spark) - provisioning, deployment, development of computing pipelines, operation and support with performance tuning (3y+).
  • Good programming experience with core Python.
  • Experience designing, building and maintaining data processing pipelines in Apache NiFi and Spark jobs.
  • Extensive knowledge of data structures, patterns and algorithms (5y+).
  • Expertise with several back-end development languages and their associated frameworks like Python (3y+).
  • In-depth knowledge of application, cloud networking and security as well as related development best-practices and patterns (3y+).
  • Advanced knowledge of containerization and virtualization (Kubernetes), as well as scaling clusters & debugging issues on high volume/velocity data jobs and best practices (3y+).
  • Good experience with Spark and Databricks on Kubernetes.
  • Cloud platform knowledge - Azure public cloud expertise (3y+).
  • Advanced knowledge of DevOps, CI/CD and cloud deployment practices (5y+).
  • Advanced skills in setting up and operating databases (relational and non-relational) (3y+).
  • Experienced in application profiling, bottleneck analysis and performance tuning.
  • Effective communication and cross functional skills.
  • Problem-solving skills; a team player who is adaptable and works quickly.
  • Prior experience working on highly Agile projects.
