Develop and manage scalable batch processing systems using tools in the Apache Hadoop ecosystem while optimizing data workflows and collaborating across teams. Strong focus on cloud solutions, particularly GCP.
About the Role
We are seeking a Senior Big Data Engineer with deep expertise in distributed systems, batch data processing, and large-scale data pipelines. The ideal candidate has strong hands-on experience with Oozie, Pig, the Apache Hadoop ecosystem, and programming proficiency in Java (preferred) or Python. This role requires a deep understanding of data structures and algorithms, along with a proven track record of writing production-grade code and building robust data workflows.
This is a fully remote position and requires an independent, self-driven engineer who thrives in complex technical environments and communicates effectively across teams.
Work Location: US-Remote, Canada-Remote
Key Responsibilities:
- Design and develop scalable batch processing systems using technologies like Hadoop, Oozie, Pig, Hive, MapReduce, and HBase, with hands-on coding in Java or Python (Java required).
- Lead Jira epics.
- Write clean, efficient, and production-ready code with a strong focus on data structures and algorithmic problem-solving applied to real-world data engineering tasks.
- Develop, manage, and optimize complex data workflows within the Apache Hadoop ecosystem, with a strong focus on Oozie orchestration and job scheduling.
- Leverage Google Cloud Platform (GCP) tools such as Dataproc, GCS, and Composer to build scalable and cloud-native big data solutions.
- Implement DevOps and automation best practices, including CI/CD pipelines, infrastructure as code (IaC), and performance tuning across distributed systems.
- Collaborate with cross-functional teams to ensure data pipeline reliability, code quality, and operational excellence in a remote-first environment.
Qualifications:
- Bachelor's degree in Computer Science, Software Engineering, or a related field of study.
- Experience with managed cloud services and understanding of cloud-based batch processing systems are critical.
- Ability to lead Jira epics is a must.
- Proficiency in Oozie, Airflow, MapReduce, and Java is a must-have.
- Strong programming skills in Java (particularly with Spark), Python, Pig, and SQL.
- Expertise in public cloud services, particularly in GCP.
- Proficiency in the Apache Hadoop ecosystem, including Oozie, Pig, Hive, and MapReduce.
- Familiarity with BigTable and Redis.
- Experience applying infrastructure and DevOps principles in daily work, using continuous integration and continuous deployment (CI/CD) tooling and Infrastructure as Code (IaC) such as Terraform to automate and improve development and release processes.
- Proven experience in engineering batch processing systems at scale.
Must Have:
- 5+ years of experience in customer-facing software/technology or consulting.
- 5+ years of experience with “on-premises to cloud” migrations or IT transformations.
- 5+ years of experience building and operating solutions on GCP
- Proficiency in Oozie and Pig
- Must be able to lead Jira epics
- Proficiency in Java or Python
The following information is required by pay transparency legislation in the following states: CA, CO, HI, NY, and WA. This information applies only to individuals working in these states.
- The anticipated starting pay range for Colorado is: $116,100 - $170,280.
- The anticipated starting pay range for the states of Hawaii and New York (not including NYC) is: $123,600 - $181,280.
- The anticipated starting pay range for California, New York City and Washington is: $135,300 - $198,440.
Unless already included in the posted pay range and based on eligibility, the role may include variable compensation in the form of bonus, commissions, or other discretionary payments. These discretionary payments are based on company and/or individual performance and may change at any time. Actual compensation is influenced by a wide array of factors including but not limited to skill set, level of experience, licenses and certifications, and specific work location. Information on benefits offered is here.
#LI-RL1
#LI-Remote
About Rackspace Technology
We are the multicloud solutions experts. We combine our expertise with the world’s leading technologies — across applications, data and security — to deliver end-to-end solutions. We have a proven record of advising customers based on their business challenges, designing solutions that scale, building and managing those solutions, and optimizing returns into the future. Named a best place to work, year after year according to Fortune, Forbes and Glassdoor, we attract and develop world-class talent. Join us on our mission to embrace technology, empower customers and deliver the future.
More on Rackspace Technology
Though we’re all different, Rackers thrive through our connection to a central goal: to be a valued member of a winning team on an inspiring mission. We bring our whole selves to work every day. And we embrace the notion that unique perspectives fuel innovation and enable us to best serve our customers and communities around the globe. We welcome you to apply today and want you to know that we are committed to offering equal employment opportunity without regard to age, color, disability, gender reassignment or identity or expression, genetic information, marital or civil partner status, pregnancy or maternity status, military or veteran status, nationality, ethnic or national origin, race, religion or belief, sexual orientation, or any legally protected characteristic. If you have a disability or special need that requires accommodation, please let us know.
Top Skills
Apache Hadoop
CI/CD
Composer
Dataproc
GCP
GCS
HBase
Hive
Java
MapReduce
Oozie
Pig
Python
SQL
Terraform