
Calix

Staff Software Engineer - Cloud Platform

Posted 15 Days Ago
Remote
Hiring Remotely in USA
136K-266K
Senior level
The GCP Looker Administrator will manage and optimize Looker instances on Google Cloud Platform, collaborating with data teams to ensure high availability and scalability while supporting BI initiatives and data governance policies.

Calix provides the cloud, software platforms, systems and services required for communications service providers to simplify their businesses, excite their subscribers and grow their value.

This is a remote-based position in the US.

We, the Cloud Platform Engineering team at Calix, are responsible for the platforms, tools, and CI/CD pipelines at Calix. Our mission is to enable Calix engineers to accelerate the delivery of world-class products while ensuring the high availability and scalability of those platforms.

We are seeking a skilled and experienced GCP Cloud Platform Engineer to join the Cloud Platform team. The ideal candidate will be responsible for managing, optimizing, and maintaining our Looker instance hosted on Google Cloud Platform (GCP). This role involves ensuring the smooth operation of Looker, supporting business intelligence (BI) initiatives, and enabling data-driven decision-making across the organization. The GCP Looker Administrator will work closely with data engineers, analysts, and business stakeholders to deliver scalable and efficient solutions.

We are looking for a GCP Cloud Platform Engineer to design, implement, and manage cloud infrastructure and data pipelines using Google Cloud Platform (GCP) services like Datastream, Dataflow, Apache Flink, Apache Spark, and Dataproc. The ideal candidate will have a strong background in DevOps practices, cloud infrastructure automation, and big data technologies. You will collaborate with data engineers, developers, and operations teams to ensure seamless deployment, monitoring, and optimization of data solutions.

Responsibilities: 

  • Design and implement cloud infrastructure using Infrastructure as Code (IaC) tools such as Terraform.
  • Automate provisioning and management of Dataproc clusters, Dataflow jobs, and other GCP resources.
  • Build and maintain CI/CD pipelines for deploying data pipelines, streaming applications, and cloud infrastructure.
  • Integrate tools like GitLab CI/CD or Cloud Build for automated testing and deployment.
  • Deploy and manage real-time and batch data pipelines using Dataflow, Datastream, and Apache Flink (a minimal pipeline sketch follows this list).
  • Ensure seamless integration of data pipelines with other GCP services like BigQuery, Cloud Storage, and Kafka or Pub/Sub.
  • Implement monitoring and alerting solutions using Cloud Monitoring, Cloud Logging, and Prometheus.
  • Monitor the performance, reliability, and cost of Dataproc clusters, Dataflow jobs, and streaming applications.
  • Optimize cloud infrastructure and data pipelines for performance, scalability, and cost-efficiency.
  • Implement security best practices for GCP resources, including IAM policies, encryption, and network security.
  • Ensure observability is an integral part of the infrastructure platforms and provides adequate visibility into their health, utilization, and cost.
  • Collaborate extensively with cross-functional teams to understand their requirements; educate them through documentation and training, and improve adoption of the platforms and tools.
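
To make the pipeline work above more concrete, the sketch below (referenced in the Dataflow item) shows a minimal streaming job written in Python with the Apache Beam SDK: it reads messages from a Pub/Sub subscription and appends rows to a BigQuery table on the Dataflow runner. The project, region, subscription, bucket, and table names are placeholders rather than actual Calix resources, and the code is an illustrative sketch, not a prescribed implementation.

    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def run():
        # Placeholder project, bucket, subscription, and table identifiers.
        options = PipelineOptions(
            streaming=True,
            runner="DataflowRunner",
            project="example-gcp-project",
            region="us-central1",
            temp_location="gs://example-bucket/tmp",
            save_main_session=True,  # ship module-level imports (json) to the workers
        )

        with beam.Pipeline(options=options) as pipeline:
            (
                pipeline
                | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                    subscription="projects/example-gcp-project/subscriptions/events-sub")
                | "ParseJson" >> beam.Map(lambda message: json.loads(message.decode("utf-8")))
                | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                    "example-gcp-project:analytics.events",
                    write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                    create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
                )
            )

    if __name__ == "__main__":
        run()

Running the same file with the DirectRunner and a bounded test source is a common way to validate the pipeline locally before deploying it to Dataflow.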

Qualifications: 

  • 7+ years of overall experience in DevOps, cloud engineering, or data engineering.
  • 3+ years of experience in DevOps, cloud engineering, or data engineering.
  • Proficiency in Google Cloud Platform (GCP) services, including Dataflow, Datastream, Dataproc, BigQuery, and Cloud Storage.
  • Strong experience with Apache Spark and Apache Flink for distributed data processing.
  • Knowledge of real-time data streaming technologies (e.g., Apache Kafka, Pub/Sub).
  • Familiarity with data orchestration tools like Apache Airflow or Cloud Composer.
  • Expertise in Infrastructure as Code (IaC) tools like Terraform or Cloud Deployment Manager.
  • Experience with CI/CD tools like Jenkins, GitLab CI/CD, or Cloud Build.
  • Knowledge of containerization and orchestration tools like Docker and Kubernetes.
  • Strong scripting skills for automation (e.g., Bash, Python).
  • Experience with monitoring tools like Cloud Monitoring, Prometheus, and Grafana (a small instrumentation sketch follows this list).
  • Familiarity with logging tools like Cloud Logging or ELK Stack.
  • Strong problem-solving and analytical skills.
  • Excellent communication and collaboration abilities.
  • Ability to work in a fast-paced, agile environment.
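
As a small illustration of the monitoring experience listed above (see the Prometheus item), the sketch below uses the Python prometheus_client library to expose custom metrics for a hypothetical pipeline worker. The metric names, the port, and the update loop are illustrative assumptions, with Grafana dashboards and alert rules left out for brevity.

    import random
    import time

    from prometheus_client import Counter, Gauge, start_http_server

    # Hypothetical metrics for a data-pipeline worker; names are illustrative.
    JOBS_PROCESSED = Counter(
        "pipeline_jobs_processed_total",
        "Total number of pipeline jobs processed",
    )
    QUEUE_DEPTH = Gauge(
        "pipeline_queue_depth",
        "Current number of jobs waiting in the queue",
    )

    if __name__ == "__main__":
        start_http_server(8000)  # serve /metrics on port 8000 for Prometheus to scrape
        while True:
            JOBS_PROCESSED.inc()                    # count one processed job per loop
            QUEUE_DEPTH.set(random.randint(0, 50))  # stand-in for a real queue-depth reading
            time.sleep(5)

A Prometheus scrape job pointed at this port makes the metrics available to Grafana dashboards and alerting rules, which is the usual way the tools in this list fit together.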

Compensation will vary based on geographical location (see below) within the United States. Individual pay is determined by the candidate's location of residence and multiple factors, including job-related skills, experience, and education.


There are different ranges applied to specific locations. The average base pay range (or OTE range for sales) in the U.S. for the position is listed below.

San Francisco Bay Area Only:

156,400.00 - 265,700.00 USD Annual

All Other Locations:

136,000.00 - 231,000.00 USD Annual

Top Skills

BigQuery
Cloud SQL
Data Studio
GCP
Grafana
JavaScript
Kafka
Kubernetes
Looker
Prometheus
Pub/Sub
Python
SQL
Terraform
